VDS vs. Cisco N1K

By Shudong Zhou, Sr. Staff Engineer, ESX Networking

I often get questions about the difference between VMware Distributed Switch (VDS) and Cisco Nexus 1000v (N1K). At a high level, VDS presents an integrated model where both network and VM deployment are managed from a single place, while N1K caters to organizations that have a separate networking group trained in the Cisco CLI. What I really want to talk about is the implementation architecture.

VDS

The VDS data plane is unique in that it is not a learning switch. Because all VMs are in software, the hypervisor knows the unicast and multicast addresses programmed into each virtual NIC, and VDS uses this authoritative information to forward packets. The beauty is in the simplicity. When a packet enters the data plane, if the destination MAC matches a local port, it is forwarded there; otherwise, it is sent to one of the uplinks. Since all forwarding decisions are local, scalability is essentially unlimited. VDS can span as many hosts as vCenter can manage and can span long distances, assuming you don't run into limitations in other parts of the virtualization stack.
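Here is a minimal sketch of the non-learning forwarding idea, written as a toy Python model with made-up port and uplink names. It only illustrates the scheme described above; it is not VMware's actual data path.

# Minimal sketch of non-learning forwarding: the table is built from the
# MAC addresses the hypervisor already knows for each vNIC, never learned
# from traffic. All names below are illustrative.
import random

class NonLearningSwitch:
    def __init__(self, uplinks):
        self.uplinks = uplinks        # e.g. ["vmnic0", "vmnic1"]
        self.port_table = {}          # MAC -> local vNIC port

    def register_vnic(self, mac, port):
        # Populated when a vNIC connects; the MAC is authoritative,
        # taken from the virtual hardware, not observed on the wire.
        self.port_table[mac] = port

    def forward(self, dst_mac):
        # Local match: deliver to the vNIC port. No match: hand it to any
        # uplink -- which one doesn't matter, since all uplinks of a VDS
        # must reach the same physical network.
        if dst_mac in self.port_table:
            return self.port_table[dst_mac]
        return random.choice(self.uplinks)

vds = NonLearningSwitch(uplinks=["vmnic0", "vmnic1"])
vds.register_vnic("00:50:56:aa:bb:01", "vm1.eth0")
print(vds.forward("00:50:56:aa:bb:01"))   # local delivery: vm1.eth0
print(vds.forward("00:50:56:cc:dd:02"))   # unknown here -> some uplink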

This simple scheme has a few consequences. Because VDS does not learn MAC addresses from traffic, it cannot automatically pick the right uplink for outbound packets. In the spirit of keeping it simple, we require that all uplinks connected to a VDS be connected to the same physical network (surprisingly, this isn't documented anywhere). This way, it doesn't matter which uplink packets are sent out of. If you have separate physical networks connected via different network adapters, you need to create one VDS for each physical network (different VLANs going through the same adapter don't count as separate physical networks).

Another consequence is that VDS can handle duplicate vNIC MAC addresses. The data plane never complains about duplicate MAC addresses; it simply forwards packets to all matching ports. In fact, the implementations of VMware Lab Manager and vCloud Director Networking Infrastructure take advantage of this.
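To see how tolerating duplicates falls out of the same design, here is an equally hypothetical sketch in which the port table maps a MAC to a list of ports, so a duplicate address just means the frame is delivered to every matching port.

# Hypothetical variation: MAC -> list of ports, so duplicates are harmless.
from collections import defaultdict

port_table = defaultdict(list)                      # MAC -> local ports
port_table["00:50:56:aa:bb:01"].append("vm1.eth0")
port_table["00:50:56:aa:bb:01"].append("vm2.eth0")  # duplicate MAC

def deliver(dst_mac, uplinks=("vmnic0",)):
    ports = port_table.get(dst_mac)
    # Every matching local port gets a copy; unknown MACs go to an uplink.
    return list(ports) if ports else [uplinks[0]]

print(deliver("00:50:56:aa:bb:01"))   # ['vm1.eth0', 'vm2.eth0']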

VDS as a product is far more complex than what I described above. You can find some details of the VDS implementation in a paper I wrote in the Dec. 2010 issue of OS Review. Unfortunately, the site asks you to buy the paper. I thought about posting a copy myself, but I'm not really sure about the copyright and legal stuff.

N1K

N1K is a hybrid of two implementations, one in software and one in hardware. When we started on VDS four years ago, Cisco formed a new group to implement a software switch in ESX. The project's code name was Swordfish. Since the migration of access ports into the hypervisor was inevitable, Cisco might as well claim a piece of the territory. Later on, Nuova approached us about the VN-Tag technology. The idea is more radical: all VM traffic is sent out to the physical switch with a tag identifying the vNIC port, and the physical switch does all the VM-to-VM packet forwarding. Effectively, the technology moves access ports back into the physical switch. To make it happen, Nuova needed our help to get the traffic out of the hypervisor. When Nuova was absorbed into Cisco, the two teams were merged into the same Cisco BU and the two switching schemes were combined into a single product: Cisco Nexus 1000v. When N1K is installed on UCS systems with the Palo adapter, the hardware switching module takes effect; otherwise, the software switching module is activated.

Swordfish

I don't have access to the Cisco switching code, so I can't offer more insight than what's publicly available. Swordfish is a learning switch. There is a controller, which can be installed as a VM or purchased in a hardware box (Nexus 1010). The controller must be up for the data plane to function, so you should deploy dual controllers to avoid a single point of failure. Swordfish provides a rich set of features commonly available in Cisco hardware switches, richer than what VDS offers.
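For contrast with the VDS sketch above, here is a minimal, purely illustrative model of a textbook learning switch: MAC addresses are learned from the source addresses of observed frames, and unknown destinations are flooded. This is the classic scheme in general, not Cisco's code.

# Textbook learning switch, for contrast; not Cisco's implementation.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                # learned: MAC -> ingress port

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port  # learn from the source address
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # known: unicast
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = LearningSwitch(ports=["p1", "p2", "p3"])
print(sw.receive("aa", "bb", "p1"))   # "bb" unknown -> flooded to p2, p3
print(sw.receive("bb", "aa", "p2"))   # "aa" was learned on p1 -> ['p1']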

Having a central controller provides some deployment flexibility. For example, you can enable PVLAN within N1K without any physical switch support. The PVLAN feature in VDS, in contrast, requires the same PVLAN map to be configured in the physical switches. On the other hand, the central controller can be a liability when it comes to scalability. The current N1K limit is 64 hosts. Spanning N1K over long distances could be a challenge as well.
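For readers unfamiliar with the term, a PVLAN map ties secondary VLANs (isolated or community) to a primary VLAN. The toy sketch below, with made-up VLAN IDs, only illustrates what such a map captures; with VDS the same map also has to live on the physical switches, while N1K can keep it inside the virtual switch.

# Toy PVLAN map with made-up VLAN IDs; isolated/community semantics are
# standard PVLAN behavior, shown here only to illustrate what the map holds.
pvlan_map = {
    "primary": 100,                 # promiscuous primary VLAN
    "secondary": {
        101: "isolated",            # ports talk only to promiscuous ports
        102: "community",           # ports talk to each other + promiscuous
    },
}

def can_talk(vlan_a, vlan_b):
    kinds = pvlan_map["secondary"]
    if vlan_a == vlan_b:
        return kinds.get(vlan_a) != "isolated"
    return pvlan_map["primary"] in (vlan_a, vlan_b)

print(can_talk(101, 101))   # False: isolated ports are kept apart
print(can_talk(102, 102))   # True: community ports may talk
print(can_talk(100, 101))   # True: the primary reaches everything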

VN-Tag

VN-Tag is a great technology. When coupled with passthrough, it takes the virtual switch completely out of the picture. However, hardware VN-Tag will cost more per VM, since a virtual port consumes only a small amount of physical memory. Furthermore, passthrough requires guest VM memory to be locked, which kills memory overcommit and hurts the consolidation ratio. So I think VN-Tag is a niche technology at best. It might make sense to run VN-Tag alongside Swordfish, with only I/O-intensive workloads put on VN-Tag.
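A back-of-the-envelope sketch of the consolidation point, using numbers I am making up purely for illustration:

# All figures below are assumptions for illustration, not measurements.
host_ram_gb = 96
vm_ram_gb = 4
overcommit_ratio = 1.5     # assumed gain from ballooning/page sharing/swap

# With overcommit, the host can carry more VM memory than it physically has.
vms_with_overcommit = int(host_ram_gb * overcommit_ratio / vm_ram_gb)

# With passthrough, every VM's memory is locked, so overcommit is off.
vms_with_passthrough = int(host_ram_gb / vm_ram_gb)

print(vms_with_overcommit, vms_with_passthrough)   # 36 vs. 24 VMs per host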

Moving forward, the question is which implementation provides a better foundation for multi-tenancy and scalability in the cloud environment. Only time will tell.