Virtual Networking Observations

By Shudong Zhou, Sr. Staff Engineer, ESX Networking

This is the first of several posts about VMware virtual networking infrastructure. The goal is to help readers gain a deeper technical understanding and appreciation of the VMware offering. This is my personal view and does not represent VMware’s official position.

I joined VMware in Oct. 2006, around the time when ESX 3.5 was about to be released. While VMware was growing rapidly at the time, Hyper-V was looming on the horizon. On the networking front, 10G NICs were ready to enter the market, and ESX was facing resistance from enterprise networking administrators. I joined the effort to tackle the issue of network management.

Virtual Switch

ESX 2 introduced a virtual switch to provide efficient traffic forwarding between VMs inside an ESX host and redundant connections to the physical network. The virtual switch was not popular among network admins, for several reasons:

  1. It’s a foreign concept introduced by a company not known for networking.
  2. The design requires that physical switch ports be configured in trunked (VLAN) mode. This was scary to many admins (and is still scary to some today).
  3. All the safety valves applied to access ports are no longer available. Suppose you set up a list of allowed MAC addresses: you would need to enter the MAC address of every VM that could potentially run on the host, which is practically impossible when VMs can migrate between hosts dynamically (a sketch of the problem follows this list).
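To make the problem concrete, here is a small Python sketch of why a static allowed-MAC list tied to a physical access port breaks down once VMs can move between hosts. It is purely illustrative; the AccessPort class, host names, and MAC addresses are made up and do not reflect any ESX or switch code.

```python
# Hypothetical model of port security with a static allowed-MAC list.
# Each physical access port only admits traffic from MACs it was told about.

class AccessPort:
    """A physical switch access port with a static allowed-MAC list."""
    def __init__(self, allowed_macs):
        self.allowed_macs = set(allowed_macs)

    def admits(self, mac):
        return mac in self.allowed_macs

# Each ESX host uplink is wired to one access port, configured ahead of time.
port_for_host = {
    "esx-01": AccessPort({"00:50:56:aa:00:01"}),  # only VM-A expected here
    "esx-02": AccessPort({"00:50:56:aa:00:02"}),  # only VM-B expected here
}

# VM-A vMotions from esx-01 to esx-02; its frames now arrive on esx-02's port.
vm_a_mac = "00:50:56:aa:00:01"
print(port_for_host["esx-02"].admits(vm_a_mac))  # False -> traffic is dropped
# The only workaround is listing every MAC of every VM that might ever run
# behind every port, which does not scale and defeats the purpose of the filter.
```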

Distributed Switch and Cisco N1K

We decided to do two things to address these challenges. One was to bring access switch features into the virtual switch; the other was to make a Cisco-branded virtual switch available in ESX. In designing the new beast, we made the following choices:

  • Since a VM can vMotion from one ESX host to another, access port features must move with the VM. This means ports should exist independently of ESX hosts. The obvious choice is to give each port its own identity (dvPort ID) and label the new switch “distributed” (a minimal model is sketched after this list).
  • We chose a centralized approach to implementation. The distributed switch is created and managed from vCenter. Information is pushed to ESX hosts on demand as VMs are deployed and moved around. Once deployed, the data plane (packet forwarding) works independently of vCenter.
  • There would be two implementations of the distributed switch, one provided by VMware and one by Cisco. The VMware implementation was called Distributed Virtual Switch (DVS) during development, was renamed vNetwork Distributed Switch (vDS) when released in vSphere 4.0, and is now called VMware Distributed Switch (VDS).
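Below is a rough Python sketch of this model. The names (DvPort, DistributedSwitch, Host) and data structures are invented for illustration; this is not the ESX or vCenter implementation, just a way to show ports that own their identity and policy, get pushed to a host on demand, and travel with the VM during vMotion.

```python
# Conceptual sketch of the distributed-switch model described above.
# All class names and fields are hypothetical, chosen only to illustrate
# the design: port identity lives in the switch, not in any single host.

from dataclasses import dataclass, field

@dataclass
class DvPort:
    port_id: int    # identity owned by the distributed switch, not a host
    policy: dict    # e.g. VLAN, allowed MAC, traffic shaping
    stats: dict = field(default_factory=dict)

@dataclass
class Host:
    name: str
    local_ports: dict = field(default_factory=dict)  # state pushed down on demand

class DistributedSwitch:
    """Control plane: owns dvPorts and pushes their state to hosts."""
    def __init__(self):
        self.ports = {}
        self.next_id = 1

    def create_port(self, policy):
        port = DvPort(self.next_id, policy)
        self.ports[port.port_id] = port
        self.next_id += 1
        return port.port_id

    def connect_vm(self, host, port_id):
        # Push the port, policy and all, to the host where the VM runs;
        # after that, the host's data plane forwards packets on its own.
        host.local_ports[port_id] = self.ports[port_id]

    def vmotion(self, src_host, dst_host, port_id):
        # The dvPort, its policy, and its stats move with the VM.
        dst_host.local_ports[port_id] = src_host.local_ports.pop(port_id)

dvs = DistributedSwitch()
esx1, esx2 = Host("esx-01"), Host("esx-02")
pid = dvs.create_port({"vlan": 10, "allowed_mac": "00:50:56:aa:00:01"})
dvs.connect_vm(esx1, pid)
dvs.vmotion(esx1, esx2, pid)
print(esx2.local_ports[pid].policy)  # the same policy follows the VM
```

The point of the sketch is that the port, not the host, is the unit of configuration: the control plane (vCenter in the real product) hands port state to whichever host needs it, while packet forwarding continues on the host even if the control plane is unreachable.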

Our effort clearly paid off, as VDS and Cisco N1K are now widely accepted by the industry. The concept of a “distributed” switch was further validated by Citrix with the release of the Citrix Distributed Virtual Switch in late 2010. In the meantime, my colleagues at VMware did a tremendous job in achieving 10G line rate with TSO, jumbo frames, and other offload technologies. And this was done without requiring passthrough or TOE! VMware survived Hyper-V and other competition.

Moving forward, we are looking to bring other network services currently in the physical network into the hypervisor. We are also looking to take advantage of 40G and 100G. If you like what we are doing and would like to join the effort, see job opportunities here.