
Monthly Archives: April 2009

Considerations for Maximizing UDP Performance

Continuing the network performance theme…

Boon Seong Ang, Senior Staff Engineer for VMkernel IO, wrote the following response to one of our SEs. The SE’s customer was experiencing some virtual network performance issues with UDP. 

Considerations for virtual network performance:

  1. vNIC types: do not use vlance. vlance is our oldest vNIC and does not have good performance. If a VM has a “Flexible” vNIC without VMware Tools installed, the vNIC will end up being vlance (pcnet32).
  2. In addition, for UDP, use either the e1000 vNIC or, with ESX 4, vmxnet3, so that you can configure a larger vNIC Rx ring size. Because UDP can be a lot more bursty (due to its lack of flow control), a larger Rx ring provides buffering/elasticity to better absorb the bursts. Both the e1000 vNIC and our new vmxnet3 allow resizing the vNIC’s Rx ring, up to roughly 1,000 to 2,000 buffers (see the sketch after this list for a quick way to check whether drops improve). As a side note, a larger ring has some negative performance impact due to its larger memory footprint. The new vmxnet3 vNIC is more efficient than the e1000 vNIC, and, in general, ESX 4 has some performance improvements over ESX 3.5.
  3. In the past, we have seen better networking performance with RHEL than with Fedora Core.
  4. If there are many more Virtual CPUs than Physical CPUs in a server, there will be contention for Physical CPU cycles. While a Virtual CPU of a VM waits for its turn to run, network Rx processing cannot happen in that VM, making network packet drops more likely. A larger Rx ring size may help, but it has its limits depending on the degree of over-commit.
  5. Also consider the number of Virtual CPUs in a VM. Having more Virtual CPUs may have some detrimental effects due to the added coordination needed between them. If a uniprocessor VM suffices, it sometimes performs better.
  6. Finally, if the customer can use the newest processors, e.g. Intel’s Nehalem (5500 series), the boost from hardware improvement is quite substantial.
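
If you want a quick way to tell whether a vNIC or Rx ring change is actually reducing drops, a crude end-to-end check is to blast a burst of datagrams at the VM and count how many arrive. Here is a minimal Python sketch of that idea (the port, burst size and payload size are arbitrary placeholders, and it measures total datagram loss rather than pinpointing where the drops occur, so treat it as a rough before/after comparison, not a benchmark):

    #!/usr/bin/env python
    # Crude UDP burst test: run "receiver" inside the VM, then run the sender
    # from another machine, and compare datagrams sent vs. received.
    import socket
    import sys
    import time

    PORT = 9999            # placeholder port
    BURST = 50000          # datagrams per burst
    SIZE = 1024            # payload size in bytes

    def receiver():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # Enlarge the socket buffer so the guest kernel is less likely to be the
        # first place drops occur (the OS may cap this, e.g. net.core.rmem_max).
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
        s.bind(("0.0.0.0", PORT))
        s.recv(SIZE)                  # block until the burst starts
        received = 1
        s.settimeout(2.0)             # then stop 2 seconds after the last datagram
        try:
            while True:
                s.recv(SIZE)
                received += 1
        except socket.timeout:
            pass
        print("received %d of %d datagrams" % (received, BURST))

    def sender(target):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"x" * SIZE
        start = time.time()
        for _ in range(BURST):
            s.sendto(payload, (target, PORT))
        print("sent %d datagrams in %.2f seconds" % (BURST, time.time() - start))

    if __name__ == "__main__":
        if len(sys.argv) < 2:
            sys.exit("usage: udpburst.py receiver | udpburst.py <receiver-ip>")
        if sys.argv[1] == "receiver":
            receiver()
        else:
            sender(sys.argv[1])

Run it before and after resizing the Rx ring (or switching vNIC types) and compare the received counts; the absolute numbers will vary with CPU load and the sender’s own buffering.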

Line Rate 10GigE

I blogged recently (or should I say, Mark Pariente did) about virtual switch performance. Howie Xu, our Director of R&D for VMkernel IO, remarked recently that, after talking with a few customers, many are still unaware we can achieve line rate 10GigE performance on ESX 3.5. I guess they haven’t read “10Gbps Networking Performance on ESX 3.5u1,” posted on our network technology resources page. It’s a good read if you are interested in network performance (and aren’t we all?).

The story only gets better with vSphere 4 and ESX 4 with the new Intel Nehalem processors. Initial tests from engineering show a staggering 30Gbps throughput. Stay tuned for more.

More on the vSphere 4 Launch

Today’s launch of vSphere 4 featured a veritable who’s who of the industry: Paul Maritz (CEO, VMware); Joe Tucci (Chairman, VMware, and CEO, EMC); John Chambers (President and CEO, Cisco); Michael Dell (Chairman and CEO, Dell); and Pat Gelsinger (SVP, Intel). Steve Herrod (CTO, VMware) demonstrated vSphere to the assembled masses.

I managed to reel off a few shots from my iPhone during the proceedings …

Efficiency, Control, Choice—that’s what you’re getting with vSphere 4

Some of those who made it all happen …

Cisco CEO, John Chambers, joins Paul Maritz on stage to thank the engineers responsible for the collaborative effort around the Cisco Nexus 1000V.

VMware CEO, Paul Maritz

vSphere 4 Launches!

It all happened today … the launch of vSphere 4. If you’ve been following the announcements beginning back at VMworld in September last year, you will know about many of the new virtual networking features incorporated under the banner of vNetwork in vSphere 4.

Just a few of these:

  • vNetwork Distributed Switching. You now have three virtual switching choices to fit your environment. And, you can run all three simultaneously if you really want to (so long as you dedicate NICs to each).
    • vNetwork Standard Switch—this is the same as the familiar vSwitch from ESX 3.5 and VI3.
    • vNetwork Distributed Switch—the virtual switch control plane moves to vCenter to create a consolidated abstraction of a single distributed virtual switch that spans multiple hosts. The vDS incorporates a number of additional features, such as Private VLANs, bidirectional traffic shaping and Network VMotion, and simplifies deployment, configuration and ongoing monitoring and troubleshooting. The vDS framework also provides third party virtual switch support and so is a prerequisite for the Cisco Nexus 1000V Series Virtual Switch.
    • Cisco Nexus 1000V—Cisco’s third party virtual switch implementation for vSphere. The N1k uses the same distributed virtual switch model as the vDS, but offers an extended Cisco Nexus/Catalyst feature set plus the familiar (if you’re a networking person) IOS CLI (command-line interface). You can, of course, still manage the Nexus 1000V through vCenter Server.

And there’s more:

  • VMXNET3—continuing the evolution of VMXNET and Enhanced VMXNET
  • IPv6—extending the IPv6 support for guest OSes introduced in ESX 3.5 to the vmkernel and service console interfaces.
  • VMDirectPath—enables direct control of PCI devices (such as NICs) from within a VM.

For more information, just head on over to the Resources section at vmware.com/go/networking

Virtual Switch Performance and Overhead

I’ve seen a few emails from our field fly by recently on the subject of virtual switch performance in ESX. It seems a few folks are operating under the impression that more vswitches equates to better performance (note: it doesn’t). Mark Pariente, one of our senior engineers, gave me some more detail. Here is his explanation:

It is a common misconception that increasing the number of virtual switches used in an ESX system allows for greater performance through more physical CPUs being utilized in parallel for driving the I/O. For example, instead of having a single virtual switch with two physical NICs connected to it, some customers choose to create a separate virtual switch for each physical NIC, in hopes of getting performance benefits.

In reality, network I/O processing in the vmkernel is not tied to the virtual switch in terms of the context of execution. The virtual switch is part of the larger networking stack that gets executed on packets as they travel through vmkernel.

For physical NIC receive traffic, the context of execution is tied closely to the hardware. In ESX 3.5 and earlier, the vmkernel networking stack runs from a bottom-half context armed through the receive interrupt. From ESX 4.0 onwards, the receive processing runs in the context of polling from a lightweight kernel thread, which is associated with the physical NIC.

Packets transmitted from virtual machines can be processed in a variety of contexts. The most notable is directly in the context of the transmitting entity, such as a virtual machine (VM) CPU. However, to reduce the number of exits from the VM as a performance optimization, transmitted packets can also be processed through opportunistic polling from other contexts, such as the receive thread.

None of these contexts are tied to the number of virtual switches in the system. Instead, they are associated with the traffic-generating entities, such as the physical NICs and VMs. Thus, there is no adverse performance effect from associating multiple physical NICs with a single virtual switch.

The bottom line here, of course, is that the number of virtual switches does not affect network performance. In most cases, one vSwitch with proper use of VLANs (VST mode) and port group overrides of NIC teaming policies is quite ample.

Which NIC is my VM using? Load Balancing Visibility with vSphere

One question that crops up from time to time is, “How do I determine which physical port is used for my VM traffic when I configure Originating Virtual Port ID or MAC Hash load balancing?”

When you configure a Port Group (or Distributed Virtual Port Group) with either of these load balancing algorithms, ESX hashes *each* virtual port or MAC address to *one* of the available vmnics in the NIC team. The idea is that the VMs are balanced over the available vmnics in the team. For example, if we had 20 VMs on a NIC team with four vmnics, we should end up with about five VMs per vmnic.
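
To make that behavior concrete, here is a simplified Python model of the idea. This is not ESX’s actual hashing algorithm, and the port IDs are made up, but it shows why balancing is per-VM rather than per-packet: each virtual port (or MAC address, with MAC Hash) maps to exactly one vmnic, so an individual VM sticks to one uplink while the population of VMs spreads across the team.

    # Simplified model of per-VM uplink selection -- NOT ESX's actual algorithm.
    # Each virtual port ID maps to exactly one vmnic in the team, so a single VM
    # always uses the same uplink while the set of VMs spreads across the team.
    vmnics = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]            # a four-NIC team
    vms = dict(("VM%02d" % i, 1000 + i) for i in range(20))      # VM -> port ID (made up)

    placement = {}
    for name, port_id in sorted(vms.items()):
        uplink = vmnics[port_id % len(vmnics)]    # "hash" the port ID to one uplink
        placement.setdefault(uplink, []).append(name)

    for uplink in vmnics:
        names = placement.get(uplink, [])
        print("%s: %d VMs (%s)" % (uplink, len(names), ", ".join(names)))

With 20 VMs over four vmnics, this toy model lands five on each uplink, matching the back-of-the-envelope math above. In practice the spread depends on which port IDs (or MAC addresses) you happen to get, and a given VM’s traffic is carried by a single uplink at any one time.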

With vSphere, we’ve enhanced esxtop to show which vmnic is used by each VM (or vmkernel and service console port). A screen capture is shown below. I used an explicit failover order for the service console (vswif0) and vmkernel ports (iSCSI, FT, VMotion) to ensure deterministic use of vmnic0 and vmnic2. The Port Groups supporting the VMs were configured with “Originating Virtual Port ID” load balancing over vmnic1 and vmnic3. As you can see, in my example, XP_VM1 and XP_VM3 hashed to vmnic1 and the others to vmnic3.

[Screen capture: esxtop network view showing the vmnic used by each port]

Note: this is included in vSphere. To use it, run "esxtop" from the ESX console and then type "n" to switch to the network view.

(btw, you can quickly get up to speed on the major new VMware vDS and Cisco Nexus 1000V features here)