

Networking Performance and Scaling in Multiple VMs

Last month we published a Tech Note summarizing networking throughput results using ESX Server 3.0.1 and XenEnterprise 3.2.0. Multiple NICs were used to achieve the maximum throughput possible in a single uniprocessor VM. While these results are very useful for evaluating the virtualization overhead of networking, a more common configuration is to spread the networking load across multiple VMs.

We present results for multi-VM networking in a new paper just published. Only a single 1 Gbps NIC is used per VM, but with up to four VMs running simultaneously. This simulates a consolidation scenario of several machines, each with substantial but not extreme networking I/O. Unlike the multi-NIC paper, there is no exact native analog, but we ran the same total load on an SMP native Windows machine for comparison.

The results are similar to the earlier ones: ESX stays close to native performance, achieving up to 3400 Mbps in the 4-VM case, while XenEnterprise peaks at 3 VMs and falls off to 62-69% of the ESX throughput with 4 VMs. According to the XenEnterprise documentation, only three physical NICs are supported in the host, even though the UI let us configure and run four physical NICs without error or warning. Given the performance, this limit is not surprising. We then tried a couple of experiments (such as making dom0 use more than one CPU) to relieve the bottleneck, but only succeeded in further reducing the throughput.

The virtualization layer in ESX is always SMP, and together with a battle-tested scheduler and support for 32 e1000 NICs, it scales to many heavily loaded VMs. Let us know if you’re able to reach the limits of ESX networking!

4 thoughts on “Networking Performance and Scaling in Multiple VMs”

  1. gaetano

    About networking performance: does the cap on concurrent half-open connections that was added to XP with SP2 have any effect on VMware guests, or is tcpip.sys completely bypassed?
    We have been running network scanners from virtual images, and I was wondering if the network stack of the host system could have any impact…
    [ Sorry for the off-topic question, but I could not find this info anywhere else on the net ]

  2. Randy Robertson

    VMware guests will suffer from any limitations that Microsoft imposes on native machines.
    Tcpip.sys is not bypassed.

  3. Jon Miller

    So if I have a server with multiple NICs in it, and let’s say I have 6 VMs running on that server at once, is there a way to configure it so that the load is spread between the NICs, or does one just get hammered?

  4. Jeff Buell

    The easiest thing to do is to create a NIC team by adding all (or some subset) of the physical NICs to one virtual switch and then attaching the 6 VMs to that switch. The load will not be perfectly balanced since each vNIC will be associated with one pNIC. So if you have 5 pNICs, then one of them will support 2 of the VMs, while the other 4 VMs will have a pNIC each. But you won’t have one pNIC getting hammered. You can also do this manually by putting each pNIC into its own virtual switch, and choosing which VMs have access to which virtual switches. This is recommended if you know one of the VMs has a heavier networking load than the others and you want it to have a dedicated pNIC.
    Load balancing of one vNIC across multiple pNICs is also possible but needs configuration at the physical switch, so this is not the default.
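    In case a concrete example helps, here is a rough sketch of the manual, per-pNIC setup from the ESX 3.x service console (the vSwitch, port group, and vmnic names below are only placeholders; adjust them to your host):

        esxcfg-vswitch -a vSwitch1                   # create a new virtual switch
        esxcfg-vswitch -L vmnic1 vSwitch1            # link one physical NIC to it as the uplink
        esxcfg-vswitch -A "VM Network 1" vSwitch1    # add the port group the VM's vNIC will attach to
        esxcfg-vswitch -l                            # list switches, uplinks, and port groups to verify

    Repeat this for each physical NIC to give each VM (or group of VMs) a dedicated uplink. Linking several vmnics to the same vSwitch with additional -L commands instead gives you the NIC team described above.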

