Product Announcements

Got Network I/O Control?

vSphere 4.1 launched today with a bunch of great new features and improved performance.

Network I/O Control (NetIOC) is the major networking feature enhancement. NetIOC is a must-have for anyone using or considering 10GigE.

Why NetIOC?

NetIOC enables you to guarantee service levels (bandwidth) for particular vSphere traffic types. Use it if you're concerned about iSCSI (or NFS) bandwidth or latency when a vMotion or some other activity fires up, if you want to protect your FT traffic from congestion, or of course, if you need to ensure your VM traffic meets minimum or required levels of service.

NetIOC can isolate and prioritize six traffic types:

  • VM traffic
  • FT Logging
  • iSCSI
  • NFS
  • Management
  • vMotion

Using the limits and shares parameters, you can tailor NetIOC precisely to the requirements of your environment.
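To give a feel for how shares and limits interact, here's a small Python sketch of the general idea. This is not VMware code and simplifies the real scheduler: it just divides a link's bandwidth among active traffic types in proportion to their shares, with any type that hits its configured limit capped there and the freed-up bandwidth redistributed among the rest. The function and its inputs (`allocate_bandwidth`, the Gbps-valued `limit` field, the example shares values) are all illustrative assumptions, not NetIOC's actual interface or defaults.

```python
def allocate_bandwidth(link_gbps, traffic):
    """Split link bandwidth among traffic types by shares, honoring limits.

    traffic: {name: {"shares": int, "limit": float or None}}
    (Illustrative model only -- not the real NetIOC scheduler.)
    """
    alloc = {}
    remaining = dict(traffic)
    capacity = link_gbps
    while remaining:
        total_shares = sum(t["shares"] for t in remaining.values())
        # Find types whose proportional share exceeds their configured limit.
        capped = {}
        for name, t in remaining.items():
            fair = capacity * t["shares"] / total_shares
            if t["limit"] is not None and t["limit"] < fair:
                capped[name] = t["limit"]
        if not capped:
            # No one is limit-bound: hand out strict proportional shares.
            for name, t in remaining.items():
                alloc[name] = capacity * t["shares"] / total_shares
            break
        # Pin capped types at their limit and redistribute the leftover.
        for name, gbps in capped.items():
            alloc[name] = gbps
            capacity -= gbps
            del remaining[name]
    return alloc

# Hypothetical setup: vMotion gets fewer shares, so under congestion
# iSCSI and VM traffic keep the lion's share of a 10GigE link.
demo = {
    "iSCSI":   {"shares": 100, "limit": None},
    "VM":      {"shares": 100, "limit": None},
    "vMotion": {"shares": 50,  "limit": None},
}
print(allocate_bandwidth(10.0, demo))
# iSCSI and VM each get 4 Gbps; vMotion gets the remaining 2 Gbps.
```

Note that shares only matter when the link is congested; an idle link lets any traffic type burst up to its limit (or line rate).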

NetIOC in Action

Sree from our performance engineering group ran a battery of benchmarks and tests to compare behavior with and without NetIOC. (The paper is posted here.) One test ran FT, NFS, and VM traffic concurrently, then observed what happens when a vMotion starts. You can see the effect in the diagram below. The aggregate bandwidth usage before the vMotion is ~4-5 Gbps. The vMotion (which with vSphere 4.1 can consume up to 8 Gbps) oversubscribes the 10GigE NIC, causing all traffic types to suffer. With NetIOC enabled and appropriate shares values configured, the critical traffic types are protected, and vMotion consumes what remains of the link bandwidth.


Here is another benchmark using NetIOC, this time with SPECweb2005. Without NetIOC, a vMotion causes 26% of the user sessions to fall below the service-level requirements.


New Network Features

Of course, we released a few other network features in vSphere 4.1. An overview is presented in the “What’s new in VMware vSphere 4.1: Virtual Networking” paper:

  • Network I/O Control (NetIOC)—see above
  • Load Based Teaming (LBT)—dynamic balancing of VM vNICs across a team according to load
  • IPv6 Enhancements—toward NIST “Host” Profile compliance
  • Improved VM-VM and vmkernel Network Performance—major performance improvements across the board (vMotion now tops 8Gbps!)

You can read all about NetIOC in this 26-page paper.