

Got Network I/O Control?

vSphere 4.1 launched today with a bunch of great new features and improved performance.

Network I/O Control (NetIOC) was the major feature enhancement for networking. NetIOC is a must-have for anyone using or considering 10GigE.

Why NetIOC?

NetIOC enables you to guarantee service levels (bandwidth) for particular vSphere traffic types. For example, you may be concerned about iSCSI (or NFS) bandwidth and latency when a vMotion or some other activity fires up, want to protect your FT traffic from congestion, or simply want to ensure your VM traffic meets minimum or required levels of service.

NetIOC can isolate and prioritize six traffic types:

  • VM traffic
  • FT Logging
  • iSCSI
  • NFS
  • Management
  • vMotion

Using the limits and shares parameters, you can tailor NetIOC precisely to the requirements of your environment.
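To make the shares and limits behavior concrete, here is a minimal Python sketch of a proportional-share allocator with per-type limits. This is our illustration of the model, not VMware's implementation; the traffic names, shares values, and demands are made up for the example.

```python
# Illustrative model of NetIOC shares + limits (our sketch, not VMware code).
# Under contention each traffic type gets bandwidth proportional to its
# shares, never more than its limit (if set) or its actual demand; bandwidth
# freed by satisfied or limited types is redistributed to the rest.

def allocate(link_gbps, flows):
    """flows: name -> {"shares": int, "limit": Gbps or None, "demand": Gbps}.
    Returns name -> allocated Gbps."""
    alloc = {name: 0.0 for name in flows}
    active = dict(flows)
    remaining = link_gbps
    while active and remaining > 1e-9:
        total_shares = sum(f["shares"] for f in active.values())
        # A type is "capped" if its demand/limit sits below its fair share.
        capped = {}
        for name, f in active.items():
            fair = remaining * f["shares"] / total_shares
            cap = f["demand"] if f["limit"] is None else min(f["demand"], f["limit"])
            if cap <= fair + 1e-9:
                capped[name] = cap
        if not capped:
            # Everyone wants more than their fair share: split proportionally.
            for name, f in active.items():
                alloc[name] = remaining * f["shares"] / total_shares
            break
        for name, cap in capped.items():
            alloc[name] = cap   # satisfy the capped types in full...
            remaining -= cap    # ...and give what they left to the rest
            del active[name]
    return alloc

# Roughly the contention scenario on a 10GigE link (numbers are made up):
# steady VM/NFS/FT traffic, then an 8 Gbps vMotion starts.
demands = {
    "VM":      {"shares": 100, "limit": None, "demand": 3.0},
    "NFS":     {"shares": 50,  "limit": None, "demand": 1.0},
    "FT":      {"shares": 50,  "limit": None, "demand": 0.5},
    "vMotion": {"shares": 50,  "limit": None, "demand": 8.0},
}
print(allocate(10.0, demands))
# VM, NFS, and FT keep their full demand; vMotion gets the leftover 5.5 Gbps
```

Note the work-conserving behavior: nothing is hard-partitioned, so when there is no contention a single traffic type can consume the entire link, yet under contention each type is guaranteed its shares-proportional minimum.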

NetIOC in Action

Sree from our performance engineering group ran a myriad of benchmarks and tests to see the effects with and without NetIOC. (The paper is posted here.) One test ran FT, NFS, and VM traffic together, then measured what happens when a vMotion starts. You can see the effect in the diagram below. The aggregate bandwidth usage before the vMotion is ~4-5 Gbps. The vMotion (which with vSphere 4.1 can consume up to 8 Gbps) oversubscribes the 10GigE NIC, causing all traffic types to suffer. With NetIOC enabled and appropriate shares values, the critical traffic types are protected, with vMotion consuming what remains of the link bandwidth.

[Figure: per-traffic-type throughput on a 10GigE link when a vMotion starts, with and without NetIOC]

Here is another benchmark using NetIOC, this time with the SPECweb2005 workload. In this instance, without NetIOC, a vMotion causes 26% of the user sessions to fall below the service-level requirements.

[Figure: SPECweb2005 sessions meeting service-level requirements during a vMotion, with and without NetIOC]

New Network Features

Of course, we released a few other network features in vSphere 4.1. An overview is presented in the “What’s new in VMware vSphere 4.1: Virtual Networking” paper on vmware.com:

  • Network I/O Control (NetIOC)—see above
  • Load Based Teaming (LBT)—dynamic balancing of VM vnics across a team according to load
  • IPv6 Enhancements—toward NIST “Host” Profile compliance
  • Improved VM-VM and vmkernel Network Performance—major performance improvements across the board (vMotion now tops 8Gbps!)
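Of the list above, Load Based Teaming is worth a quick illustration. Very roughly, and as our own simplification rather than the shipping algorithm (VMware describes the trigger as mean uplink utilization around 75% over a sampling window), LBT remaps a VM's vnic from a saturated uplink to a less-loaded member of the team:

```python
# Illustrative sketch of Load Based Teaming's rebalancing decision
# (our simplification, not the shipping algorithm): when an uplink's mean
# utilization crosses a saturation threshold, move one of its vnics to the
# least-loaded uplink in the team.

SATURATION = 0.75  # LBT reportedly triggers around 75% mean utilization

def rebalance(uplinks):
    """uplinks: name -> {"util": 0..1, "vnics": [vnic names]}.
    Returns one proposed move (vnic, src, dst), or None if balanced."""
    saturated = [n for n, u in uplinks.items()
                 if u["util"] > SATURATION and len(u["vnics"]) > 1]
    if not saturated:
        return None
    src = max(saturated, key=lambda n: uplinks[n]["util"])
    dst = min(uplinks, key=lambda n: uplinks[n]["util"])
    if src == dst:
        return None
    return (uplinks[src]["vnics"][-1], src, dst)  # move the last-mapped vnic

team = {
    "vmnic0": {"util": 0.90, "vnics": ["vm1", "vm2", "vm3"]},
    "vmnic1": {"util": 0.20, "vnics": ["vm4"]},
}
print(rebalance(team))  # → ('vm3', 'vmnic0', 'vmnic1')
```

Unlike the static hash-based teaming policies, the vnic-to-uplink mapping here reacts to actual load, so a busy uplink sheds traffic to idle team members.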

You can read all about NetIOC in this 26-page paper posted on vmware.com.

6 thoughts on “Got Network I/O Control?”

  1. Kendrick Coleman

    Wow, really awesome stuff. In the 1GbE world having redundant and isolated links was the easiest trick in the book to not worry about traffic congestion. Being able to put a QoS on different kinds of traffic is going to be a big part of the transition to 10GbE. Nice work and great feature! Is this available with all vSphere versions? What about a how-to guide and best practices?

  2. Guy Brunsdon

    The “everything” guide to NetIOC (technology background, best practices, how-to, etc.) will be up in the next day or so. I’ll blog when it’s up.
    The extra nice thing about using NetIOC versus the hard-partitioned NIC model is that you can guarantee a minimum when there is congestion/contention, and when there isn’t, that traffic flow can use *all* the available bandwidth.

  3. Pingback: VCAP5-DCA Objective 2.4 – Administer vNetwork Distributed Switch Settings » The world of Marc O'Polo - Blog

Comments are closed.