
Jumbo Frames in vSphere 4.0

In vSphere 4.0, we introduced support for jumbo frames on vmkernel interfaces with ESX 4.0 and ESXi 4.0. This meant you could use jumbo frames for iSCSI, NFS, FT, and vMotion in both releases. Unfortunately, we had a minor documentation bug that stated jumbo frames were not supported in ESXi. This has since been corrected. You can find the updated docs for ESXi 4.0 here and ESXi 4.0 U1 here.

What is a jumbo frame?

A jumbo frame is an Ethernet frame with a “payload” larger than 1500 bytes and up to roughly 9000 bytes. The maximum payload size is known as the MTU (Maximum Transmission Unit). The payload is what is carried in the frame, so a standard (non-jumbo) Ethernet frame with an MTU of 1500 can be up to 1522 bytes on the wire. The additional 22 bytes consist of the destination MAC address (6 bytes), source MAC address (6 bytes), optional 802.1Q VLAN header (4 bytes), EtherType (2 bytes), and the 4-byte CRC32 trailer.

9000 bytes is generally accepted as the maximum size for a jumbo frame; however, I’ve seen some Cisco switches support MTUs of up to 9216 bytes.

Why use jumbo frames?

It’s a case of getting maximum bang for the buck. Processing overhead is roughly proportional to the number of frames, so if you pack as much data as possible into each frame, you incur less overhead and get better top-end performance. For example, moving the same amount of data at an MTU of 9000 takes about one sixth as many frames as at an MTU of 1500. Most modern data-center-grade physical switches will switch at line rate right down to the minimum frame size of 64 bytes, so the main impact is seen at the source and destination systems.

Using Jumbo Frames

Everything in the end-to-end network path has to be capable of handling the frame size thrown at it. So if you enable jumbo frames (MTU = 9000) on ESX, you have to be sure that every physical switch and the other end(s) can handle frames of that size. Layer 2 switches will simply drop jumbo frames if they are not configured for them. L3 switches/routers can fragment larger frames into smaller ones for reassembly at the destination, but that can cause a huge performance hit. Don’t rely on IP fragmentation. If you’re going to use jumbo frames, make sure everything in the path is configured for them.
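A quick way to check the path from an ESX/ESXi host is to ping from a vmkernel interface with a large packet that is not allowed to fragment. Here is a rough sketch, assuming your build’s vmkping supports the -d (do not fragment) and -s (packet size) options, and using a placeholder destination address:

    vmkping -d -s 8972 10.0.0.50

The 8972-byte payload plus 20 bytes of IP header and 8 bytes of ICMP header adds up to a 9000-byte IP packet, so a successful reply means every hop passed the jumbo frame without fragmenting it.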

Enabling jumbo frames with ESX 4.0 and ESXi 4.0     

As stated above, you have to enable jumbo frames end-to-end. In ESX/ESXi do the following:

1. Enable jumbo frames on the virtual switch (set the MTU on the uplinks/physical NICs)

  • For a vSS (standard vSwitch) you need to use the vSphere CLI. For example, this command sets the MTU to 9000 bytes for the vSS named “vswitch0” (a fuller example with the remote connection options follows this list):
    vicfg-vswitch -m 9000 vswitch0
    Use “vicfg-vswitch -l” to list the vSwitches and their properties
  • For a vDS (vNetwork Distributed Switch), you can set the MTU via the vSphere Client UI. From the Networking inventory view, select the vDS and then “Edit Settings”. Set the “Maximum MTU” to the desired value (e.g. 9000 bytes for jumbo frames).
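    Since the vSphere CLI usually runs on a remote management station, you’ll also need the standard connection options. A minimal sketch, assuming a host named esx01.example.com and a vSS named “vswitch0” (both placeholders):

    # Set the vSS MTU to 9000 bytes
    vicfg-vswitch --server esx01.example.com --username root -m 9000 vswitch0
    # Confirm the change; the MTU column should now show 9000
    vicfg-vswitch --server esx01.example.com --username root -l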

2. Enable jumbo frames on the vmkernel ports

  • Use the esxcfg-vmknic command to delete and then re-add the vmkernel interface with an MTU of 9000 (see the example below). On ESXi, there seems to be a glitch in creating a vmkernel port on a vDS through the vSphere CLI, so the workaround is to create the vmkernel interface with an MTU of 9000 on a standard switch and then migrate it over to the vDS through the vSphere Client.

    You can get the status (name/address/mask/MAC addr/MTU) of the vmkernel interfaces via
    esxcfg-vmknic -l
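    As a rough sketch of the delete-and-re-add sequence on ESX (the port group name “VMkernel-iSCSI” and the IP settings are placeholders; from the remote vSphere CLI the equivalent command is vicfg-vmknic):

    # Remove the existing vmkernel interface from its port group
    esxcfg-vmknic -d VMkernel-iSCSI
    # Re-create it with the same IP settings and a 9000-byte MTU
    esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 -m 9000 VMkernel-iSCSI
    # Verify that the MTU column now shows 9000
    esxcfg-vmknic -l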

3. Enable jumbo frames on the physical switches

  • This will depend upon the make and model of switch, but remember to enable jumbo frames end-to-end for the traffic type in use; an illustrative Cisco example follows below.
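    The exact commands vary by vendor and platform, so check your switch documentation. As one illustration only, on many Cisco Catalyst switches running IOS the jumbo MTU is a global setting that takes effect after a reload:

    ! Raise the global jumbo MTU to 9000 bytes (Catalyst-style IOS)
    system mtu jumbo 9000
    ! Verify the configured values
    show system mtu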

To enable jumbo frames for guest VMs, use the Enhanced VMXNET or VMXNET3 virtual NICs and enable jumbo frames through the guest OS.
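For example, in a Linux guest you can raise the MTU from within the guest OS (the interface name eth0 is a placeholder; make the setting persistent via your distribution’s network configuration):

    # Set the virtual NIC’s MTU to 9000 bytes inside the guest
    ifconfig eth0 mtu 9000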