Cloud Pod Architecture and Cisco Nexus 1000V Bug

By Jeremy Wheeler

I once worked with a customer who owned two vBlocks across two data centers and ran the Cisco Nexus 1000V for the virtual networking component. They deployed VDI, and when we enabled Cloud Pod Architecture, global data replication worked great; however, all of the connection servers in the remote pod showed red or offline. I found that we could not telnet to the internal-pod or remote-pod connection servers over port 8472; all other ports were fine. VMware Support confirmed the issue was with the Nexus 1000V: a bug in the N1KV involving its TCP Checksum Offload function.

The specific ports in question are the following:

VMware View Port 8472 – The View Interpod API (VIPA) interpod communication channel runs on this port. View Connection Server instances use the VIPA interpod communication channel to launch new desktops, find existing desktops, and share health status data and other information.

Cisco Nexus 1000V Port 8472 – VXLAN. Cisco posted a bug report about traffic on port 8472 being dropped at the VEM for the N1KV: Cisco Bug CSCup55389 – Traffic to TCP port 8472 dropped on the VEM

The bug report identifies TCP checksum offloading as the root cause, with only port 8472 packets affected. If removing the N1KV isn’t an option, you can work around the bug by disabling TCP offloading on the connection servers.
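
Before touching adapter settings, it can help to confirm the symptom: port 8472 refusing connections while other ports work. A minimal Python sketch of that check is below; the hostname `remote-cs01` is a hypothetical remote-pod connection server name, and 443 stands in for any port that is known to work.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means the port completed the handshake."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "remote-cs01" is a hypothetical connection server in the remote pod.
for port in (443, 8472):  # 443 worked in our case; 8472 (VIPA) did not
    state = "open" if tcp_port_open("remote-cs01", port) else "blocked"
    print(f"port {port}: {state}")
```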

To Disable TCP Offloading

  • On the Windows server, open the Control Panel and select Network Settings > Change Adapter Settings.
    [Image: JWheeler Ethernet Adapter Properties 1]
  • Right-click each adapter (private and public), select Configure from the Networking menu, and then click the Advanced tab. The TCP offload settings are listed for the adapter.
    [Image: JWheeler Ethernet Adapter Properties 2]

I recommend disabling the following settings:

  • IPv4 Checksum Offload
  • Large Receive Offload (was not present for our vmxnet3 advanced configuration)
  • Large Send Offload
  • TCP Checksum Offload
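
The GUI steps above can also be scripted. The sketch below only builds the PowerShell `Set-NetAdapterAdvancedProperty` command lines rather than running them, because the exact display names vary by vmxnet3 driver version; treat the setting names and adapter names here as assumptions and verify them with `Get-NetAdapterAdvancedProperty` first.

```python
# Setting display names as they appeared in our case; names marked "assumed"
# may differ per driver version -- check Get-NetAdapterAdvancedProperty first.
OFFLOAD_SETTINGS = [
    "IPv4 Checksum Offload",
    "Large Receive Offload",          # was not present on our vmxnet3 adapters
    "Large Send Offload V2 (IPv4)",   # display name assumed; varies by driver
    "TCP Checksum Offload (IPv4)",    # display name assumed; varies by driver
]

def build_disable_commands(adapter_names):
    """Return one Set-NetAdapterAdvancedProperty command per adapter/setting."""
    return [
        f'Set-NetAdapterAdvancedProperty -Name "{adapter}" '
        f'-DisplayName "{setting}" -DisplayValue "Disabled"'
        for adapter in adapter_names
        for setting in OFFLOAD_SETTINGS
    ]

# "Ethernet0"/"Ethernet1" are hypothetical adapter names.
for cmd in build_disable_commands(["Ethernet0", "Ethernet1"]):
    print(cmd)
```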

You will need to do this on each vmxnet3 adapter on each connection server at both data centers. Once offloading was disabled (it did cause the NIC to blip briefly), we were able to telnet between the data centers on port 8472 again.
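
Once offloading is disabled everywhere, the same kind of connectivity test can be run in one pass across every connection server in both pods. A sketch, with hypothetical pod and server names:

```python
import socket

# Hypothetical connection-server names for the local and remote pods.
PODS = {
    "pod-a": ["cs01-dca", "cs02-dca"],
    "pod-b": ["cs01-dcb", "cs02-dcb"],
}
VIPA_PORT = 8472  # View Interpod API channel

def check_pods(pods, port=VIPA_PORT, timeout=3.0):
    """Map each server to True/False for TCP reachability on the given port."""
    results = {}
    for pod, servers in pods.items():
        for host in servers:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    results[host] = True
            except OSError:
                results[host] = False
    return results

for host, ok in check_pods(PODS).items():
    print(f"{host}: {'OK' if ok else 'UNREACHABLE on 8472'}")
```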

After making these adjustments, you should be able to log in to the View Admin portal and see green status for all remote connection servers. I have tested and validated this fix, and it works as intended. For more information, I recommend reading VMware KB 2055140, Understanding TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) in a VMware environment.


Jeremy Wheeler is an experienced senior consultant and architect for VMware’s Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight Manager. Jeremy has over 18 years of experience in the IT industry, a passion for technology, and thrives on educating customers. He has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V, and 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, Perl, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, he received VMware’s Spotlight Award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.