

Troubleshooting Network Teaming Problems with IP Hash

Last week, I shed some light on the challenges that having multiple VMkernel ports in the same subnet can cause. Today, I’ll be shifting gears back to troubleshooting common network teaming problems – more specifically with IP hash.

IP Hash load balancing is really nothing new, but unfortunately, it is very often misunderstood. To be able to effectively troubleshoot IP hash and etherchannel problems, it’s important to first understand how IP hash works and its inherent limitations.

From the vSphere 5.1 Networking Guide definition of IP Hash load balancing:

“Choose an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the hash”

This may seem like an overly concise definition, but it does indeed summarize what the host is doing in a network team utilizing IP hash load balancing. To expand on this a bit – the host will take the source and destination IP addresses in HEX format, perform an Xor operation on them, and then run the result through another calculation based on the number of uplinks in the team. The final result is a number between 0 and the number of uplinks minus one. So if you had four uplinks in an IP hash team, the result of each calculation would be a number between 0 and 3 – four possible outcomes, each associated with one NIC in the team. This result is commonly referred to as the interface index number.
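To make the selection logic concrete, here is a minimal Python sketch of the Xor-and-modulo math described above. This is not VMware’s actual code – just the arithmetic the host performs, with a helper name of my own choosing:

```python
# A minimal sketch of IP hash uplink selection: XOR the two 32-bit
# addresses, then take the result modulo the number of uplinks.
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, uplink_count: int) -> int:
    src = int(ipaddress.IPv4Address(src_ip))  # 32-bit integer form of the address
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % uplink_count         # interface index: 0 .. uplink_count - 1

# With four uplinks, every source/destination pair maps to an index between 0 and 3.
print(ip_hash_uplink("10.1.1.101", "10.1.1.1", 4))  # one of the four possible indexes
```

Note that the result depends only on the address pair and the uplink count, so a given conversation always hashes to the same NIC.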

You may also be wondering how non-IP traffic would be put through this calculation, since there wouldn’t be an IP header and thus no 32-bit source and destination IP addresses to hash. As per the definition from the networking guide: “For non-IP packets, whatever is at those offsets is used to compute the hash”. This essentially means that two 32-bit binary values will be taken from the frame/packet at the locations where the IP addresses would usually be found if it were an IP packet. It really doesn’t matter that these values are not actually IP addresses – so long as there is binary data in these locations, the calculation can still be applied to balance the traffic across the adapters in the team.

This behavior is very different from the default ‘Route based on the originating virtual port ID’ load balancing method. Instead of having a one-to-one mapping of virtual NICs to physical adapters in the host, any single virtual machine could potentially use any of the NICs in the team depending on the source and destination IP address.

It is this fact that makes IP Hash so desirable from a performance standpoint. It is the only load balancing type that could (in theory) allow any single virtual machine to utilize the bandwidth available on all network adapters in the team. If a virtual machine operates in a client/server type of role with a large number of independent machines accessing it (think hundreds of Outlook clients accessing an Exchange VM for example) the IP hash algorithm would provide a very good spread across all NICs in the team.

On the flipside, however, if you have a small number of IP addresses (or only one) doing most of the talking, you’ll find that the majority of the traffic may be favoring only one network adapter in the team. An example of this would be a database server that is accessed by only one application server. Since there is only one source and destination address pair, the hash calculation will always work out to the same index value and the host will only select one network adapter in the team to use.
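Both behaviors can be demonstrated with a small, entirely hypothetical Python sketch (the IP addresses and helper name below are made up for illustration): many client addresses hashing against one server spread across the team, while a single address pair always lands on the same NIC:

```python
# Hypothetical demo: hash spread for many clients vs. a single address pair.
import ipaddress
from collections import Counter

def uplink_index(src: str, dst: str, nics: int) -> int:
    # The XOR-and-modulo hash described in the post.
    return (int(ipaddress.IPv4Address(src)) ^ int(ipaddress.IPv4Address(dst))) % nics

server = "10.1.1.50"
clients = [f"10.1.2.{i}" for i in range(1, 201)]  # 200 hypothetical Outlook clients
spread = Counter(uplink_index(c, server, 4) for c in clients)
print(spread)  # traffic lands on all four interface indexes

# One app server talking to one database server: the hash is deterministic,
# so every packet of the conversation uses the same single index.
indexes = {uplink_index("10.1.1.60", "10.1.1.61", 4) for _ in range(10)}
print(indexes)  # a set with exactly one index, i.e. one NIC carries it all
```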

That’s how it works from the host’s perspective, but how does the return traffic know which adapter to take? As you probably expected – it’s not just the host that has some math to do, the physical switch also needs to run packets through the same algorithm to select the correct physical switch port.

For this load balancing type to work correctly, the physical switch ports must be configured as an etherchannel bond (also called a port channel, or a trunk in the HP world). This essentially allows the bonding of multiple network adapters and causes the switch to perform the same type of hash calculation, ensuring that it behaves in the exact same way for all traffic heading in the other direction. If the physical switch is not configured for an etherchannel, it’ll start getting confused very quickly as it’ll see the same MAC address sending traffic out of multiple ports, resulting in what’s referred to as ‘MAC flapping’. I won’t go into too much detail on this scenario, but needless to say, all sorts of undesirable things start to happen when it occurs.

IP Hash and Etherchannel Limitations

Now that we have covered how IP hash works, let’s briefly cover the limitations of this type of teaming setup. The potential performance benefits of IP hash may not always outweigh the flexibility that is lost, or the other limitations that will be imposed.

I’d definitely recommend reading KB article: ESX/ESXi host requirements for link aggregation (1001938), which discusses this in detail, but to summarize a few key limitations:

ESX/ESXi supports IP hash teaming on a single physical switch only: This one can be a real deal-breaker for some. Because etherchannel bonding is usually only supported on a single switch, it may not be possible to distribute the uplinks across multiple physical switches for redundancy. There are some exceptions to this rule, as some ‘stacked’ switches or modular switches with a common backplane support etherchannel across physical switches or modules. Cisco’s vPC (Virtual Port Channel) technology can also address this on supported switches. Consult with your hardware vendor for your options.

ESX/ESXi supports only 802.3ad link aggregation in ‘Static’ mode: This is also referred to as ‘Mode On’ in the Cisco world. This means that LACP (link aggregation control protocol) cannot be used. The only exception is with a vNetwork Distributed Switch in vSphere 5.1, and with the Cisco Nexus 1000V. If you are using vNetwork Standard Switches, you must use a static etherchannel.

Beacon Probing is not supported with IP hash: Only link status can be used as a failure detection method with an IP hash team. This may not be desirable in some situations.

An Example

Let’s walk through another fictitious example of a network teaming problem. Below is the problem definition we’ll be working off of today:

“I’m seeing very odd behavior from one of my hosts! Each virtual machine can’t ping different sets of devices on the physical network. If I try to ping a device, it may work from some virtual machines, but not from others. Connectivity is all over the place. I can’t make heads or tails of this! Help!”

Well this certainly sounds very odd – why are some devices accessible while others are not? And to make matters even more confusing, things that don’t work from one VM work fine from another and vice versa.

Let’s have a quick look at the vSwitch configuration on the host having a problem:

We’ve got three physical adapters associated with a simple virtual machine port group. Let’s have a look at the teaming configuration to determine the load balancing type employed:

And sure enough, Route based on IP hash is being employed as the load balancing method. With this very limited amount of information we can determine quite a bit about how the environment should be configured. We know the following:

  1. All three physical switch ports associated with vmnic1, vmnic2 and vmnic3 should be configured identically and should be part of a static etherchannel bond. Since this is a vNetwork Standard Switch, LACP should not be used.
  2. No VLAN tag is defined for the port group, so this tells us that we’re not employing VST (virtual switch tagging) and that the switch should not be configured for 802.1q VLAN trunking on the etherchannel bond. It should be configured for ‘access’ mode.
  3. The five virtual machines in the vSwitch will be balancing their traffic across all three network adapters based on source and destination IP addresses. The host will perform a mathematical calculation to determine which adapter to use.

To start our troubleshooting, we’ll first try to determine what works and what doesn’t. We’ll use two of the virtual machines, ubuntu1 and ubuntu2 for our testing today. We obtained a list of IP addresses and their ping results from our fictitious customer for these two virtual machines. Below is the result:

For ubuntu1:

10.1.1.1 Physical Router, default gateway Success
10.1.1.22 Physical Server, SQL database Success
10.1.1.23 Physical Server, SQL database Success
10.1.1.24 File Server on another ESXi host Fail
10.1.3.27 Physical Server, Primary Domain Controller Fail
10.1.3.28 Secondary Domain Controller on other ESXi host Success
10.1.1.101-106 All other VMs in the same port group Success

 

For ubuntu2:

10.1.1.1 Physical Router, default gateway Success
10.1.1.22 Physical Server, SQL database Success
10.1.1.23 Physical Server, SQL database Fail
10.1.1.24 File Server on another ESXi host Success
10.1.3.27 Physical Server, Primary Domain Controller Success
10.1.3.28 Secondary Domain Controller on other ESXi host Success
10.1.1.101-106 All other VMs in the same port group Success

 

Well that certainly seems odd doesn’t it? It’s pretty hard to find a pattern here and things really do seem to be “all over the place” as our fictitious customer claims. Neither of the VMs is having difficulty getting to the default gateway, but ubuntu1 can’t hit one of the VMs on another host and can’t hit a physical domain controller. Ubuntu2 can’t hit one of the physical SQL servers, but didn’t have any trouble reaching the other five machines we tried.

We can pretty much rule out the virtual machines in this situation as they can communicate with each other within the vSwitch just fine, and can also communicate with most of the devices upstream on the physical network as well. At this point, you may be suspecting that there is some kind of a problem with the teaming configuration, a physical switch problem or possibly something with one of the physical NICs on the host. All of these would be good guesses and would be in the correct direction here.

As I discussed in my blog post relating to NIC team troubleshooting last week, the esxtop utility can be very useful. Let’s have a look at the network view to see if it can lend any clues:

If you recall from my previous post when using ‘Route based on originating virtual port ID’, you’ll have a one-to-one mapping of virtual machines to physical network adapters when each VM has only a single virtual NIC. Not so with IP Hash. Instead, each virtual machine lists ‘all’ and the number of physical adapters it could be utilizing based on its calculation of the source and destination IP hash.

Well, that stinks. But we’re not quite ready to crawl to our network administration team in shame asking them to “check the switch”. Let’s take this a step further, get a better understanding of the problem and perhaps even impress them a bit in the process!

We can quite easily conclude that what we are dealing with here is an IP hash or an etherchannel problem. How do we know this? Very classic symptoms. Some source/destination IP combinations work just fine, while others don’t. Forget VLANs, forget subnet masks, routing – none of this matters in this situation. The same situation could present itself if you were communicating with something in the same network, or something 15 hops away on the other side of the world. The pattern that we are looking for here is actually the complete LACK of a pattern across the VMs. What we know is that we’re experiencing consistent difficulty reaching certain IP destinations, and it’s different from each VM we try in the vSwitch.

Unraveling the Mystery of IP Hash

Okay, so now what? Let’s think back to how IP hash works – it’s just a mathematical calculation. If the host can figure out which uplink to use, why can’t we do the same? Unfortunately, esxtop can’t help us here, but if we have the source and destination IP addresses as well as the number of physical uplink adapters that are link-up, we can determine the path the traffic will be taking.

I don’t want to scare anyone away here, but this does require some math. I’ll be the first to admit that I’m not the greatest with math, but if I can do it with the help of a calculator, then so can you. VMware has a great KB on how to do this calculation – KB: Troubleshooting IP-Hash outbound NIC selection (1007371), and I’d recommend giving it a read as well. I’ll walk you through this process for one of the source/destination address pairs and then we can see if a pattern finally emerges.

Let’s start with ubuntu1 and its default gateway. We know communication across this pair of addresses is working fine:

Source IP address: 10.1.1.101

Destination IP address: 10.1.1.1

Number of physical uplinks: 3

Let’s get started. The first thing we’ll need to do is to convert the IP addresses into HEX values for the calculation. There are some converters available that can do this in one simple step, but I’ll take the slightly longer road for educational purposes. First, let’s convert each of these IP addresses into their binary equivalents. This will need to be done for each of the four octets of the IP address separately. If you aren’t sharp with IP subnetting, or don’t really know how to convert from decimal values to binary, not to worry. Simply use the Windows Calculator in Programmer Mode and it can do the conversion for you.

Our IP address 10.1.1.101 – broken into its four independent octets – converts to binary as follows:

10  → 00001010
1   → 00000001
1   → 00000001
101 → 01100101

Pro-Tip: Remember, each of the octets of an IP address is expressed as an 8-bit value. Even if a binary value works out to fewer than 8 bits, you’ll need to prepend 0’s to ensure it’s always 8 bits in length. So a binary value of 1, for example, should be expressed as 00000001 for the purposes of our calculations today. The only exception is the first octet, because leading zeros at the very start of the full 32-bit value won’t change the number.

Next, we need to combine all four binary octets into a single 32-bit binary number. To do this, we simply append all of these octets together into the following:

00001010000000010000000101100101

Or if we drop the leading zeros at the beginning of the 32-bit value, we have:

1010000000010000000101100101

This number equates to 167,838,053 when converted to decimal format, but we’ll need the HEX value. To convert this, you can simply enter the long 32-bit binary number into Calculator in ‘Programmer’ mode, and then click the ‘Hex’ radio button. If you did it correctly, you should get the value:

0xA010165

Great, now we’ll do the same thing for our destination IP address, which is 10.1.1.1. After converting this to binary and then to HEX, we get the following value:

0xA010101

Pro-Tip: From here on out, I’ll always express hexadecimal values with a 0x prefix. This is so that there is no confusion between regular decimal numbers and hexadecimal numbers. You can usually identify a HEX value if it has an A, B, C, D, E or an F in it, but it could just as easily be made up of only numbers between 0 and 9. Our hexadecimal value of A010165 could also be represented as 0xA010165 – simply ensure your calculator is in HEX mode and drop the 0x when doing the upcoming calculations.
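If you’d rather not do the decimal-to-binary-to-HEX conversion by hand, the whole procedure above can be scripted. A short Python sketch (the helper name is mine, not from the KB):

```python
# Convert a dotted-quad IP address to the HEX form used by the IP hash
# calculation: pad each octet to 8 bits, join into one 32-bit value, then hex.
def ip_to_hex(ip: str) -> str:
    octets = [int(o) for o in ip.split(".")]
    bits = "".join(format(o, "08b") for o in octets)  # e.g. 00001010... for 10.x.x.x
    return hex(int(bits, 2))

print(ip_to_hex("10.1.1.101"))  # 0xa010165
print(ip_to_hex("10.1.1.1"))    # 0xa010101
```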

Now we’ve got everything we need to run this through the IP Hash calculation:

Source HEX address: 0xA010165

Destination HEX address: 0xA010101

Number of physical uplinks: 3

We run these values through the following formula, outlined in KB article 1007371:

(SourceHEX Xor DestHEX) Mod UplinkCount = InterfaceIndex

Even if you have no idea what Xor or Mod is, don’t worry – any scientific or programmer calculator can make quick work of this. In the Windows Calculator, simply go into programmer mode and the Xor and Mod functions are available. Don’t forget to ensure you are in HEX mode before entering in the values. Plugging our values into this equation, we get the following:

(0xA010165 Xor 0xA010101) Mod 3 = InterfaceIndex

Thinking back to high school math class, let’s calculate what’s in the brackets first, and we are left with the following:

0x64 Mod 3 = InterfaceIndex

And then we simply take Mod 3 of that value to calculate the interface index. Remember, because we have only three uplink adapters, a valid answer will be 0, 1 or 2. Our Xor result of 0x64 is 100 in decimal, and 100 Mod 3 leaves a remainder of 1:

InterfaceIndex = 1

And there you have it. That wasn’t so bad was it? From our calculation we were able to determine that communication between 10.1.1.101 and 10.1.1.1 will be using interface index 1, which is the second uplink in the team as this value starts at 0.
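If you’d rather let a computer do the arithmetic, the same hand calculation is only a couple of lines in Python:

```python
# Verifying the worked example: XOR the two HEX addresses, then Mod 3.
src = 0xA010165   # 10.1.1.101
dst = 0xA010101   # 10.1.1.1

xor_result = src ^ dst
print(hex(xor_result))   # 0x64 (100 in decimal)
print(xor_result % 3)    # interface index 1 – the second uplink in the team
```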

A New View of the Situation

I went ahead and calculated the remaining interface index values for the other source and destination IP address combinations for ubuntu1 and ubuntu2. Below is the result:

From ubuntu1:

Source IP Destination IP Ping Success? Interface Index
10.1.1.101 10.1.1.1 Success 1
10.1.1.101 10.1.1.22 Success 1
10.1.1.101 10.1.1.23 Success 0
10.1.1.101 10.1.1.24 Fail 2
10.1.1.101 10.1.3.27 Fail 2
10.1.1.101 10.1.3.28 Success 0

 

From ubuntu2:

Source IP Destination IP Ping Success? Interface Index
10.1.1.102 10.1.1.1 Success 1
10.1.1.102 10.1.1.22 Success 1
10.1.1.102 10.1.1.23 Fail 2
10.1.1.102 10.1.1.24 Success 0
10.1.1.102 10.1.3.27 Success 1
10.1.1.102 10.1.3.28 Success 1

 

Well now, I’d say we are seeing a very distinct pattern. The physical adapter or physical switch port associated with interface index 2 is no doubt having some kind of a problem. At this point, we can hold our heads up high, walk down the hall to our network administration team and let them know that we’ve clearly got an etherchannel problem. They may even be impressed when you tell them that you’ve narrowed it down to one of the three NICs in the team based on your IP hash calculations!
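For the curious, a short Python sketch (the helper name is my own) recomputes both tables in one pass and shows every failing destination landing on index 2:

```python
# Recompute the interface index for every source/destination pair in the
# two ping tables above, using the same XOR-and-modulo hash.
import ipaddress

def index(src: str, dst: str, nics: int = 3) -> int:
    return (int(ipaddress.IPv4Address(src)) ^ int(ipaddress.IPv4Address(dst))) % nics

targets = ["10.1.1.1", "10.1.1.22", "10.1.1.23", "10.1.1.24", "10.1.3.27", "10.1.3.28"]
results = {vm: [index(vm, t) for t in targets] for vm in ("10.1.1.101", "10.1.1.102")}
for vm, idxs in results.items():
    print(vm, idxs)
# 10.1.1.101 → [1, 1, 0, 2, 2, 0]; 10.1.1.102 → [1, 1, 2, 0, 1, 1]
# The Fail rows in both tables are exactly the ones that hash to index 2.
```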

After a quick look at the physical switch configuration, the network team provides you with the configuration of the port-channel and the three interfaces in question:

interface Port-channel8
switchport
switchport access vlan 10
switchport mode access
!
interface GigabitEthernet1/11
switchport
switchport access vlan 10
switchport mode access
channel-group 8 mode on
!
interface GigabitEthernet1/12
switchport
switchport access vlan 10
switchport mode access
channel-group 8 mode on
!
interface GigabitEthernet1/13
switchport
switchport trunk allowed vlan 99-110
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk native vlan 99

 

And there you have it. On the physical switch, port-channel 8 is made up of only two interfaces – not three. The third NIC in the team is associated with port GigabitEthernet1/13, which is an independent VLAN trunk port. Not only is it not part of the etherchannel, its VLAN configuration is way off. Remember – IP hash requires a consistent configuration on the physical switch. Uplink adapters cannot be added to or removed from the vSwitch without making the matching change to the etherchannel bond on the physical switch.

I hope you enjoyed this write-up and found it educational. For more information on the topics I’ve discussed today, I’d recommend reading the KB articles referenced above: ESX/ESXi host requirements for link aggregation (1001938) and Troubleshooting IP-Hash outbound NIC selection (1007371).

9 thoughts on “Troubleshooting Network Teaming Problems with IP Hash”

  1. Totie Bash

    Nice… I use IP hash paired with my Cisco 3750 and I am very happy with it. One thing: there was one event where the UPS that the 3750 is plugged into had a hiccup, and when it came back up the VMs were all confused. Half of the machines could get to a given VM and half couldn’t. A VM reboot did not work, but a vMotion to another host got the etherchannel gears working again. I could have done multiple things, but I was up against the clock. Looking back, one option was to leave one cat5 connected and briefly unplug the other cat5 and plug it back in. Also, instead of the reboot, I should have tried a reset as well.

  2. Bozo Popovic

    Hi Mike,

    Thanks for the help. One question I have is which versions of vSphere this is applicable to. I believe in 5.1 we have removed all the requirements except the last one, related to beacon probing, which will be removed soon.

    Thanks,
    Bozo

    1. Totie Bash

      5.1 at the time. It happened again last week on 5.1U1, on just 1 of the 15 VMs on that particular host. I had one of my admins vMotion it and that fixed it again. He was actually calling to get permission to restore it from VDP after toying with it for more than two hours, but I have a good memory and immediately recognized the symptom. This is only the second time in about 4 years of running that etherchannel setup, so for me that’s actually acceptable, and I kind of know what caused it, so it’s not fair to blame it on vSphere.

  3. RedRooster

    I’m not sure if this would work on the VMs, but in principle it should. I remember when a power outage caused a similar failure of my etherchannels on a 3750, I had to shut down and restart the etherchannel ports.

  4. Jason

    Excellent info. Can you provide some clarification about how to determine which vmnic corresponds to a given Interface Index number? Your example indicates that Interface Index 1 corresponds to the “second uplink” in the team, but which vmnic is that? Since there are only three NICs in your example, I assume that vmnic2 is the appropriate answer. What about Interface Index 0? Would that correspond to vmnic1 or vmnic3? What if there were more than 3 uplinks in a team? Does the system count the uplinks from top to bottom, bottom to top, or in some other fashion when determining which Interface Index number to assign to each uplink?

    Thanks,
    Jason

    1. Mike

      Hi Jason – I don’t have a lab environment where I can test this theory at the moment, but I believe the interface index will increment from zero based on the ‘Uplink Order’ determined for the vSwitch. For example, I created a vSwitch with four uplinks – vmnic2,3,4 and 5. According to ‘esxcfg-info’, the uplink order is:

      |—-Uplink Order……………………………vmnic2,vmnic3,vmnic4,vmnic5

      So in this case, interface index 0 is vmnic2, interface index 1 is vmnic3 and so on. In my experience, ‘esxcfg-vswitch -l’ will list the vmnics in the correct order from left to right for each vSwitch, but ‘esxcli network vswitch standard list’ and some other views in the vSphere Client will order them incorrectly. When in doubt, use ‘esxcfg-info’.

      Note: ‘esxcfg-info’ will provide a TON of information. You can pipe it to ‘less’ – esxcfg-info | less and search for lb_ip to jump to any vSwitch with IP Hash configured.

