
vSphere 5.1 – VDS New Features – Link Aggregation Control Protocol (LACP)

After the holiday break I am happy to be back and want to continue where I left off with the blog posts. Before I do that, let me first wish you all a Happy New Year! At the end of last year, I did a couple of posts providing technical details on the new vSphere Distributed Switch (VDS) features released as part of vSphere 5.1. In this post I will discuss the new Link Aggregation Control Protocol (LACP) feature, its configuration parameters, and the scenarios in which this teaming option provides better throughput and better utilization of the uplinks (physical NICs).

Link aggregation allows you to combine two or more physical NICs to provide higher bandwidth and redundancy between a host and a switch, or between two switches. Whenever you want to create a bigger pipe to carry traffic, or you want higher reliability, you can make use of this feature. However, it is important to note that the bandwidth gained by bundling physical NICs depends on the type of workloads you are running and on the hashing algorithm used to distribute the traffic across the aggregated NICs.

Broadly, there are two link aggregation approaches: static and dynamic. Static link aggregation is configured individually on the hosts or switches, and no automatic negotiation happens between the two endpoints. This approach does not detect cabling or configuration mistakes, nor switch port failures that don't result in loss of link status. Dynamic link aggregation, also called LACP, addresses these shortcomings of static link aggregation and thus provides a better operational experience by detecting configuration or link errors and automatically reconfiguring the aggregation channel. This is possible because of the heartbeat mechanism LACP maintains between the two endpoints.
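To make the heartbeat idea concrete, here is a minimal Python sketch of the keepalive logic, assuming the common fast-rate timers (an LACPDU every second, with the peer declared dead after three missed PDUs). It is a simplified illustration, not the real 802.1AX state machine, and the class and function names are my own.

```python
import time

# Simplified illustration of LACP's keepalive idea: if no LACPDU arrives on a
# member link within the timeout window, the link is pulled out of the
# aggregate even though its physical link status may still be "up".
# Illustrative only -- not the actual 802.1AX state machine.

LACP_FAST_INTERVAL = 1                  # seconds between LACPDUs (fast rate)
LACP_TIMEOUT = 3 * LACP_FAST_INTERVAL   # peer considered gone after 3 missed PDUs


class LagMember:
    def __init__(self, name: str):
        self.name = name
        self.last_pdu = time.monotonic()
        self.active = True

    def receive_lacpdu(self) -> None:
        """Called whenever an LACPDU arrives from the peer on this link."""
        self.last_pdu = time.monotonic()
        self.active = True

    def check_timeout(self) -> bool:
        """Drop the link from the aggregate if the peer has gone silent."""
        if time.monotonic() - self.last_pdu > LACP_TIMEOUT:
            self.active = False
        return self.active


def links_in_aggregate(members):
    """Only links with a live LACP exchange keep carrying traffic."""
    return [m for m in members if m.check_timeout()]
```

Static link aggregation has no equivalent exchange, which is why a miscabled or misconfigured port can keep receiving traffic as long as its link light stays on.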

With previous releases of vSphere, VMware supported the static link aggregation option, which worked with external physical switches that have similar capabilities. For example, when connecting to a Cisco switch, a static EtherChannel configuration is required on the physical switch. Another step users have to perform, on the virtual switch side, is to configure IP hash as the teaming algorithm on the port groups.

What does this IP hash configuration have to do with the link aggregation setup? The IP hash algorithm decides which physical NIC of the link aggregation group (LAG) each packet is sent over. For example, if you have two NICs in the LAG, the hash of the source/destination IP fields in the packet decides whether NIC1 or NIC2 is used to send the traffic. Thus, we rely on variation in the source/destination IP address fields of the packets, and on the hashing algorithm, to distribute the packets more evenly and hence utilize the links better.

It is clear that if you don't have any variation in the packet headers, you won't get better distribution. An example of this would be storage access to an NFS server from the virtual machines. On the other hand, if you have web servers as virtual machines, you might get better distribution across the LAG. Understanding the workloads or the type of traffic in your environment is an important factor to consider when you use the link aggregation approach, whether static or through the LACP feature.
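To illustrate the point, here is a rough Python sketch of hash-based uplink selection. The hash shown is a stand-in, not the exact algorithm ESXi implements, and the NIC names and IP addresses are made up; the takeaway is simply that the chosen uplink is a pure function of the IP pair.

```python
import ipaddress

# Illustrative only: a simplified IP-hash style uplink selection, not the
# exact hash ESXi uses. The chosen NIC is a pure function of the packet's
# source/destination IP pair.

UPLINKS = ["vmnic0", "vmnic1"]  # two NICs in the LAG (hypothetical names)


def select_uplink(src_ip: str, dst_ip: str) -> str:
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return UPLINKS[(src ^ dst) % len(UPLINKS)]


# One VM talking to one NFS server: every packet hashes the same way, so a
# single uplink carries all of the traffic no matter how much data moves.
print(select_uplink("10.0.0.21", "10.0.0.50"))

# A web server VM with many different clients: the varying client IPs spread
# the conversations across both uplinks.
for client in ("192.168.1.10", "192.168.1.11", "192.168.1.12", "192.168.1.13"):
    print(client, "->", select_uplink(client, "10.0.0.21"))
```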

In this release, to enable the LACP feature, users follow steps similar to the static EtherChannel configuration: enable the feature on the physical switch and then configure it on the VDS.

Let's now take a look at the LACP configuration steps on a VDS for a host with two 10 Gigabit uplinks, as shown in the figure below.

Example Design – Host with two 10 gig physical NICs

Part of the LACP configuration is performed at the uplink port group level, and the other part on the port group (yellow) shown in the figure above. At the uplink port group level, enable LACP in Active or Passive mode; then, on the port groups, select the IP hash algorithm as part of the teaming configuration. The following screenshots provide step-by-step instructions on how to configure the LACP feature:

1) Select the uplink port group under the vSphere Distributed Switch

Configuration Step 1

2) Click Edit to change the properties of the uplink port group

Configuration Step 2

3) Select LACP

Configuration Step 3

4) Enable LACP through the Status drop-down menu.

Configuration Step 4

5) Choose either Active or Passive mode for the enabled LACP.

Configuration Step 5

6) At this point, one part of the LACP configuration is complete. The other part is to configure IP hash on the port groups, as noted in the last uplink port group configuration step.

Configuration Step 6

7) Select the port group and click Edit to change the teaming configuration.

Configuration Step 7

8) Select Teaming and failover.

Configuration Step 8

9) Then, through the Load balancing drop-down menu, choose “Route based on IP hash”.

Configuration Step 9

You should repeat steps 6, 7, 8, and 9 for the other port groups in your deployment.

The following are some of the things you should note about the LACP feature.

1) Only one LACP LAG can be created per VDS per host. For users with two 10 Gigabit NIC deployments this is not an issue, but for those who have multiple 1 Gigabit NICs and want to create multiple LAGs this could be a limitation.

2) There is a limit of 32 uplinks in a LAG. I am sure this is more than enough.

3) Only one load balancing algorithm (IP hash) is supported.

4) You can't use the LAG as a destination for port mirror traffic, but you can choose one of the physical NICs within the LAG as a destination.

5) You should also follow the physical switch vendor's recommendations when configuring LACP.

Please let me know if you have any specific questions on this feature.

Get notified of these blog postings and more VMware Networking information by following me on Twitter: @VMWNetworking