In the previous blog, we investigated the basic feature set of NSX Load Balancing, some of the business reasons to use it, and deployed an ESG (Edge Services Gateway), the NSX load balancing platform. Today, we are going to set up our first virtual server. Load balancing operates at the transport layer of the OSI model and above, and is inclusive of the network layer. In the most basic terms, load balancing looks at a “session” at the transport layer and applies a load balancing algorithm and a NAT policy to the traffic. I put “session” in quotes because we can load balance both TCP- and UDP-based applications, even though UDP does not have a stateful session.
Whenever someone states that a given application cannot be load balanced, I first ask them whether the traffic can be processed by a NAT at either the client or server end. If the answer is yes, odds are that it can be load balanced, given sufficient understanding of the application and of the ports, protocols, and persistence required to make it function correctly. This is true of all load balancing platforms, but there are some corner cases where a very specific application proxy is required (SIP, for example), so engage your partner SE or VMware NSX SE for help. Ultimately, you need to combine knowledge of how the application functions with what it takes to bend the packets in the network to meet those requirements.
With NSX Load Balancing, we have two packet pipelines for load balancing.
- Accelerated Virtual Server, which supports TCP and UDP traffic and makes all decisions based on layer 4 and lower data. Accelerated virtual servers do not proxy the TCP connection, so these deployments support higher session concurrency and more transactions per second. Note that the only difference in configuring a TCP versus a UDP virtual server is the application profile, so we will not build one of each.
- TCP-proxy (aka Full Proxy Virtual Server), which supports TCP and TCP-based applications such as HTTP and SSL. These virtual servers are much more flexible than accelerated virtual servers, as they have the ability to use a traffic policy language, perform SSL offload, and parse HTTP. While they are more flexible, they consume greater compute resources because they proxy the TCP connection and parse higher-layer data.
The accelerated virtual server will be the focus of our investigation for the rest of this blog, and more information can be found in the NSX Administration Guide.
For the basic topology depicted below, we have a client, a NAT, and an application server. We are running a basic web application on the server, and we have created a Destination NAT (DNAT) to allow inbound connections to it. A DNAT simply looks at the IP header, makes the address translation per policy, and then updates the required fields in the packet. The response packet is processed through the same pipeline, with the IP addresses and checksums updated as well.
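To make the translation concrete, here is a minimal sketch of the DNAT behavior described above. The addresses and the three-field packet tuple are hypothetical placeholders for illustration, not the actual NSX implementation.

```python
# Minimal DNAT sketch: addresses and the (src, dst, dport) packet tuple are
# hypothetical placeholders, not the real NSX pipeline.

VIP = "192.0.2.10"      # externally reachable address (assumed)
SERVER = "10.0.0.20"    # internal application server (assumed)

def dnat_inbound(packet):
    """Rewrite the destination of an inbound packet from the VIP to the server."""
    src, dst, dport = packet
    if dst == VIP:
        dst = SERVER    # translate destination per the DNAT policy
    return (src, dst, dport)

def dnat_outbound(packet):
    """Reverse the translation on the response so the client sees the VIP."""
    src, dst, dport = packet
    if src == SERVER:
        src = VIP
    return (src, dst, dport)

inbound = dnat_inbound(("198.51.100.5", VIP, 80))
print(inbound)  # destination rewritten to the internal server address
```

The response path applies the inverse mapping, which is why the client only ever sees the VIP.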
In the more advanced topology depicted below, we have a client, a virtual server (which contains the DNAT), a monitor, and a pool with one application server. This is slightly more complex than the simple DNAT described above, because we added logic that also interrogates the transport layer (TCP/UDP) via an Application Profile.
In the next topology depicted below, we have a client, a virtual server, and a pool with three application servers. This example is more complex because, in addition to the DNAT and the application profile, we add a load balancing decision. By adding pool members, we increase the capacity of the application and provide high availability, though at the cost of additional complexity. Once you add pool members, it is important that you understand the persistence requirements of the application, as those requirements will not surface with a simple DNAT or a pool with a single member.
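The load balancing decision itself can be sketched in a few lines. Round robin is shown here as one common algorithm (NSX supports others as well); the server names are placeholders matching the three-member pool above.

```python
import itertools

# Conceptual sketch of the load balancing decision over a three-member pool.
# Round robin is one common algorithm; member names are placeholders.
pool = ["SERVER-1", "SERVER-2", "SERVER-3"]
rr = itertools.cycle(pool)

def pick_member():
    """Return the next pool member in round-robin order."""
    return next(rr)

choices = [pick_member() for _ in range(6)]
print(choices)  # each member receives every third connection
```

With no persistence configured, consecutive connections from the same client land on different members, which is exactly why persistence-sensitive applications break at this step.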
We will first build an accelerated virtual server on TCP port 80. Navigate to the vCenter Web Client → Networking & Security → NSX Edges and select the ESG that we had previously deployed.
First, we will assign an IP address to the uplink interface to use for the virtual server.
Next, we will create a pool of application servers. Navigate to Load Balancer → Pools and click on the green + sign. Give the pool a name, and for each member, assign a name, an IP address, and the port the application is running on. By default, the monitor will use the application port, but you can specify a custom port if necessary.
Next, we will create an application profile that does not have persistence. Navigate to Load Balancer → Application Profiles and click on the green + sign. Select the Type of “TCP” and leave Persistence set to “None”.
Finally, we will create an accelerated virtual server. Navigate to Load Balancer → Virtual Servers and click on the green + sign. Ensure the Enable Virtual Server and Enable Acceleration check boxes are selected. Next, select your Application Profile, assign a Name, select your IP Address and assign the Port and Default Pool.
Now that we have created a virtual server, we should generate some traffic to it and then look at the load balancing and related statistics. You can generate this traffic with a web browser or with tools like cURL, wget, or Apache Bench. In the screen shots below, I SSH into our example ESG and take a look.
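If you would rather script the traffic generation, something like the following works. To keep the example self-contained it fires requests at a throwaway local HTTP server; in practice you would point `base_url` at your virtual server's IP and port instead.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Stand-in backend so this example runs anywhere; replace with your VIP."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the console quiet

# Bind to an ephemeral local port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# In a real test, set base_url to "http://<your-VIP>:80/" instead.
base_url = "http://127.0.0.1:%d/" % server.server_address[1]
statuses = [urllib.request.urlopen(base_url).status for _ in range(10)]
server.shutdown()
print(statuses)  # ten HTTP 200 responses
```

Ten requests are enough to see the session counters move; bump the loop count for a more obvious spread across pool members.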
Looking at an Accelerated Virtual Server, you will see the IP address and the pool members that are bound to the virtual server.
After running some traffic, you will notice that each server in the pool will have statistics in the Session row.
Now we will look at the virtual server as a whole. Notice that the aggregate session statistics equal the pool session statistics.
Now we will update our profile with Source IP persistence, so that connections from the same client are consistently sent to the same backend server. Navigate to Load Balancer → Application Profiles, select the TCP profile you created earlier, and click on the pencil symbol. Select “Source IP” from the Persistence drop-down.
Now generate new traffic, and you will notice that all the traffic goes to only one of the backend servers from your client. In the screen shot below, I generated 100 new requests, and you will notice that they all persisted to SERVER-2.
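The behavior you just observed can be sketched as a deterministic mapping from client IP to pool member. This is only an illustration of the idea (NSX actually maintains a persistence table for Source IP persistence); the addresses and the byte-sum key are placeholders chosen to keep the example stable.

```python
# Simplified sketch of source-IP persistence: every connection from the same
# client IP maps to the same pool member. NSX maintains a persistence table;
# a stable hash of the address is used here only to illustrate the behavior.
pool = ["SERVER-1", "SERVER-2", "SERVER-3"]

def persist(client_ip):
    """Deterministically map a client IP to one pool member."""
    # Sum the octets rather than using hash(), which is randomized per process.
    key = sum(int(octet) for octet in client_ip.split("."))
    return pool[key % len(pool)]

picks = {persist("198.51.100.5") for _ in range(100)}
print(picks)  # 100 requests from one client all land on a single member
```

A different client IP can (and often will) map to a different member, so the pool is still balanced across the whole client population.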
Now that we have built a basic virtual server, investigated load balancing, and set up persistence, we have everything we need to establish a basic load balanced service that provides high availability and scalability. In our next installment, we will look at a full-proxy virtual server (one that proxies the TCP connection) so we can leverage application rules and inspect SSL or HTTP data.
Want to learn more about NSX load balancing before the next blog installment?