
In my conversations with customers and peers, load balancing is becoming an increasingly popular topic. Why, you may ask? Simple: load balancing is a critical component of most enterprise applications, providing both availability and scalability to the system. Over the last decade we have moved from bare-metal servers to virtual servers, and from manual deployment of operating systems to using tools like Chef, Puppet, vRA, or other custom workflows. In addition to the movement toward virtualization and the API being the new CLI, we are also seeing a movement to Network Functions Virtualization (NFV), where Virtualized Network Functions (VNFs) such as routing, VPN, firewalls, and load balancing are moving to software. The value of automation, SDN, and NFV has been proven in the largest networks today, and this migration to software has delivered tremendous ROI. Many companies want to leverage the same cost-effective models. To get us started, here are the most common questions:

  1. Does NSX provide load balancing? Yes, NSX has a feature set that addresses the most common deployment requirements for load balancing in enterprises today.
  2. Do you charge more for NSX Edge Load Balancing? No, NSX Load Balancing is enabled by the overall NSX license and is included in the NSX Advanced and Enterprise editions. In fact, our license model is not constrained by bandwidth or features. If you own NSX Advanced or Enterprise, all you have to do is enable the NSX Edge load balancing capability.
  3. Do I have to purchase licensing per load balancer instance? Absolutely not. You can deploy 1, 10, 100, or 1000. This is powerful when you consider how you deploy load balancers today and the future direction of microservices and DevOps. Historically, load balancers have been large, shared platforms that are intimately tied to many applications but not managed by the application teams. You can now choose to assign a load balancer to an application team without increased costs or worries about one team impacting another.
  4. I am happy with my current vendor; can I use them? Yes! You can continue to use the current products just like you do today.

So let’s dive in and take a look at the features that NSX load balancing provides.

In the OSI model, load balancing starts at the transport layer, where we support TCP, UDP, and TCP full-proxy virtual servers. Moving up the stack to the session layer, we support SSL and TLS, and finally, at the application layer, we support HTTP. Can you load balance IIS? Yes! How about SMTP or FTP? Yes and yes! Exchange? Yes! Horizon? Yes! You get the picture: if the application uses TCP or UDP, the answer is yes, and if you need SSL/TLS or HTTP support, we can do that too!

When it comes to load balancing, topology matters. Ultimately a load balancer intercepts, translates, and manipulates traffic. When load balancers are deployed, a decision is made to place them in-line or in what is referred to as one-armed or SNAT mode. In an in-line deployment, the load balancer is either the default gateway for the backend servers or is “naturally” in-line between the backend servers and the clients via the network topology.

NSX load balancing inline topology

In one-armed mode, the load balancer is deployed in parallel to the application servers, and the client traffic has to be translated to ensure that the load balancer sees all packets in the connection flow.

NSX load balancing one-armed topology

The good news is that NSX Load Balancing supports both in-line and one-armed modes; you can run both on the same system or choose to deploy a second or third system to meet a specific need.
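To make the one-armed translation concrete, here is a minimal Python sketch of the address rewriting involved. This is illustrative only, not NSX internals: the function name and all IP addresses are hypothetical.

```python
# Illustrative sketch of one-armed (SNAT) translation; names and
# addresses are hypothetical, not NSX internals.

def one_armed_translate(packet, lb_ip, server_ip):
    """Rewrite a client packet the way a one-armed load balancer would.

    DNAT: the destination (the virtual server IP) becomes the chosen
    backend server. SNAT: the source becomes the load balancer itself,
    which forces the server's reply back through the load balancer
    instead of letting it go directly to the client.
    """
    return {
        "src": lb_ip,                  # SNAT: backend sees the LB, not the client
        "dst": server_ip,              # DNAT: traffic goes to the chosen backend
        "orig_client": packet["src"],  # remembered so the LB can restore it
    }

client_packet = {"src": "10.0.0.5", "dst": "192.168.1.100"}  # dst = the VIP
translated = one_armed_translate(client_packet,
                                 lb_ip="192.168.1.10",
                                 server_ip="192.168.1.21")
# translated["src"] is now "192.168.1.10": the backend replies to the LB,
# and the LB restores the original client address on the way back.
```

Without the SNAT step, the backend would reply straight to the client, the client would see a response from an address it never contacted, and the connection would break.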

When you were in school, perhaps a few of you thought, “When am I going to use this math again?” Well, in load balancing, algorithms matter! There is always a question of which algorithms we support. The good news is that NSX Load Balancing supports Round Robin, Weighted Round Robin, Least Connections, Weighted Least Connections, and computational hashes.

NSX load balancing algorithms

Great! Now that we can spread the load over multiple servers, we may need to make sessions persist or “stick” to the application servers. And guess what – we do that too! We can persist on source IP address, use MSRDP, insert a cookie, parse a server-set cookie, use a hash, or use the SSL session ID. Chances are that we have the capability required to meet the application persistence requirements you need to deliver the service.
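Source-IP persistence is the simplest of these to illustrate: hashing the client address gives a deterministic backend choice, so the same client lands on the same server without any per-session state. A minimal Python sketch (illustrative only, not NSX's implementation):

```python
import hashlib

# Illustrative sketch of source-IP persistence via a stable hash
# (not NSX's implementation; server names are hypothetical).

def persist_by_source_ip(client_ip, servers):
    """Map a client IP to the same backend server on every request."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["app1", "app2", "app3"]
first = persist_by_source_ip("10.0.0.5", servers)
second = persist_by_source_ip("10.0.0.5", servers)
assert first == second  # same client, same server, every time
```

The trade-off is that hash-based persistence breaks when the pool membership changes, which is why cookie-based methods exist for HTTP traffic.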

To be enterprise grade, a load balancer needs a policy language or rule engine. NSX Load Balancing offers this too! We call them Application Rules, and with application rules you define criteria and actions. You can add headers, delete headers, rewrite headers, perform content switching, delay connections, and deny connections, just to name a few. If the data is exposed in the IP, TCP, UDP, SSL, or HTTP headers, we can see it, read it, and act on it.
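NSX application rules use an HAProxy-style syntax. As a hedged illustration (the pool and path names here are hypothetical, not from any particular deployment), a rule that content-switches API traffic to a dedicated pool and denies an administrative path might look like:

```
# Send any request whose path begins with /api to a dedicated pool
acl is_api path_beg /api
use_backend pool-api if is_api

# Deny requests to an administrative path outright
acl is_admin path_beg /admin
http-request deny if is_admin
```

The pattern is always the same: an `acl` line defines the criteria, and a following directive defines the action taken when it matches.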

Now that we understand the basics of NSX load balancing, let’s enable it. In your environment or in our free Hands-on Lab (HOL), navigate to vCenter Networking and Security –> Edge Services and deploy an Edge Services Gateway. Once the system is deployed, open the new ESG, navigate to Load Balancing, and enable it as shown below. Finally, navigate to the firewall and make sure it is enabled. In less time than it takes to get a cup of coffee, you can have a load balancer deployed. If infrastructure-as-clicks is not your thing, you can do all of this via the API! For detailed instructions on deploying an ESG please click HERE, and for detailed instructions on setting up Load Balancing on a deployed ESG please click HERE.
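For the API route, a sketch of the shape of the call: NSX for vSphere exposes the Edge load balancer configuration as a REST resource, and enabling it amounts to a PUT of the load balancer config with the enabled flag set. The manager address and edge ID below are placeholders; consult the NSX API guide for the full payload.

```
PUT https://<nsx-manager>/api/4.0/edges/<edge-id>/loadbalancer/config

<loadBalancer>
    <enabled>true</enabled>
</loadBalancer>
```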

Congratulations! You have installed a new load balancer that can be deployed in production, and you did not need to contact purchasing or wait for someone to rack it and plug it in.

In our next installment, we will look at basic load balancing at L4, what we can do with NSX and how to configure it.

Want to learn more about NSX load balancing before the next blog installment?