Since VMware acquired Avi Networks last year, its suite of cloud solutions has become more comprehensive than ever. Our Senior Cloud Specialist, Vern Bolinius, recently deployed an Avi Vantage load balancing service for a three-tier app using VMware Cloud on AWS. He describes the process step-by-step below…
Table of Contents
Overview
Building the Solution
Workload VMs
Avi Vantage Configuration
Avi Vantage Service Engine
Avi Virtual Service
Testing the Solution
Overview
Avi Networks provides software-defined application services across multi-cloud platforms. Avi Vantage, the company’s flagship product, delivers load balancing, web application firewall, and service mesh capabilities to any application, along with metrics on service health and performance.
VMware purchased Avi Networks in mid-2019, further enriching its growing portfolio of cloud solutions.
I recently had the opportunity to deploy an Avi Vantage load balancing service for a traditional three-tier application in VMware Cloud on AWS, leveraging an existing on-cloud Vantage environment. I was impressed by the intuitive interface and ease of deployment; however, I also encountered a surprising gotcha.
Here I’ll share a summary of the process, hoping it will be useful should you require a similar setup. I plan to extend the configuration and leverage Avi Vantage for a cross-site on-prem-to-VMware Cloud on AWS failover, which I’ll cover in a future article.
First, you’ll need a bit of background on the Avi Vantage components. Avi Vantage uses a Controller + Service Engine model. The Controller is normally deployed as a 3-node cluster of VMs and is the central point of management for the Vantage infrastructure. Service Engines are individual VMs in the data plane that provide the desired virtual services. From the Avi documentation:
The Service Engines host the Virtual Service, in our case a Load Balancer. A Virtual IP (VIP) is assigned to a Service Engine, which listens for traffic on a designated port and forwards it to a Server Pool. Here’s a closer look:
Armed with the above Avi Vantage primer, let’s look at what we are trying to accomplish. We want to deploy a very simple three-tier application in VMware Cloud on AWS with Web, App, and DB layers, and we would like to load balance across two Web servers. The final solution should look like this:
Building the Solution
In the Desired Solution diagram, I show six VMs. Four are the workload VMs, sitting on the WEB, APP and DB networks. Two are the Avi Vantage components, the Controller and Service Engine.
Workload VMs
The workload VMs are simple Ubuntu Linux machines, deployed as OVAs. Nothing special there, but we should note the output from a successful connection to the web servers. When connecting to WEB-01, we get:
Connecting to WEB-02 gives:
We want to create a Virtual Service that uses a Virtual IP (VIP) to load balance between these two web servers.
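If you’d rather script this check than use a browser, a minimal Python sketch like the one below can hit each web server directly. The IP addresses are placeholders for WEB-01 and WEB-02, and the printed page content is whatever your app returns; adjust both for your environment.

import requests

# Quick sanity check of each web server before any load balancing is in place.
# The addresses below are placeholders for WEB-01 and WEB-02; substitute your own.
for name, ip in [("WEB-01", "172.30.111.11"), ("WEB-02", "172.30.111.12")]:
    resp = requests.get(f"http://{ip}/", timeout=5)
    print(name, resp.status_code)
    print(resp.text[:200])  # first part of the page, which identifies the responding server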
Avi Vantage Configuration
For this scenario, we will not focus on the deployment of the Avi Vantage Controller. Instead, we will log into an existing Controller and create a Virtual Service.
Avi Vantage Service Engine
We first need to deploy a Service Engine that will be used in our Virtual Service. In VMware Cloud on AWS, the Service Engine needs to be manually deployed as an OVA (on other platforms, deployment can be done automatically by the Controller). It will have multiple vNICs and requires connectivity to the Vantage Controller as well as to the servers that will be used for load balancing.
The OVA can be downloaded by logging into the Controller.
- Log into the Avi Controller
- Navigate to the “Infrastructure” menu, select the cloud platform to which you will deploy, and click the download icon:
- While the image is downloading, click on the key icon:
- Copy the Cluster UUID as well as the token to a text document. You’ll use them when you deploy the template. Note that the cluster UUID includes the word ‘cluster’:
- Once the se.ova image has been downloaded, deploy it into VMware Cloud on AWS through the VMware Cloud on AWS vCenter. Take note of the network mappings:
Management: This vNIC needs to be able to talk to the Avi Controller
Data Networks 1: This vNIC is the Service Engine’s leg into the servers that will be used for load balancing; in this case, the two Web servers that sit on the WEB tier.
Note: There can also be a VIP network that will host the IP address of the VIP for the load balancing. In this deployment the VIP will be in the same network as the servers for load balancing, so we don’t need a separate vNIC.
- “Customize template” is where you will enter the Cluster UUID and token you captured above:
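As a side note, the wizard-driven deployment above can also be scripted. The sketch below drives ovftool from Python; the network names, the OVF property keys (AVICNTRL, AVICNTRL_AUTHTOKEN, AVICLUSTER_UUID), and the vi:// target are assumptions based on a typical se.ova, so inspect your downloaded image with ovftool se.ova and adjust before using it.

import subprocess

# Sketch: scripted deployment of se.ova with ovftool, driven from Python.
# Network names, OVF property keys, and the vi:// target are assumptions --
# run "ovftool se.ova" against your image to confirm them first.
cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=avi-se-01",
    "--datastore=WorkloadDatastore",
    "--net:Management=avi-mgmt",              # vNIC that must reach the Avi Controller
    "--net:Data Network 1=web-tier",          # vNIC into the WEB tier
    "--prop:AVICNTRL=<controller-ip>",        # assumed property key for the Controller address
    "--prop:AVICNTRL_AUTHTOKEN=<token>",      # token copied from the key icon
    "--prop:AVICLUSTER_UUID=cluster-<uuid>",  # note the 'cluster' prefix
    "se.ova",
    "vi://cloudadmin%40vmc.local@<vcenter-fqdn>/SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool",
]
subprocess.run(cmd, check=True)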
Gotcha – Watch out for this:
In the template deployment above, we specified the Management vNIC and the Data Networks 1 vNIC. When the VM is deployed and powered on, the Service Engine does not enumerate the vNICs in the expected order:
eth0 – This will be the Management vNIC (as expected)
eth5 – This will be the Data Networks 1 vNIC (not eth1 as might be expected)
In the Service Engine, it’s important to ensure that the IP address for the Data Networks 1 vNIC is assigned to eth5, not eth1:
- Log into the Avi Controller
- In the “Infrastructure” menu, select “Service Engine” and edit the newly created Service Engine:
- If required, change the IP address for eth1 to DHCP and set the IP address for eth5 to the Data Networks 1 IP:
- You can verify that eth5 is the correct interface by confirming that its MAC address matches the MAC address of the Data Networks 1 vNIC on the VM in vCenter. Above, the MAC address of eth5 is 00:50:56:a5:3a:bb. In vCenter:
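If you prefer not to click through vCenter for the MAC address, a short pyVmomi sketch along these lines can list the vNIC labels and MACs of the Service Engine VM; the VM name, vCenter FQDN, and credentials are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# List the vNIC labels and MAC addresses of the Service Engine VM in vCenter
# so they can be matched against eth0/eth5 in the Controller UI.
ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host="<vcenter-fqdn>", user="cloudadmin@vmc.local",
                  pwd="<password>", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    se_vm = next(vm for vm in view.view if vm.name == "avi-se-01")  # placeholder VM name
    for dev in se_vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            print(dev.deviceInfo.label, dev.macAddress)
finally:
    Disconnect(si)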
Avi Virtual Service
In Avi terminology, the load balancer we want to create is a “Virtual Service”. Let’s create this service.
- Log into the Avi Controller
- Under “Applications”, select “Create Virtual Service” then “Basic Setup”
- Select the Cloud. Mine is “VMC”:
- Give the Virtual Service a name and add the servers across which it will load balance:
- All other fields can be left at their default values for this simple load balancer. Click “Save”.
There is one small tweak I wanted to make. By default, the load balancing policy is set to “Least Connections”. For demo purposes, I’d rather use “Round Robin”.
- In the Avi Controller, select “Applications” and then “Pools”. Note that a server pool for the “books-srm” Virtual Service was automatically created. Click the “Edit” icon:
- Change the Load Balance policy from “Least Connections” to “Round Robin”:
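If you manage the Controller programmatically, the same tweak can be made through the Avi REST API, for instance with the avisdk Python package. The controller address, credentials, API version, and pool name below are placeholders; Basic Setup generates the pool name from the Virtual Service, so check “Applications” > “Pools” for the actual value.

from avi.sdk.avi_api import ApiSession

# Switch the auto-created server pool from Least Connections to Round Robin.
# Controller address, credentials, API version, and pool name are placeholders.
api = ApiSession.get_session("<controller-fqdn>", "admin", "<password>",
                             tenant="admin", api_version="20.1.1")
pool = api.get_object_by_name("pool", "books-srm-pool")  # assumed auto-generated name
pool["lb_algorithm"] = "LB_ALGORITHM_ROUND_ROBIN"
resp = api.put("pool/%s" % pool["uuid"], data=pool)
print(resp.status_code)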
Testing the Solution
We need to test the solution to ensure that everything is working as expected. We assigned the IP address 172.30.111.21 as the VIP for the Virtual Service.
- If we browse to the VIP we should see the books database along with the server chain:
- Refreshing the screen a few times should show that the service has successfully load balanced to the second web server:
Success!
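If you’d like to repeat the test from the command line, a small Python sketch like this one can fire a few requests at the VIP; with Round Robin configured, the responses should alternate between WEB-01 and WEB-02. The response parsing is app-specific.

import requests

# Send a handful of requests to the VIP and print which web server answered.
VIP = "http://172.30.111.21/"
for i in range(4):
    resp = requests.get(VIP, timeout=5)
    first_line = resp.text.splitlines()[0] if resp.text else ""
    print(i, resp.status_code, first_line)  # for this demo app the body identifies the server chain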
I hope this was useful. In a follow-up article, I’ll set up an Avi Virtual Service that will provide intelligent failover between sites.