Cloud load balancing ensures maximum throughput with minimum response time, resulting in high-performing applications that can handle sudden traffic spikes. One way to achieve this is with Amazon Application Load Balancers. Here we explain the steps to integrate AWS Application Load Balancing with VMware Cloud on AWS.
There are various ways to load balance servers in a modern, three-tier (web/app/DB) application architecture. For a highly optimized, secure, and well-distributed three-tier application architecture hosted on VMware Cloud on AWS, one approach is to use Amazon Application Load Balancers along with various highly available and resilient native AWS services. These include Amazon Route 53, Amazon CloudFront, AWS WAF/Shield, Amazon S3, Amazon RDS, Amazon CloudWatch, and AWS CloudTrail, among others.
In this application architecture, two separate application load balancers are configured in the Customer VPC – one for web server traffic and one for internal application server traffic. You can configure target groups for your application load balancers with the pools of web servers and application servers that are part of the VMware Cloud on AWS SDDC.
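As a sketch of what such a target group might look like, the parameters below mirror those accepted by the boto3 `elbv2` client. The names, VPC ID, health-check path, and IP addresses are placeholders; because the web and application servers live in the SDDC rather than as EC2 instances, they are registered as IP targets:

```python
# Hypothetical parameters for an ALB target group fronting web servers
# hosted in the VMware Cloud on AWS SDDC. SDDC VMs are not EC2
# instances, so they are registered by IP address (TargetType="ip").
web_target_group = {
    "Name": "web-tier-tg",                 # placeholder name
    "Protocol": "HTTP",
    "Port": 80,
    "VpcId": "vpc-0123456789abcdef0",      # the customer (connected) VPC
    "TargetType": "ip",                    # required for SDDC-hosted VMs
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/healthz",         # assumed health endpoint
}

# IPs of the SDDC web-server pool (placeholders), registered as targets:
web_targets = [{"Id": ip, "Port": 80} for ip in ("10.10.1.11", "10.10.1.12")]
```

With boto3 these dictionaries could be passed to `elbv2.create_target_group(**web_target_group)` and `elbv2.register_targets(...)`; an equivalent app-tier group would point at the application-server pool.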
An Amazon Application Load Balancer (ALB) functions at the application layer – the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order and selects a target from the target group for the rule action.
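To illustrate that evaluation order, the toy model below represents listener rules as (priority, predicate, target group) tuples, checks them in ascending priority, and falls back to a default action when nothing matches. It is a simplified stand-in for ALB behavior, not the AWS API; all names are illustrative:

```python
# Toy model of ALB listener-rule evaluation: rules are checked in
# ascending priority order; the first matching rule selects the target
# group, otherwise the listener's default action applies.
def route(path: str, rules, default_tg: str) -> str:
    for priority, predicate, target_group in sorted(rules, key=lambda r: r[0]):
        if predicate(path):
            return target_group
    return default_tg

rules = [
    (10, lambda p: p.startswith("/api/"), "app-tier-tg"),
    (20, lambda p: p.startswith("/static/"), "s3-static-tg"),
]

print(route("/api/orders", rules, "web-tier-tg"))   # app-tier-tg
print(route("/index.html", rules, "web-tier-tg"))   # web-tier-tg
```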
You can add and remove targets from your load balancer as your needs change without disrupting the overall flow of requests to your application. As traffic to your application changes over time, AWS Application Load Balancing scales your load balancer – with the vast majority of workloads scaled automatically. You can also configure health checks for ALBs so the load balancer only sends requests to healthy targets. The database tier in this architecture leverages Multi-AZ Amazon RDS DB instances for high availability and resiliency.
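To make the health-check behavior concrete, here is a small, hypothetical tracker mirroring the ALB notion of healthy/unhealthy thresholds: a target's status flips only after a configured number of consecutive failed (or passed) checks, so a single blip does not pull it out of rotation. The threshold values are illustrative:

```python
# Simplified model of ALB target health checks: a target becomes
# "unhealthy" only after `unhealthy_threshold` consecutive failures,
# and "healthy" again after `healthy_threshold` consecutive successes.
class TargetHealth:
    def __init__(self, healthy_threshold: int = 3, unhealthy_threshold: int = 2):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.status = "healthy"
        self._streak = 0  # consecutive results contradicting current status

    def record(self, check_passed: bool) -> str:
        if check_passed == (self.status == "healthy"):
            self._streak = 0          # result agrees with current status
        else:
            self._streak += 1
            limit = (self.healthy_threshold if self.status == "unhealthy"
                     else self.unhealthy_threshold)
            if self._streak >= limit:
                self.status = "unhealthy" if self.status == "healthy" else "healthy"
                self._streak = 0
        return self.status

t = TargetHealth()
t.record(False)                 # 1st failure: still healthy
print(t.record(False))          # 2nd consecutive failure: unhealthy
```

While a target is marked unhealthy, the load balancer would simply skip it during target selection and resume sending it requests once it passes enough consecutive checks.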
The overall application architecture in this reference architecture leverages multiple AWS services that are highly optimized, secured and well-distributed. The application traffic flow between client and servers is as follows:
- Web users send their requests.
- Amazon Route 53, a highly available DNS service, handles the users’ requests and sends them to CloudFront.
- Amazon CloudFront, a CDN with edge locations around the globe and secured by AWS WAF and AWS Shield, serves the web requests appropriately. It caches static and streaming content and delivers dynamic content with low latency from locations close to the web users. If a request is for new dynamic web content, or if the existing CloudFront cache entry has expired, CloudFront sends the request to the origin server hosted on VMC on AWS.
- Amazon S3 – as an origin for static content, such as video/audio media files and manuals – serves the content to Amazon CloudFront. It can also store the log files generated by CloudFront and by the web and application servers hosted on VMC on AWS.
- The Application Load Balancers load-balance the incoming web requests and send them to the web and application servers hosted on VMC on AWS through cross-VPC Elastic Network Interfaces (ENIs)[^1]. The traffic flow to and from the ALBs is secured with security group rules.
- Compute Gateway (a T1 router), secured with compute gateway firewall rules, routes the web/app traffic to and from web and application servers appropriately.
- Web and application servers hosted on VMC on AWS serve the traffic and handle internal communications between the web/application server and RDS Database servers.
- The Amazon RDS for SQL Server database, a highly available Multi-AZ DB instance, serves database requests from the web/app servers residing on VMC on AWS.
- The VMware Cloud admin manages/administers VMC on AWS resources over the Internet, or from on-premises networks using VPN or Direct Connect connections.
- Management Gateway (a T1 router), secured with management gateway firewall rules, routes all the administrative traffic to and from management appliances/VMs.
[^1]: Generally, services running in the SDDC’s connected VPC are only accessible from within the SDDC, because the VPC routing table is modified to send traffic for unknown destinations (i.e. the “default route”) back through the ENIs to the SDDC. In this example, if the SDDC web servers saw the client connections’ original IP addresses, their replies would take the default route across the ENIs, through the Compute Gateway and SDDC router, resulting in asymmetric routing; stateful firewall processing would then drop the traffic, as the inbound connection would not be present in its connection tables. However, the web servers only see connections from the Application Load Balancers in the connected VPC, and the load balancers in turn only see connections from the CloudFront IP addresses, to which they can route directly across AWS infrastructure.
Looking to better understand VMware’s unique approach to multi-cloud architecture? Get the definitive guide here.
Download the VMware Cloud on AWS reference architecture here.