If you have been following along in this series, first of all, thank you! Here is a summary of our work so far:
- Download Photon and Build a Template
- Build the Database Server
- Build the Application Server
- Build the Web Server
At this point, you should have a basic three-tier application:
I tried to use simple components to make the application usable in either a home lab or a nested environment, so it should perform exceedingly well on real hardware.
Virtual Machine Profile
The component Photon OS machines boot in a few seconds, even in our nested environment, and their profiles are fairly conservative:
- 1 vCPU
- 2 GB RAM
- 15.625 GB disk
Once configured as indicated in this series, these VMs will export as OVAs that are around 300 MB each, making them reasonably portable.
The storage consumed after thin-provisioned deployment is less than 650 MB for each virtual machine. At runtime, each consumes an additional 2 GB for the swapfile. During boot, in my environment, each VM’s CPU usage is a little over 600 MHz and the active RAM reports 125 MB, but those normalize quickly to nearly 0 MHz and 20 MB active RAM (+23 MB virtualization overhead). You may be able to reduce their RAM allocations, but I have not tried this.
So, what can I do with this thing?
It is nice to have tools, but without a reason to use them, they're not that much fun. We use tools like this in our labs to demonstrate various features of our products and to help our users understand how they work. Here are a few ideas, just to get you thinking:
vMotion, Storage vMotion, SRM Protection and Recovery
The virtual machines that you created can be used as a set, but the base Photon OS template also makes a great single VM for demonstrating vMotion or Site Recovery Manager (SRM) recovery in a lab environment. They are small, but they have some “big VM” characteristics:
- VMware Tools reports the expected guest information up to vCenter
- They respond properly to Guest OS restart and power off actions
- Photon OS handles Guest Customization properly, so you can have the IP address changed during template deployment and SRM recovery.
- You can ping and SSH into them
- You can use them to generate load on your hosts and demonstrate Distributed Resource Scheduler (DRS) functionality (see the sketch after this list)
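If you want an easy way to generate that CPU load, here is a sketch of one approach; it is not part of the original builds, but it uses nothing beyond what ships with Photon OS. Start one busy-loop per vCPU, let DRS react, then kill the workers:

# for i in $(seq 1 $(nproc)); do yes > /dev/null & done

When you are finished, stop the loops (or simply kill the background jobs):

# pkill yes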
Firewalling/Micro-segmentation
We use a previous version of this application in several of our NSX labs that debuted at VMworld 2016. For a good micro-segmentation use case, you can look at HOL-1703-USE-2 – VMware NSX: Distributed Firewall with Micro-Segmentation. The manual is available for download here, or you can take the lab here.
For a more complicated use case using a similar application to demonstrate SRM and NSX integration, look at HOL-1725-USE-2 – VMware NSX Multi-Site DR with SRM. For that lab, the manual is available here and the lab is here.
Each of the tiers must communicate with the others using specific ports:
- Client to Web = 443/tcp
- Web to App = 8443/tcp
- App to DB = 80/tcp
You can use this application to test firewall rules or other network restrictions that you are planning to implement. If a restriction breaks the application, you can determine where and why, then try again. If you want to change the port numbers to match your needs, you can do that as well. Keeping the application simple means that modifications should also be simple.
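As a sketch of how you might verify each hop while testing rules, curl can exercise the paths; run each command from the machine one hop upstream (client, web server, and app server, respectively). The web-01a and app-01a names come from this series, but db-01a is my assumption for the database server's name, so substitute your own. The -k switch skips certificate validation, which is convenient in a lab:

# curl -sk -o /dev/null -w '%{http_code}\n' https://web-01a.corp.local/cgi-bin/app.py
# curl -sk -o /dev/null -w '%{http_code}\n' https://app-01a.corp.local:8443/
# curl -s -o /dev/null -w '%{http_code}\n' http://db-01a.corp.local/

A 200 from each hop confirms the path is open; a timeout or connection refused after you apply a rule tells you exactly which hop that rule affected.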
Load Balancing (Distribution)
The basic idea here is that you can create clones of the web-01a machine as many times as you like and pool them behind a load balancer. In your lab, if you have it, you may want to use NSX as a load balancer. If you want to do that, I suggest checking out Module 3 – Edge Services Gateway in the HOL-1703-SDC-1 – VMware NSX: Introduction and Feature Tour lab, which covers how to set that up. The manual is here and the lab is here.
If you want to use another vendor’s solution, feel free to do that as well. This application is REALLY simple. Some free load balancing solutions can be implemented using nginx or haproxy. Fortunately, we already know about nginx from the build of our web servers, so I will cover that later in this post. First, though, I want to cover a DNS round robin configuration since understanding that makes the nginx load balancing simpler for the lab.
Example 1 – Load Distribution via DNS Round Robin
If you don’t have the resources for another VM, you can implement simple load distribution via DNS round robin as long as you understand a few limitations:
- You must have access to change DNS for your lab environment.
- Using only DNS, you get load distribution but not really balancing; there is no awareness of the load on any particular node. Rather, you simply get the next one in the list.
- There is no awareness of the availability of any node in the pool. DNS simply provides the next address, whether it is responding or not.
- Connecting from a single client may not show balancing since optimizations in modern web browsers may keep existing sockets open.
In this first example, I have 3 web servers (web-01a, web-02a, web-03a) with IP addresses 192.168.120.30, 31, and 32. My SSL certificate contains the name webapp.corp.local and it is loaded onto each of the web servers. The picture looks something like this:
Create the VMs
To create web-02a and web-03a, I simply clone my web-01a VM, then reset the hostname and IP address of each clone to the new values:
- web-02a – 192.168.120.31
- web-03a – 192.168.120.32
Alternatively, I can make a template from the web-01a VM and deploy the copies using Guest Customization to reconfigure them. Just make sure to populate the /etc/hosts file on the customized machines since the process wipes out and rebuilds that file.
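For the manual approach, the per-clone changes boil down to a few commands. A sketch for web-02a, assuming your static IP lives in a systemd-networkd file such as /etc/systemd/network/10-static-en.network; use whatever filename you created when building web-01a:

# hostnamectl set-hostname web-02a
# sed -i 's/192.168.120.30/192.168.120.31/' /etc/systemd/network/10-static-en.network
# sed -i 's/192.168.120.30/192.168.120.31/; s/web-01a/web-02a/' /etc/hosts
# systemctl restart systemd-networkd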
Configure DNS
The required DNS changes are not complicated. You basically assign the name webapp.corp.local to the IP addresses of your web servers and set the time-to-live (TTL) to a low, non-zero value.
Using PowerShell against my lab DNS server called controlcenter.corp.local that manages the corp.local zone, I add DNS records with a 1 second TTL, associating all of the web server IP addresses to the name webapp.corp.local:
$ttl = New-TimeSpan -Seconds 1
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webapp' -IPv4Address '192.168.120.30' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webapp' -IPv4Address '192.168.120.31' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webapp' -IPv4Address '192.168.120.32' -TimeToLive $ttl
If you use a BIND DNS server, just create multiple A records pointing to the same name. BIND 4.9 or higher will automatically rotate through the records. In my case, I have a Windows 2012 DNS server, and it cycles through the addresses when the webapp.corp.local name is requested.
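For reference, the BIND equivalent is simply a set of A records sharing the webapp name, each with a 1-second TTL. A sketch; your corp.local zone file layout may differ:

webapp    1    IN    A    192.168.120.30
webapp    1    IN    A    192.168.120.31
webapp    1    IN    A    192.168.120.32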
Testing the Rotation
Here is what this looks like from an ESXi host in my lab. A quick ping test shows the rotation occurring as intended:
[root@esx-03a:~] ping -c 1 webapp.corp.local
PING webapp.corp.local (192.168.120.30): 56 data bytes
64 bytes from 192.168.120.30: icmp_seq=0 ttl=64 time=1.105 ms

--- webapp.corp.local ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.105/1.105/1.105 ms

[root@esx-03a:~] ping -c 1 webapp.corp.local
PING webapp.corp.local (192.168.120.32): 56 data bytes
64 bytes from 192.168.120.32: icmp_seq=0 ttl=64 time=1.142 ms

--- webapp.corp.local ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.142/1.142/1.142 ms

[root@esx-03a:~] ping -c 1 webapp.corp.local
PING webapp.corp.local (192.168.120.31): 56 data bytes
64 bytes from 192.168.120.31: icmp_seq=0 ttl=64 time=1.083 ms

--- webapp.corp.local ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.083/1.083/1.083 ms
Accessing the Application
Use the https://webapp.corp.local/cgi-bin/app.py URL from your web browser to access the application. Within the three-tier application, the script on the app server displays which web server made the call to the application.
The script shows the IP address of the calling web server unless it knows a name that you would like displayed instead. You provide that mapping of IP addresses to display names at the top of the app.py script on the app server:
webservers = {
    '192.168.120.30':'web-01a',
    '192.168.120.31':'web-02a',
    '192.168.120.32':'web-03a'
}
Simply follow the syntax and replace or add the values which are appropriate for your environment.
A Challenge Showing Load Distribution from a Single Host
Hmm… while the ping test shows that DNS is doing what we want, clicking the Refresh button in your web browser may not switch you to a different web server as you expect.
A refresh does not necessarily trigger a new connection and DNS lookup, even if the TTL has expired. Modern web browsers implement optimizations that will keep an existing connection open because odds are good that you will want to request more data from the same site. If a connection is already open, the browser will continue to use that, even if the DNS TTL has expired. This means that you will not connect to a different web server.
You can wait for the idle sockets to time out or force the sockets closed and clear the web browser’s internal DNS cache before refreshing the web page, but that is not really convenient to do every time you want to demonstrate the distribution functionality. If you want to be able to click Refresh and immediately see that you have connected to a different web server in the pool, you can use NSX or a third-party load balancer. If you want to use the tools that we have currently available, the next example works around this issue.
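That said, there is a lightweight way to see the rotation from a single host without a load balancer: command-line clients such as curl open a fresh connection, with a fresh DNS lookup, on every invocation. A quick sketch that greps for the Accessed via: line the application prints:

# for i in 1 2 3; do curl -sk https://webapp.corp.local/cgi-bin/app.py | grep 'Accessed via'; done

Each pass through the loop should report a different web server, even while a browser pointed at the same URL appears stuck on one.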
Example 2 – Implementing a (Really) Basic Load Balancer
Making a small change to the nginx configuration on one of the web server machines and adjusting DNS can provide a simple demonstration load balancer for your lab. This requires a slight deviation from our current architecture to inject the load balancer VM in front of the web server pool:
Note that there are better, more feature-rich ways to do this, but we are going for quick and simple in the lab.
Create the Load Balancer
Create the load balancer VM. You can deploy a new one from a Photon OS base template and go through the configuration from there, but conveniently, the difference between the load balancer configuration and that of our web servers is just one line!
So, make a copy of the web-01a VM and update its address and hostname:
- lb-01a – 192.168.120.29
Change the nginx Configuration
On the lb-01a VM, edit the /etc/nginx/nginx.conf file:
# vi +130 /etc/nginx/nginx.conf
Change line 130 from
proxy_pass https://app-01a.corp.local:8443/;
to
proxy_pass https://webpool.corp.local/;
This will allow us to leverage DNS round-robin to rotate through the list of web servers and distribute the load. Nginx has advanced configurations to handle load balancing, but this will get the job done for a lab or demonstration. Terminating SSL on the load balancer while using plain HTTP on the web servers allows a lot more flexibility, but the configuration changes are beyond the scope of what I want to do here.
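For the curious, nginx can do this natively with an upstream block in the http context of nginx.conf; line 130 would then point at the pool name instead of a DNS round-robin record. This sketch is a pointer for later experimentation rather than what we are configuring here, and it opens the door to options such as least_conn and per-server weights:

upstream webpool {
    server 192.168.120.30:443;
    server 192.168.120.31:443;
    server 192.168.120.32:443;
}

proxy_pass https://webpool/;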
Restart nginx
# systemctl restart nginx
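If you would like to catch configuration typos before bouncing the service, nginx can check the file for you:

# nginx -t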
Adjust DNS
Finally, adjust DNS to move the webapp.corp.local name to point at the load balancer and put the web servers into webpool.corp.local instead.
If you are using Windows DNS, you can use PowerShell. For BIND, edit and create the records as needed.
1. Remove the existing webapp.corp.local pool by deleting all of the A records that point to the individual web servers:
$rec = Get-DnsServerResourceRecord -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webapp' -RRType A
if( $rec ) {
    $rec | % { Remove-DnsServerResourceRecord -InputObject $_ -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Force }
}
2. Create a new webapp.corp.local A record that points to the lb-01a machine:
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -name 'webapp' -IPv4Address '192.168.120.29'
3. Create the new webpool.corp.local that contains the individual web servers:
$ttl = New-TimeSpan -Seconds 1
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webpool' -IPv4Address '192.168.120.30' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webpool' -IPv4Address '192.168.120.31' -TimeToLive $ttl
Add-DnsServerResourceRecordA -ComputerName 'controlcenter.corp.local' -ZoneName 'corp.local' -Name 'webpool' -IPv4Address '192.168.120.32' -TimeToLive $ttl
Access the Application
Now, point your web browser to the https://webapp.corp.local/cgi-bin/app.py URL. Each time you click Refresh in your web browser or enter a new search string in the Name Filter box and click the Apply button, the data refreshes and the Accessed via: line should update with a different web server from the pool:
Because the web browser’s connection is to the load balancer VM, which controls which web server receives the request, we eliminate the issue experienced when using only DNS round robin. This very basic implementation does not handle failed servers in the pool and is not something that would be used in production, but, hey, this is a lab!
It is possible to extend this idea to put a load balancer in front of a pool of application servers as well: replace line 130 in each web server’s /etc/nginx/nginx.conf file with the URL of an app server pool instead of pointing them directly at the app-01a VM.
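As a sketch, using a hypothetical apppool.corp.local name backed by round-robin A records for your app server clones (the app tier listens on 8443), that line would become:

proxy_pass https://apppool.corp.local:8443/;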
That’s a Wrap!
That concludes the series on building a minimal three-tier application. I am hopeful that you have found this interesting and can use these tools in your own environment.
Thank you for reading!