This post was originally published on my personal blog at HumairAhmed.com. You can also find many of my prior posts on multi-site and Cross-vCenter NSX here on the VMware Network Virtualization blog. This post expands on my prior post, Multi-site Active-Active Solutions with NSX-V and F5 BIG-IP DNS. Specifically, it demonstrates deploying applications in an Active-Active model across data centers where ingress/egress always occurs at the data center local to the client, or, in other words, localized ingress/egress.
Again, I want to thank my friend Kent Munson from F5 Networks who helped me get this up and running in my NSX Multi-site lab. Kent also presented with me in the US VMworld 2017 Session: Multisite Networking and Security with Cross-VC NSX: Part 2 [NET1191BU]; make sure to check it out.
I’ll keep this post short, because I explain in detail as I step through a demo in the video embedded further below. I also showed this demo in the US VMworld 2017 Session: Multisite Networking and Security with Cross-VC NSX: Part 2 [NET1191BU] and explained how the solution can be used for DR solutions as well in the Europe VMworld 2017 Session: Disaster Recovery Solutions with NSX [NET1188BE]. You can watch the recordings of all US and Europe VMworld 2017 sessions I presented in at the links below.
US VMworld 2017:
Europe VMworld 2017:
The goal of the demo shown in the video embedded below was to show an application running on NSX and spanning two sites, where ingress/egress for the application is handled locally at whichever site the client connects from. Cross-VC NSX provides the multi-site platform the application runs on, delivering consistent networking, consistent security, and inherent automation across sites.
F5 BIG-IP DNS is used to provide GSLB functionality, which handles local ingress/egress for the application. Additionally, I decided to leverage Palo Alto Networks security to demonstrate how advanced 3rd party security can be leveraged in such a multi-site solution. The latest release of PAN-OS (8.0) allows for managing multiple NSX Managers from one Panorama, and Panorama can be deployed in Active/Standby mode across sites for high availability.
The lab setup is shown in the diagram below. The two sites in my setup are Palo Alto, CA and San Jose, CA, but these sites could also be much further apart or even within different countries.
In the setup I have a web application consisting of four Web VMs (WebF5, WebF5 2, WebF5 3, WebF5 4) that span the two sites. As shown below, initially only the WebF5 VM is at Site 1 Palo Alto, and the other three VMs are at Site 2 San Jose. All Web VMs are on the same subnet, 220.127.116.11/24, and the same Universal Logical Switch, Universal Web – F5.
Logging into the active Site 1 F5 LTM, where SNAT is being done and the Virtual IP is 10.100.9.14, it can be confirmed that only WebF5 is active in the application pool at Site 1.
Logging into the active Site 2 F5 LTM, where SNAT is being done and the Virtual IP is 10.200.9.14, it can be confirmed that WebF5 2, WebF5 3, and WebF5 4 are active in the application pool at Site 2.
In the above scenario, the LTM monitors the web servers in the application pool via a configured HTTP GET request. As shown below, using NSX DFW rules, I ensure the Site 2 San Jose LTMs cannot communicate with the Site 1 Palo Alto Web VMs. I identify the Web VMs at each site using NSX Universal Security Tags.
Vice versa, using NSX DFW rules, I ensure the Site 1 Palo Alto LTMs cannot communicate with the Site 2 San Jose Web VMs.
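The intent of these DFW rules can be sketched as follows. This is a minimal illustration only, assuming hypothetical tag names (the actual Universal Security Tag names are visible in the screenshots, not reproduced here): an LTM may only health-check web VMs carrying its own site's tag.

```python
# Hypothetical sketch of the DFW rule intent: LTMs at one site may only
# reach web VMs carrying that site's Universal Security Tag.
# Tag names below are illustrative, not taken from the lab.
SITE1_TAG = "Web-Site1"   # assumed tag on Palo Alto web VMs
SITE2_TAG = "Web-Site2"   # assumed tag on San Jose web VMs

def monitor_allowed(ltm_site: str, vm_tags: set) -> bool:
    """Return True if an LTM at ltm_site may reach a web VM with vm_tags."""
    local_tag = SITE1_TAG if ltm_site == "PaloAlto" else SITE2_TAG
    return local_tag in vm_tags

# Site 1 LTMs can probe Site 1 VMs but not Site 2 VMs, and vice versa.
assert monitor_allowed("PaloAlto", {SITE1_TAG})
assert not monitor_allowed("PaloAlto", {SITE2_TAG})
assert monitor_allowed("SanJose", {SITE2_TAG})
assert not monitor_allowed("SanJose", {SITE1_TAG})
```

The effect is that each site's LTM only marks its local pool members up, which is what drives the per-site pool status shown above.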
Logging into the Site 1 Palo Alto F5 BIG-IP DNS, it can be seen that an A record has been set up for demoweb.nsxlab18.local.
Looking at the Pool List members, I can see all four LTMs; there are two LTMs at each site. One LTM is currently active at each site because an active Web VM currently exists at each site; the other two are the standby LTMs at their respective sites. It can also be seen that the load balancing method deployed is Topology, meaning the VIP of the LTM local to the client request is returned.
In the GSLB configuration, two regions have also been created and associated with their respective data centers. There are well-known allocated IP address ranges/subnets for respective regions/countries that can be used, but customization as shown below is also possible. The custom PA_REGION region is mapped to the Palo Alto data center, and the custom SJ_REGION region is mapped to the San Jose data center.
Looking at the Palo Alto region, PA_REGION, any client within the listed subnet/IP range falls under this region.
The San Jose region, SJ_REGION, is also defined in the same manner.
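The topology-based decision the BIG-IP DNS makes can be sketched in a few lines. The region CIDRs below are placeholders (the lab's actual client ranges are shown in the screenshots, not reproduced here); the VIPs match the LTM virtual servers described above.

```python
import ipaddress

# Assumed client ranges per region; substitute the actual lab subnets.
REGIONS = {
    "PA_REGION": ipaddress.ip_network("10.100.0.0/16"),
    "SJ_REGION": ipaddress.ip_network("10.200.0.0/16"),
}
# VIPs of the active LTM at each site, as described above.
VIPS = {"PA_REGION": "10.100.9.14", "SJ_REGION": "10.200.9.14"}

def resolve(client_ip: str) -> str:
    """Return the VIP local to the client's region, mimicking Topology LB."""
    addr = ipaddress.ip_address(client_ip)
    for region, net in REGIONS.items():
        if addr in net:
            return VIPS[region]
    raise LookupError("client is not in a defined region")

print(resolve("10.100.50.10"))  # Palo Alto VIP: 10.100.9.14
print(resolve("10.200.50.10"))  # San Jose VIP: 10.200.9.14
```

This is the behavior the nslookup tests below confirm: each client is answered with the VIP of its own site.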
Now, when an nslookup is done from a client in the Site 1 region, the correct VIP for the Site 1 F5 LTMs is returned.
Using the web client, it can also be confirmed that the correct server at Site 1 responds. Since there is only one web server at Site 1, the response always comes from Web Server 1.
Doing an nslookup from a client in the Site 2 region returns the correct VIP for the Site 2 F5 LTMs.
Similarly, using the web client, it can be confirmed that the correct server at Site 2 responds. Since there are three web servers at Site 2, the response can come from any one of the three. The F5 LTMs have been configured to do Round Robin load balancing locally, so as new tabs are opened and new requests are made, the next web server responds as shown below.
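The local Round Robin behavior at Site 2 amounts to cycling through the three active pool members in order, which can be sketched as:

```python
from itertools import cycle

# The three active Site 2 pool members (VM names from the lab above).
pool = cycle(["WebF5 2", "WebF5 3", "WebF5 4"])

# Four successive requests hit the members in order, wrapping around.
responses = [next(pool) for _ in range(4)]
print(responses)  # → ['WebF5 2', 'WebF5 3', 'WebF5 4', 'WebF5 2']
```

This matches the screenshots below, where each new browser request lands on the next web server in the pool.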
Site 2 San Jose Web Client Request 1:
Site 2 San Jose Web Client Request 2:
Site 2 San Jose Web Client Request 3:
Next, in the demo, I use Palo Alto Networks Panorama to show how the same Panorama instance can talk to multiple NSX Managers and push down security policies across both sites. For management HA purposes, Panorama can be deployed in Active/Standby mode across the sites as shown below. Additionally, I demonstrate how Panorama can auto-generate redirection rules for both NSX Managers based on the configured Panorama security policies.
In the demo, I use Panorama to push down a security policy that blocks all communication to the Web servers at Site 1; thus, when a client at Site 1 now makes a DNS request, the VIP at Site 2 is returned since the Web VM at Site 1 is no longer reachable. I also demonstrate vMotioning VMs across sites and how F5 BIG-IP DNS always returns the correct VIP based on the status of the Web VMs and application pools.
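The failover behavior just described can be sketched as follows: the DNS prefers the client's local VIP, but falls back to the other site's VIP once the local pool has no healthy members. Health states and site names here are illustrative of the lab, not an F5 API.

```python
# VIPs of the active LTM at each site, as described earlier in the post.
VIPS = {"Site1": "10.100.9.14", "Site2": "10.200.9.14"}

def answer(client_site: str, healthy: dict) -> str:
    """Prefer the client's local VIP; fall back to any site with healthy members."""
    if healthy.get(client_site, 0) > 0:
        return VIPS[client_site]
    for site, count in healthy.items():
        if count > 0:
            return VIPS[site]
    raise LookupError("no healthy pool members at any site")

# Before the block policy: a Site 1 client gets the local VIP.
print(answer("Site1", {"Site1": 1, "Site2": 3}))  # → 10.100.9.14
# After Panorama blocks the Site 1 web server: the Site 2 VIP is returned.
print(answer("Site1", {"Site1": 0, "Site2": 3}))  # → 10.200.9.14
```

The same logic explains the vMotion portion of the demo: as VMs move and pool member health changes, the returned VIP follows the sites that actually have reachable web servers.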
In effect, this demo shows the powerful capabilities of NSX as a platform for multi-site solutions.
Learn more by watching the complete demo in the video below: