A few months ago, Andrew Morgan in the EUC Business Unit wrote a great blog post on using Amazon Route 53 to provide Global Load Balancing Services for VMware Horizon. The example in Andrew’s post showed a failover situation between an on-premises VMware Horizon environment and a Horizon environment running in VMware Cloud on AWS.

I recently joined the Cloud Provider Architecture Team, and one of the first questions I was asked was “How can we build out a Horizon environment on VMware Cloud on AWS for our global team?” Our goal was to provide the team with secure access to our labs from anywhere on the planet for use in both personal knowledge growth and showcasing VMware technologies like HCX and NSX-T.

Lab Architecture

To help showcase VMware Cloud on AWS to our MSP and VMware Cloud Provider Program partners, we are in the process of building out four VMware Cloud on AWS environments.  These environments are located in Oregon, London, Tokyo, and Sydney.  We also have an on-premises lab hosted in Chicago.

Each VMC tenant consists of a four-node cluster, and we are utilizing NSX-T for our networking.  All of the labs are interconnected using the Routed BGP VPN option, providing us with a network mesh topology for intersite connections.

Each VMC tenant will host a local Horizon environment consisting of two Connection Servers, two Unified Access Gateways, and VMware User Environment Manager.  With this, we can showcase multiple use cases, including full-clone persistent desktops, instant-clone non-persistent desktops, and published apps and shared hosted desktops from Remote Desktop Session Host servers.  We have also deployed a global Active Directory and File Services environment to support these services.


When we designed this lab, we wanted to provide a single URL for access so that the engineers and architects on the team could reach their desktops from anywhere on the globe without having to know the URL for the regional access point.  We also wanted users directed to the closest access point so they would get the best experience when showing this environment to customers.

Addressing these challenges would require some sort of global load balancing service.  Since we were already building out our Horizon environments on VMware Cloud on AWS, we made a design decision to use a native Amazon service, Route 53, to provide this capability.

Amazon Route 53 is a highly scalable, cloud-hosted DNS service.  Not only can it act as the nameserver for your domain names, it also provides advanced monitoring, geolocation, and load balancing features.

So how are we using Route 53?  As we build out the Horizon environment, each region gets a local URL, and a health check is set up against that URL to detect uptime.  We also set up a global URL that our users use to access the environment.  This global record has multiple entries, each pointing to the DNS record for one of our deployed sites.  The entries also have health checks attached and are configured to route users to a specific site based on their latency to the AWS region hosting that SDDC environment.
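To make the setup concrete, here is a minimal sketch of the record and health check configuration just described.  It builds plain Python dicts in the shape the Route 53 API expects (the request bodies you would hand to boto3's create_health_check and change_resource_record_sets); the domain names, regions-to-site mapping, and health check IDs are illustrative placeholders, not our actual values.

```python
# Sketch of the Route 53 setup described above. The dicts are shaped like
# Route 53 API request bodies; the domain names and health check IDs
# below are placeholders, not the real lab values.

# One entry per deployed site: AWS region, regional access URL, health check.
SITES = [
    {"region": "us-west-2",      "record": "horizon-oregon.example.com", "health_check": "hc-oregon"},
    {"region": "eu-west-2",      "record": "horizon-london.example.com", "health_check": "hc-london"},
    {"region": "ap-northeast-1", "record": "horizon-tokyo.example.com",  "health_check": "hc-tokyo"},
    {"region": "ap-southeast-2", "record": "horizon-sydney.example.com", "health_check": "hc-sydney"},
]

def health_check_config(fqdn):
    """HTTPS health check against a regional access URL (the UAG endpoint)."""
    return {
        "Type": "HTTPS",
        "FullyQualifiedDomainName": fqdn,
        "Port": 443,
        "ResourcePath": "/",
        "RequestInterval": 30,   # seconds between checks
        "FailureThreshold": 3,   # failed checks before the endpoint is marked unhealthy
    }

def latency_change_batch(global_name, sites):
    """One latency-routed CNAME per site, all sharing the same global name.
    Route 53 answers with the healthy entry whose region has the lowest
    latency from the requesting user's resolver."""
    changes = []
    for site in sites:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": global_name,
                "Type": "CNAME",
                "SetIdentifier": site["region"],   # must be unique per entry
                "Region": site["region"],          # enables latency-based routing
                "TTL": 60,
                "HealthCheckId": site["health_check"],
                "ResourceRecords": [{"Value": site["record"]}],
            },
        })
    return {"Comment": "Global Horizon access point", "Changes": changes}

batch = latency_change_batch("desktops.example.com", SITES)
```

With boto3, the batch would be applied via route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch); the short TTL keeps failover fast once a health check marks a site unhealthy.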

If one site goes down, or is taken down for maintenance, then Route 53 will detect the failure and resolve the global address to the next best site based on that user’s latency to the AWS regions we have configured. And since our environment is using Cloud Pod Architecture to connect all of our Horizon pods, users can still access their desktop even when it’s not in the region that is closest to them.
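The failover behavior can be pictured with a toy model of the routing policy: among the sites whose health checks pass, answer with the one closest (by measured latency) to the user.  This is only an illustration of the policy, not Route 53's actual implementation, and the latency numbers are made up.

```python
def pick_site(sites, healthy, latency_ms):
    """Toy model of latency-based routing with health checks: return the
    healthy site with the lowest latency to the requesting user."""
    candidates = [s for s in sites if healthy[s]]
    if not candidates:
        raise RuntimeError("no healthy sites")
    return min(candidates, key=lambda s: latency_ms[s])

sites = ["Oregon", "London", "Tokyo", "Sydney"]
# Made-up latencies for a user in the north central United States.
latency_ms = {"Oregon": 40, "London": 110, "Tokyo": 130, "Sydney": 180}

healthy = {s: True for s in sites}
print(pick_site(sites, healthy, latency_ms))   # all sites up: routed to Oregon

healthy["Oregon"] = False                      # Oregon down for maintenance
print(pick_site(sites, healthy, latency_ms))   # fails over to next best: London
```

Cloud Pod Architecture then takes over inside Horizon: whichever site the user lands on, their entitlements can still reach a desktop hosted in another pod.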

So what does this mean?  Basically, we are using Amazon Route 53 to provide a highly available global EUC environment that self-heals.  If a site goes down, Route 53 detects the failure and routes to the next best option.

The video below shows an example of this.  In our lab, we shut down the Unified Access Gateways in our London SDDC. After the health check detects the failure, our user is routed through the next available SDDC, which is Sydney in this case.  And since our failure is just at the UAG layer, they are still able to access their desktop.  My user, who is in the north central United States, authenticates through Sydney and then launches their London desktop.

Benefits to VMware Cloud Provider Partners and Managed Service Providers

Delivering end-user computing services across multiple regions often requires expensive hardware to provide global load balancing.  The combination of VMware Cloud on AWS with native Amazon services like Route 53 gives partners the ability to offer highly resilient managed services with consumption-based billing to support customer workloads.  It also opens up new managed services revenue opportunities.

Partners can more easily provide seamless and scalable managed disaster recovery solutions for customer end-user computing environments without capital-heavy investments in data centers or hardware. These services can also be used to help customers migrate from an on-premises workload to a cloud-based workload with minimal impact to user experience or help them expand their footprint to cover more regions while providing the same flexibility and operational practices as on-premises environments.

These benefits don’t stop with end-user computing services, though. Partners can provide managed DNS services for their customers as a standalone offering or as part of a DRaaS offering.  These services can also be used as part of a migration service to move on-premises workloads into managed VMware Cloud on AWS instances.

Combining VMware Cloud on AWS with native Amazon services opens up limitless opportunities for VMware Cloud partners and their customers.