I previously shared this post on the LinkedIn publishing platform and my personal blog at HumairAhmed.com. In my prior blog post, I discussed how VMware Cloud on AWS (VMC on AWS) gives customers the best of both worlds for their move to a Software Defined Data Center (SDDC): the leading compute, storage, and network virtualization stack for enterprises, deployed on dedicated, elastic, bare-metal, and highly available AWS infrastructure. Another benefit of VMC on AWS, and the focus of this post, is that you can easily establish a global footprint by deploying multiple VMC SDDCs in different regions.
As mentioned in my prior post, two AWS regions are available today, US West (Oregon) and US East (N. Virginia), with more regions planned for the near future. By clicking a button and deploying SDDCs in different regions, you can easily build a global SDDC infrastructure backed by all the vSphere, vSAN, and NSX functionality you love.
Below you can see I’ve already linked my VMC organization to my AWS account, as explained in my prior post, and deployed two SDDCs, both inherently running vSphere, vSAN, and NSX. One SDDC is deployed in the AWS US West (Oregon) region and the other is deployed in the US East (N. Virginia) region.
Below is my lab setup within VMC and respective connectivity to my on-prem lab. I’ve connected the two SDDCs in VMC via IPSEC VPN. My SDDC deployed in the AWS US West (Oregon) region is also connected via IPSEC VPN to my on-prem environment in Palo Alto, CA.
It’s important to note here that all the networking capabilities within VMC, including the IPSEC VPN used here, are provided by NSX. The workloads in VMC sit on NSX logical networks, the NSX DLR provides east/west distributed routing, and the NSX Edge provides north/south connectivity out the AWS Internet Gateway as well as edge services such as firewall, NAT, and VPN. Below, I’m leveraging IPSEC VPN on the NSX Edge to connect to another SDDC in another region and also to connect to my local on-prem environment.
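The east/west versus north/south split can be sketched as a simple route decision: destinations on the DLR-attached logical networks stay east/west, and everything else (internet, VPN peers) exits via the NSX Edge. This is an illustrative model only, not NSX code; the only subnet taken from the post is the US West VMC_App network.

```python
import ipaddress

# Logical networks attached to the DLR in one SDDC. Only the VMC_App
# subnet below appears in the post; a real SDDC would list all of them.
LOGICAL_NETWORKS = [
    ipaddress.ip_network("10.61.4.16/28"),  # VMC_App, US West (Oregon)
]

def routed_by(dst: str) -> str:
    """East/west traffic stays on the DLR; anything else exits
    north/south via the NSX Edge (default route)."""
    ip = ipaddress.ip_address(dst)
    if any(ip in net for net in LOGICAL_NETWORKS):
        return "DLR (east/west)"
    return "Edge (north/south)"

print(routed_by("10.61.4.17"))  # DLR (east/west)
print(routed_by("8.8.8.8"))     # Edge (north/south)
```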
At AWS re:Invent 2017 new capabilities of L2VPN and AWS Direct Connect were also announced. These new capabilities provide for additional use cases and capabilities such as high-speed private network connectivity from on-prem directly to VMC, stretched network support, and faster cold and live application migration capabilities. I will leave these to discuss for a follow-up post.
Below you can see the logical networks I’ve created in the VMC SDDCs in both the US West (Oregon) and US East (N. Virginia) regions respectively.
In the below Compute Gateway (CGW) IPSEC VPN configuration for both SDDCs, you can see I am exposing the VMC_App network between the SDDCs. From the logical networks above, you can see the VMC_App network in the SDDC in the US West (Oregon) region has a subnet of 10.61.4.16/28, and the VMC_App network in the SDDC in the US East (N. Virginia) region has a subnet of 10.71.4.16/28. VMs/workloads on these networks can communicate with each other across SDDCs via the policy-based IPSEC VPN configuration and respective security policies shown further below.
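The two /28 subnets above can be sanity-checked with Python's standard `ipaddress` module. The subnet values and VM addresses come from this post; the `matches_policy` helper is just an illustration of how a policy-based IPSEC VPN treats traffic as "interesting" only when it matches the configured selectors.

```python
import ipaddress

# Subnets from the lab setup above
app_west = ipaddress.ip_network("10.61.4.16/28")  # VMC_App, US West (Oregon)
app_east = ipaddress.ip_network("10.71.4.16/28")  # VMC_App, US East (N. Virginia)

# Each /28 yields 14 usable host addresses (16 minus network/broadcast)
print(app_west.num_addresses - 2)  # 14

# The App VM IPs shown later in the post fall inside their subnets
print(ipaddress.ip_address("10.61.4.17") in app_west)  # True
print(ipaddress.ip_address("10.71.4.17") in app_east)  # True

# Policy-based IPSEC VPN: traffic is sent through the tunnel only when
# source and destination match the configured local/peer selectors.
def matches_policy(src, dst, local_net, peer_net):
    return (ipaddress.ip_address(src) in local_net
            and ipaddress.ip_address(dst) in peer_net)

print(matches_policy("10.61.4.17", "10.71.4.17", app_west, app_east))  # True
```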
Note, the SDDC in the US West (Oregon) region is also connected to the local data center in Palo Alto, CA via another IPSEC VPN configuration. In this configuration the VMC_Web network is exposed, as there are some on-prem workloads that need to communicate with the Web VMs in the VMC SDDC in the US West (Oregon) region.
SDDC in US West (Oregon)
SDDC in US East (N. Virginia)
The respective security policies in my VMC lab environment allow ICMP communication between the respective workloads across the VMC SDDCs, as well as ICMP communication from on-prem workloads; this configuration is shown below.
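The effect of these security policies can be modeled as a default-deny rule table: only explicitly permitted flows pass. This is a minimal sketch, not the NSX API; the App subnets are from the post, and the bidirectional ICMP rules mirror the lab policy (the on-prem-to-Web rule is omitted for brevity).

```python
import ipaddress

# Hypothetical rule table mirroring the lab's security policy:
# allow ICMP between the VMC_App subnets in the two SDDCs.
RULES = [
    ("10.61.4.16/28", "10.71.4.16/28", "icmp"),  # West App -> East App
    ("10.71.4.16/28", "10.61.4.16/28", "icmp"),  # East App -> West App
]

def allowed(src, dst, proto):
    """Default-deny: permit only traffic matching an explicit rule."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(
        src_ip in ipaddress.ip_network(s)
        and dst_ip in ipaddress.ip_network(d)
        and proto == p
        for s, d, p in RULES
    )

print(allowed("10.61.4.17", "10.71.4.17", "icmp"))  # True
print(allowed("10.61.4.17", "10.71.4.17", "tcp"))   # False: only ICMP allowed
```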
Below are two App VMs on the VMC_App NSX logical network in each region. The VM in the SDDC in the US West (Oregon) region has an IP address of 10.61.4.17, and the VM in the SDDC in the US East (N. Virginia) region has an IP address of 10.71.4.17.
Below you can see the App VMs in the different VMC SDDCs and respective AWS Regions can communicate with each other.
Additionally, per my VMC lab configuration shown further above, my local on-prem workload in Palo Alto, CA, with an IP address of 10.114.223.70, can communicate with my Web VM with IP address 10.61.4.1 in the SDDC in the US West (Oregon) region.
As you can see, with VMC on AWS, you can easily have a global footprint by deploying multiple VMC SDDCs in different regions. Connectivity is possible between SDDCs in different regions and also to an on-prem environment.
For more information on VMC on AWS and how to get started, check out my prior post and the VMC on AWS Documentation page.
How about showing the on-prem VM (10.114.223.70) pinging the US East (N. Virginia) VM (10.71.4.17)? What about dynamic routing between these three sites? Or do you see customers using these scenarios?