
Intro to Google Cloud VMware Engine – Network and Connectivity Overview

This post is the fourth in a series on Google Cloud VMware Engine and Google Cloud Platform. In previous posts, I’ve shown you how to deploy a private cloud in Google Cloud VMware Engine, connect the private cloud to a VPC, and deploy a bastion host for managing your environment. This post walks through the basic network configuration of a newly deployed private cloud, as well as the methods for, and a few notes on, connecting to a Google Cloud VMware Engine private cloud. We’ll pause on deploying anything new and take a closer look at the environment we have built so far.

Other posts in this series:

Networking Overview

Google Cloud VMware Engine Overview by Google, licensed under CC BY 3.0

A private cloud running in Google Cloud VMware Engine consists of VMware vSphere, vCenter, vSAN, NSX-T, and optionally HCX, all running on top of Google Cloud infrastructure. Let’s take a peek at a new deployment.

VDS and N-VDS Configuration

Configuration of the single VDS is basic; it is used to provide connectivity for HCX. The VLANs listed are locally significant to Google’s infrastructure and not something we need to worry about.

The virtual switch settings for one of the ESXi hosts provide a better picture of the networking landscape. Here we can see both the vanilla VDS and the N-VDS managed by NSX-T. Almost all of the networking configuration we will perform will be in NSX-T, but I wanted to show the underlying configuration for curious individuals.

We’ll look at NSX-T further below, but this screenshot from the NSX-T manager is a simple visualization of the deployed N-VDS.

VMkernel and vmnic Configuration

VMkernel configuration is straightforward, with dedicated adapters for management, vSAN, and vMotion. The IP addresses correspond with the management, vSAN, and vMotion subnets that were automatically created when the private cloud was deployed.

There are four 25 Gbps vmnics (physical adapters) in each host, providing an aggregate of 100 Gbps per host. Two vmnics are dedicated to the VDS, and two are dedicated to the N-VDS.
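
If you would rather poke at this yourself than click through the vSphere Client, below is a minimal pyVmomi sketch that lists each host’s distributed switches (both the VDS and the N-VDS), VMkernel adapters, and physical NICs. The vCenter FQDN, username, and password are placeholders; substitute the values shown in the VMware Engine portal.

```python
# Minimal pyVmomi sketch: inventory host networking in a GCVE private cloud.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcsa-XXXX.gve.goog", user="CloudOwner@gve.local",
                  pwd="your-password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net = host.config.network
        print(host.name)
        for psw in net.proxySwitch:      # distributed switches (VDS and N-VDS)
            print("  switch:", psw.dvsName)
        for vmk in net.vnic:             # VMkernel adapters
            print("  vmk:", vmk.device, vmk.spec.ip.ipAddress)
        for pnic in net.pnic:            # physical adapters
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0
            print("  pnic:", pnic.device, f"{speed} Mbps")
finally:
    Disconnect(si)
```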

NSX-T Configuration

The out-of-the-box NSX-T configuration for Google Cloud VMware Engine should look very familiar to you if you have ever deployed VMware Cloud Foundation. The T0 router has redundant BGP connections to Google’s infrastructure.

There are no NAT rules configured, and the firewall has a default allow-any-any rule. This may not be what you were expecting, but by the end of this post, it should be clearer. We will look at traffic flows in the Networking Capabilities section below.

The configured transport zones consist of three VLAN TZs, and a single overlay TZ. The VLAN TZs facilitate the plumbing between the T0 router and Google infrastructure for BGP peering. The TZ-OVERLAY zone is where workload segments will be placed.

Finally, there is one edge cluster consisting of two edge nodes to host the NSX-T logical routers.
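
If you prefer to verify this out-of-the-box configuration programmatically rather than in the NSX-T manager UI, here is a hedged, read-only sketch against the NSX-T APIs. The manager FQDN and credentials are placeholders; use the values from the VMware Engine portal.

```python
# Read-only check of the default NSX-T configuration described above.
# Manager FQDN and credentials are placeholders for your environment.
import requests

NSX = "https://nsx-XXXX.gve.goog"
AUTH = ("admin", "your-password")

def get(path):
    r = requests.get(f"{NSX}{path}", auth=AUTH, verify=False)  # lab only
    r.raise_for_status()
    return r.json()

# Transport zones (expect three VLAN TZs and one overlay TZ)
for tz in get("/api/v1/transport-zones")["results"]:
    print("transport zone:", tz["display_name"], tz["transport_type"])

# Edge clusters and their member count
for ec in get("/api/v1/edge-clusters")["results"]:
    print("edge cluster:", ec["display_name"], "members:", len(ec.get("members", [])))

# Gateway firewall policies (the default rule is allow-any-any)
for pol in get("/policy/api/v1/infra/domains/default/gateway-policies")["results"]:
    print("gateway policy:", pol["display_name"], pol.get("category"))
```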

Networking Capabilities

Now that we’ve peeked behind the curtain, let’s talk about what you can actually do with your private cloud. This is by no means an exhaustive list, but here are some common use cases:

  • Create workload segments in NSX-T (see the sketch after this list)
  • Expose VMs or services to the internet via public IP
  • Leverage NSX-T load balancing capabilities
  • Create north-south firewall policies with the NSX-T gateway firewall
  • Create east-west firewall policies (i.e., micro-segmentation) with the NSX-T distributed firewall
  • Access and consume Google Cloud native services
  • Migrate VMs from your on-premises data center to your Google Cloud VMware Engine private cloud with VMware HCX
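
As a taste of the automation to come, here is a hedged sketch of the first item on that list: creating a workload segment through the NSX-T Policy API. The manager FQDN, credentials, segment name, gateway CIDR, Tier-1 gateway ID, and transport zone path are all placeholders; look up the real IDs in your own NSX-T manager first.

```python
# Sketch: create (or update) a workload segment via the NSX-T Policy API.
# All names, paths, and credentials below are placeholders.
import requests

NSX = "https://nsx-XXXX.gve.goog"
AUTH = ("admin", "your-password")

segment_id = "workload-segment-01"
body = {
    "display_name": segment_id,
    "subnets": [{"gateway_address": "10.100.0.1/24"}],
    "connectivity_path": "/infra/tier-1s/Tier1",  # path to your Tier-1 gateway
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/TZ-OVERLAY",  # overlay TZ (ID is often a UUID)
}

r = requests.patch(f"{NSX}/policy/api/v1/infra/segments/{segment_id}",
                   json=body, auth=AUTH, verify=False)  # lab only
r.raise_for_status()
print("segment created/updated:", segment_id)
```

PATCH against the Policy API behaves as create-or-update, which keeps the call idempotent and handy for lab automation.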

I will be covering many of these topics in future posts, including automation examples. Next, let’s look at the options for ingress and egress traffic.

Egress Traffic

Google Cloud VMware Engine Egress Traffic Flows by Google, licensed under CC BY 3.0

One of the strengths of Google Cloud VMware Engine is that it provides you with options. As you can see on this diagram, you have three options for egress traffic:

  1. Egress through the Google Cloud VMware Engine internet gateway
  2. Egress through an attached VPC
  3. Egress through your on-premises data center via Cloud Interconnect or Cloud VPN

In Deploying a Private Cloud with HCX, I walked through the steps to enable Internet Access and Public IP Service for your private cloud. This is all that is needed to provide egress internet access through the internet gateway. Internet-bound traffic will be routed from the T0 router to the internet gateway, which NATs all traffic behind a public IP.
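
A quick way to confirm which egress path (and which NAT address) a VM is actually using is to check the source IP the internet sees. The snippet below uses api.ipify.org as an example echo service; any equivalent endpoint works.

```python
# Print the public source IP this VM's internet-bound traffic is NATed behind.
import requests

public_ip = requests.get("https://api.ipify.org", timeout=10).text
print("egress source IP:", public_ip)
```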

Egress through an attached VPC or your on-premises data center requires additional steps that are beyond the scope of this post, but I will provide documentation links for these scenarios at the end.

Ingress Traffic

Google Cloud VMware Engine Ingress Traffic Flows by Google, licensed under CC BY 3.0

Ingress traffic to Google Cloud VMware Engine follows similar paths as egress traffic. You can ingress via the public IP service, connected VPC, or through your on-premises data center. Using the public IP service is the least complicated option and requires that you’ve enabled Public IP Service for your private cloud.

Public IPs are not assigned directly to VMs. Instead, a public IP is allocated and NATed to a private IP in your private cloud. You can allocate a public IP in the Google Cloud VMware Engine portal by supplying a name for the IP allocation, the region, and the private address.

Connecting to your Private Cloud

My previous post, Deploying a Private Cloud with HCX, outlines the steps to set up client VPN access to your private cloud, and Bastion Host Access with IAP provides an example bastion host setup for managing your private cloud. These are “day 1” options for connectivity, so you will likely need some other method to connect your on-premises data center to your Google Cloud VMware Engine environment. On my personal blog, I covered cloud connectivity options in Cloud Connectivity 101, and many of the methods outlined in that post are available for connecting to Google Cloud VMware Engine. Today, your options are to use Cloud Interconnect or an IPsec tunnel via Cloud VPN or the NSX-T IPsec VPN.

In our lab, we are lucky to have a connection to Megaport, so I am using Partner Interconnect for my testing with Google Cloud VMware Engine. This is a very easy solution for connecting to the cloud, and Megaport’s documentation provides simple step-by-step instructions to get up and running. Once complete, BGP peering will be established between the Megaport Cloud Router and a Google Cloud Router.

Advertising Routes to Google Cloud VMware Engine

VPC peering in Google Cloud does not support transitive routing, so the routes learned from the peered VMware Engine network are not re-advertised to my on-premises network by default. This means I had to add a custom advertised IP range covering my Google Cloud VMware Engine subnets to the Google Cloud Router. After adding this configuration, I was able to ping IPs in my private cloud. You will also need to configure your DNS server to resolve queries for gve.goog in order to access vCenter, NSX-T, and HCX by their hostnames.
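
For reference, here is a hedged sketch of what that custom advertisement looks like through the Compute Engine API, using the google-api-python-client discovery client with Application Default Credentials. The project, region, router name, and CIDR are placeholders for my lab values, and the call patches the router’s BGP block wholesale, so in practice you may want to read the current configuration first and merge your changes into it.

```python
# Sketch: add custom advertised IP ranges to the Cloud Router behind the
# interconnect/VPN. Project, region, router, and CIDR are placeholders.
from googleapiclient import discovery  # pip install google-api-python-client

compute = discovery.build("compute", "v1")  # uses Application Default Credentials

project, region, router = "my-project", "us-east4", "my-cloud-router"

body = {
    "bgp": {
        "advertiseMode": "CUSTOM",
        "advertisedGroups": ["ALL_SUBNETS"],  # keep advertising the VPC subnets too
        "advertisedIpRanges": [
            {"range": "10.200.0.0/22",        # your VMware Engine subnets
             "description": "GCVE private cloud networks"}
        ],
    }
}

op = compute.routers().patch(
    project=project, region=region, router=router, body=body).execute()
print("patch operation:", op["name"])
```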

ICMP in Google Cloud VMware Engine

One nuance in Google Cloud VMware Engine that threw me off is that ICMP is not supported by the internal load balancer, which is in the path for egress traffic if you are using the internet gateway. Trying to ping 8.8.8.8 will fail, even if your private cloud is correctly connected to the internet. To test internet connectivity from a VM in your private cloud, use another tool like curl or follow the instructions here to install tcpping for testing.
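
Because ICMP is dropped on that path, a TCP-based check is a more reliable signal. Here is a small Python alternative to curl or tcpping that attempts a TCP handshake with Google’s public DNS endpoints on port 443; any reachable host and port pair will do.

```python
# Test outbound internet connectivity over TCP, since ICMP will not work here.
import socket
import time

def tcp_check(host: str, port: int, timeout: float = 3.0) -> None:
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            elapsed = (time.monotonic() - start) * 1000
            print(f"TCP {host}:{port} reachable in {elapsed:.1f} ms")
    except OSError as exc:
        print(f"TCP {host}:{port} failed: {exc}")

tcp_check("8.8.8.8", 443)
tcp_check("dns.google", 443)
```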

Next Steps

Next, we will stage networking segments and connect HCX to begin migrating workloads to Google Cloud VMware Engine. I highly recommend reading the Private cloud networking for Google Cloud VMware Engine whitepaper, which covers many of the subjects I’ve touched on in this post in greater detail.