
Intro to Google Cloud VMware Engine – Common Networking Scenarios, Part 2

This is the seventh and final post in a series on Google Cloud VMware Engine and Google Cloud Platform. This post covers common networking scenarios, including accessing cloud-native services, viewing routing information, VPN connectivity, notes on DNS, and additional helpful resources.

Accessing Cloud-Native Services

The last addition to this example that was started in the previous post is to include a cloud-native service. I’ve chosen to use Cloud Storage because it is a simple example, and it provides incredible utility. This diagram illustrates my desired configuration.

My goal is to stage a simple static website in a Google Storage bucket, then mount the bucket as a read-only filesystem on each of my webservers. The bucket will be mounted to /var/www/html and will replace the testing page that had been staged on each server. You may be thinking, “This is crazy. Why not serve the static site directly from Google Storage?!” This is a valid question, and my response is that this is merely an example, not necessarily a best practice. I could have chosen to use Google Filestore instead of Google Storage as well. This illustrates that there is more than one way to do many things in the cloud.

The first step is to create a Google Storage bucket, which I completed with this simple Terraform code:

provider "google" {
  project = var.project
  region  = var.region
  zone    = var.zone
}

resource "google_storage_bucket" "melliott-vmw-static-site" {
  name          = "melliott-vmw-static-site"
  location      = "US"
  force_destroy = true
  storage_class = "STANDARD"
}

resource "google_storage_bucket_acl" "melliott-vmw-static-site-acl" {
  bucket = google_storage_bucket.melliott-vmw-static-site.name

  # Example ACL entry granting public read access; adjust to your requirements
  role_entity = [
    "READER:allUsers",
  ]
}

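With the bucket in place, the site content can be uploaded with gsutil. A quick sketch, assuming the site files sit in a local directory (the local path here is hypothetical):

```shell
# Recursively copy the static site content into the bucket (-m parallelizes the copy)
gsutil -m cp -r ./static-site/* gs://melliott-vmw-static-site/
```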
Next, I found a simple static website example, which I stored in the bucket and modified for my needs. After staging this, I completed the following steps on each webserver to mount the bucket.

  • Install the Google Cloud SDK
  • Install gcsfuse, which is used to mount Cloud Storage buckets in Linux via FUSE
  • Authenticate to Google Cloud with gcloud auth application-default login. This will provide a URL that will need to be pasted into a browser to complete authentication. The verification code returned will then need to be pasted back into the prompt on the webserver.
  • Remove existing files in /var/www/html
  • Mount the bucket as a read-only filesystem with gcsfuse -o allow_other -o ro [bucket-name] /var/www/html
root@ubuntu:/var/www# gcsfuse -o allow_other -o ro melliott-vmw-static-site /var/www/html
2021/05/04 16:19:10.680365 Using mount point: /var/www/html
2021/05/04 16:19:10.686743 Opening GCS connection...
2021/05/04 16:19:11.037846 Mounting file system "melliott-vmw-static-site"...
2021/05/04 16:19:11.042605 File system has been successfully mounted.
root@ubuntu:/var/www# ls /var/www/html
assets  error  images  index.html  LICENSE.MD  README.MD

After mounting the bucket and running an ls on /var/www/html, I can see that my static website is mounted correctly.
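Note that a gcsfuse mount issued from the command line will not survive a reboot. gcsfuse supports mounting via /etc/fstab, so a persistent, read-only mount could look something like this (a sketch using the bucket name from this example; verify the options against the gcsfuse documentation for your version):

```
# /etc/fstab entry: mount the bucket read-only at boot via gcsfuse
melliott-vmw-static-site /var/www/html gcsfuse ro,allow_other,_netdev 0 0
```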

Browsing to the public IP fronting my load balancer VIP now displays my static website, hosted in a Google Storage bucket. Pretty snazzy!

Private Google Access

My Google Cloud VMware Engine environment has internet access enabled, so native services are accessed via the internet gateway. If you don’t want to allow internet access for your environment, you can still access native services via Private Google Access. Much of the GCP documentation for this feature focuses on access to Google APIs from locations other than your private cloud, but it is not too difficult to apply these practices to Google Cloud VMware Engine.

Private Google Access is primarily enabled by DNS, but you still need to enable this feature for any configured VPCs. The domain names used for this service are private.googleapis.com and restricted.googleapis.com. I was able to resolve both of these from my VMs, but my VMs are configured to use the resolvers in my Google Cloud VMware Engine environment. If you cannot resolve these hostnames, make sure you are using the DNS resolvers provided with your private cloud. As a reminder, these server addresses can be found under Private Cloud DNS Servers in the summary page for your private cloud. You can find more information in the Private Google Access documentation.
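Private Google Access itself is toggled per subnet. A sketch of enabling it with gcloud (the subnet name and region below are placeholders):

```shell
# Enable Private Google Access on an existing subnet
gcloud compute networks subnets update my-subnet \
  --region=us-west2 \
  --enable-private-ip-google-access
```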

Viewing Routing Information

Knowing where to find routing tables is incredibly helpful when troubleshooting connectivity issues. There are a handful of places to look in GCP and Google Cloud VMware Engine to find this information.

VPC Routes

You can view routes for a VPC in the GCP portal by browsing to VPC networks, clicking on the desired VPC, then clicking on the Routes tab. If you are using VPC peering, you will notice a message that says, “This VPC network has been configured to import custom routes using VPC Network Peering. Any imported custom dynamic routes are omitted from this list, and some route conflicts might not be resolved. Please refer to the VPC Network Peering section for the complete list of imported custom routes, and the routing order for information about how GCP resolves conflicts.” Basically, this message says that you will not see routes for your private cloud in this table.

VPC Network Peering Routes

To see routes for your Google Cloud VMware Engine environment, browse to VPC Network Peering and choose the servicenetworking-googleapis-com entry for your VPC. You will see routes for your private cloud under Imported Routes and any subnets in your VPC under Exported Routes. You can also view these routes using the gcloud tool.

  • View imported routes: gcloud compute networks peerings list-routes servicenetworking-googleapis-com --network=[VPC Name] --region=[REGION] --direction=INCOMING
  • View exported routes: gcloud compute networks peerings list-routes servicenetworking-googleapis-com --network=[VPC Name] --region=[REGION] --direction=OUTGOING

Example results:

melliott@melliott-a01 gcp-bucket % gcloud compute networks peerings list-routes servicenetworking-googleapis-com  --network=gcve-usw2 --region=us-west2 --direction=INCOMING
DEST_RANGE         TYPE                   NEXT_HOP_REGION  PRIORITY  STATUS
...                DYNAMIC_PEERING_ROUTE  us-west2         0         accepted
(13 additional DYNAMIC_PEERING_ROUTE entries, all us-west2, priority 0, accepted)


NSX-T Routes

Routing and forwarding tables can be downloaded from the NSX-T Manager web interface or via the API. It's also reasonably easy to grab the routing table with PowerCLI. The following example displays the routing table from the T0 router in my Google Cloud VMware Engine environment.

Import-Module VMware.PowerCLI
# Replace with the NSX Manager address for your private cloud
Connect-NsxtServer -Server nsx.example.com
$t0s = Get-NsxtPolicyService -Name com.vmware.nsx_policy.infra.tier0s
$t0_name = $t0s.list().results.display_name
# The T0 routing table is exposed as a separate policy service
$t0_rt = Get-NsxtPolicyService -Name com.vmware.nsx_policy.infra.tier_0s.routing_table
$t0_rt.list($t0_name).results.route_entries | Select-Object network,next_hop,route_type | Sort-Object -Property network

network                  next_hop         route_type
-------                  --------         ----------
(route entries omitted; route_type values included t0s, t1c, and t0c)

VPN Connectivity

I haven’t talked much about VPNs in this blog series, but they are an important component that deserves more attention. Provisioning a VPN to Google Cloud Platform is an easy way to connect to your private cloud if you are waiting on a Cloud Interconnect to be installed. It can also serve as backup connectivity if your primary connection fails. NSX-T can terminate an IPsec VPN, but I would recommend using Cloud VPN instead. This will ensure you have connectivity to any Google Cloud Platform-based resources along with Google Cloud VMware Engine.

I’ve put together some example Terraform code to provision the necessary VPN-related resources in GCP. The example code is available in the gcve-ha-vpn subdirectory. Using this example will create the minimum configuration needed to stand up a VPN to Google Cloud Platform/Google Cloud VMware Engine. It assumes that you have already created a VPC and configured peering with your private cloud. This example does not create a redundant VPN solution, but it can easily be extended to do so by creating a secondary Cloud Router, interface, and BGP peer. You can find more information on HA VPN topologies in the GCP documentation. After using the example code, you will still need to configure the VPN settings at your site. Google provides configuration examples for several different vendors at Using third-party VPNs with Cloud VPN. I’ve written previously about VPNs for cloud connectivity, as well as other connection methods, in Cloud Connectivity 101.
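To give a feel for what an HA VPN with BGP involves, the core GCP resources look roughly like the Terraform sketch below. This is not the contents of the example repository, just an illustration: the names, ASNs, link-local addresses, and peer IP are placeholders, and only one of the two tunnels an HA VPN normally uses is shown.

```
# HA VPN gateway attached to the VPC that is peered with the private cloud
resource "google_compute_ha_vpn_gateway" "gcve_vpn_gw" {
  name    = "gcve-vpn-gw"
  network = "gcve-usw2"
  region  = "us-west2"
}

# Cloud Router that runs BGP with the on-premises device
resource "google_compute_router" "gcve_router" {
  name    = "gcve-vpn-router"
  network = "gcve-usw2"
  region  = "us-west2"
  bgp {
    asn = 64514
  }
}

# Represents the on-premises VPN device
resource "google_compute_external_vpn_gateway" "onprem" {
  name            = "onprem-gw"
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
  interface {
    id         = 0
    ip_address = "203.0.113.10"
  }
}

# One tunnel; a redundant design adds a second tunnel, interface, and peer
resource "google_compute_vpn_tunnel" "tunnel0" {
  name                            = "gcve-tunnel0"
  region                          = "us-west2"
  vpn_gateway                     = google_compute_ha_vpn_gateway.gcve_vpn_gw.id
  peer_external_gateway           = google_compute_external_vpn_gateway.onprem.id
  peer_external_gateway_interface = 0
  vpn_gateway_interface           = 0
  shared_secret                   = var.shared_secret
  router                          = google_compute_router.gcve_router.id
}

resource "google_compute_router_interface" "if0" {
  name       = "if-tunnel0"
  router     = google_compute_router.gcve_router.name
  region     = "us-west2"
  ip_range   = "169.254.0.1/30"
  vpn_tunnel = google_compute_vpn_tunnel.tunnel0.name
}

resource "google_compute_router_peer" "peer0" {
  name            = "peer-tunnel0"
  router          = google_compute_router.gcve_router.name
  region          = "us-west2"
  peer_ip_address = "169.254.0.2"
  peer_asn        = 64515
  interface       = google_compute_router_interface.if0.name
}
```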

DNS Notes

I’ve saved the most important topic for last. DNS is a crucial component when operating in the cloud, so here are a few tips and recommendations to make sure you’re successful. Cloud DNS has a 100% uptime SLA, which is not something you see very often. This service is so crucial to GCP that Google has essentially guaranteed that it will always be available. That is the type of guarantee that provides peace of mind, especially when so many other services and applications rely on it.

In terms of Google Cloud VMware Engine, you must be able to properly resolve the hostnames for vCenter, NSX, HCX, and other applications deployed in your environment. These topics are covered in detail in the Google Cloud VMware Engine DNS documentation.

The basic gist is this: the DNS servers running in your Google Cloud VMware Engine environment will be able to resolve A records for the management applications running in your private cloud (vCenter, NSX, HCX, etc.). If you have configured VPC peering with your private cloud, Cloud DNS will be automatically configured to forward requests for your private cloud’s hostnames to the Google Cloud VMware Engine DNS servers. This will allow you to resolve A records from your VPC or bastion host. The last step is to make sure that you can properly resolve Google Cloud VMware Engine-related hostnames in your local environment. If you are using Windows Server for DNS, you need to configure a conditional forwarder for your private cloud’s domain (gve.goog), using the DNS servers running in Google Cloud VMware Engine. Other scenarios, like configuring BIND, are covered in the documentation mentioned above.
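For BIND, the equivalent of a Windows conditional forwarder is a forward-only zone. A sketch of the named.conf stanza, assuming the gve.goog domain and with placeholder addresses standing in for your private cloud’s DNS servers:

```
// named.conf: forward queries for the private cloud domain to the GCVE resolvers
zone "gve.goog" {
    type forward;
    forward only;
    forwarders { 192.168.0.10; 192.168.0.11; };
};
```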

Wrap Up

This is a doozy of a post, so I won’t waste too many words here. I genuinely hope you enjoyed this blog series. There will definitely be more Google Cloud VMware Engine-related blogs in the future, and you can hit me up any time @NetworkBrouhaha and let me know what topics you’d like to see covered. Thanks for reading!

Helpful Resources

You can find a hands-on lab for Google Cloud VMware Engine at the VMware Hands-on Labs site by searching for HOL-2179-01-ISM.