VMware Cloud Director service with Google Cloud VMware Engine – Part 1

VMware recently announced the general availability of support for Cloud Director service (CDS) with Google Cloud VMware Engine as an SDDC endpoint. As a provider, you can use a multi-tenant Google Cloud VMware Engine private cloud as a production SDDC endpoint, or as a disaster recovery location for an on-premises VMware Cloud Director deployment or for a production SDDC endpoint running on another hyperscaler.

This blog is the first in a series demonstrating how to set up the environment to implement CDS with Google Cloud VMware Engine for multi-tenancy. The reference architecture below depicts the target configuration as shown in the setup documentation.

The following details describe the function of each component covered in the demo and why it is required:

  • Provider Project – The provider project is the GCP project that you, as the provider, own. Google Cloud VMware Engine is deployed in this project, and the tenant projects connect to it for any other services you may offer, such as services backed by Google native services. In the diagram below, the provider project is the green box in the upper left-hand corner. (A minimal sketch of this setup follows the list.)
  • Provider-Owned Customer/Tenant Project – The provider-owned customer/tenant project is the GCP project that you, as the provider, own and manage for each tenant that is onboarded. The project serves as the landing spot for all ingress and egress traffic from the tenant’s presence in Google Cloud VMware Engine.
    • These projects keep each tenant’s traffic isolated: the traffic traverses a VPN from the tenant’s NSX-T T1 gateway inside Google Cloud VMware Engine to a VPN device inside the tenant’s project.
    • This project can be the tenant’s only presence in GCP, or the tenant can have its own GCP presence that the project is peered to. In the diagram below, the top middle green box depicts a tenant without any other GCP presence, so all ingress and egress traffic leaves this project without traversing any other hops in GCP. The middle purple box depicts a tenant that also has its own GCP presence (bottom middle purple box), so all ingress and egress traffic flows from the provider-owned tenant project to the customer’s GCP project via a peering connection. (A sketch of the peering setup also follows the list.)
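
For illustration, the following is a minimal sketch of the provider project setup using the gcloud CLI. All project IDs and names are hypothetical placeholders; consult the setup documentation linked below for the authoritative steps.

```
# Create the provider project (project ID is a hypothetical placeholder)
gcloud projects create provider-project-example --name="CDS Provider Project"

# Enable the VMware Engine API in the provider project
gcloud services enable vmwareengine.googleapis.com \
    --project=provider-project-example
```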
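Similarly, here is a hedged sketch of creating a provider-owned tenant project and peering its VPC to the customer’s own GCP project. Again, all project and network names are hypothetical, and the VPN device deployed in the tenant project (terminating the tunnel from the tenant’s NSX-T T1) is provider-specific and omitted here.

```
# Create the provider-owned tenant project (hypothetical ID)
gcloud projects create tenant-a-project --name="Tenant A Landing Project"

# Create a VPC in the tenant project to terminate tenant traffic
gcloud compute networks create tenant-a-vpc \
    --project=tenant-a-project --subnet-mode=custom

# Peer the tenant project's VPC to the customer's own GCP project
# (only needed when the tenant has its own GCP presence)
gcloud compute networks peerings create tenant-a-to-customer \
    --project=tenant-a-project \
    --network=tenant-a-vpc \
    --peer-project=customer-owned-project \
    --peer-network=customer-vpc
```

Note that VPC peering must be established from both sides, so a matching peering is also created from the customer’s project before traffic can flow.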

See the following video demo on how to set up the provider project, configure tenant networking, and create a Google Cloud VMware Engine private cloud to associate with CDS.
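
As a rough illustration of the private cloud creation step shown in the demo, the gcloud command below sketches what provisioning a Google Cloud VMware Engine private cloud can look like. The names, zone, and node sizing are hypothetical; the documentation linked below covers the exact values required for a CDS association.

```
# Provision a VMware Engine private cloud (all values are placeholders)
gcloud vmware private-clouds create tenant-a-private-cloud \
    --project=provider-project-example \
    --location=us-west1-a \
    --cluster=management-cluster \
    --node-type-config=type=standard-72,count=3 \
    --management-range=192.168.0.0/24 \
    --vmware-engine-network=provider-ven
```

Private cloud provisioning can take some time to complete, so plan for it before associating the private cloud with CDS.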

For the full instructions on how to configure VMware Cloud Director service with Google Cloud VMware Engine, see our documentation.

Disaster Recovery and Migration Support

Disaster recovery and migration into Google Cloud VMware Engine are supported by VMware Cloud Director Availability, from on-premises deployments of VMware Cloud Director, from other hyperscaler endpoints, and from other Google Cloud VMware Engine private cloud endpoints. For more information on how to configure Google Cloud VMware Engine with VMware Cloud Director Availability, see DR and Migrations to VMware Cloud Director service Using VMware Cloud Director Availability.