VMware and Google have been collaborating on a hybrid cloud for application platform and development teams. Both Google and VMware’s platforms are built on community-driven open-source technologies – namely Kubernetes, Envoy, and Istio. Having a common hybrid cloud foundation allows teams to run their applications on the optimal infrastructure and gives them more choice when modernizing existing applications or developing new cloud-native applications.

Digital transformation is rapidly changing the IT and application landscape. We are seeing a confluence of transformations happening simultaneously: hybrid clouds, microservice architectures, containerized applications, and service meshes, to name a few.

In this blog post, I will walk you through the architecture and specific use cases to illustrate the value a hybrid cloud deployment can deliver to application platform and development teams. We’ll do this by showing how a retail company can leverage many of these technology trends to help transform its business.

A Large Global Retailer Pursuing Digital Transformation

Our retailer has launched a digital business transformation initiative whose main goals are to become more agile and to leapfrog its competitors. It operates a global network of stores, with data centers and branch offices across multiple countries. These data centers and branches run hundreds of applications, including the retailer’s primary e-commerce application, which customers use to browse and shop for products. The e-commerce application is made up of several polyglot microservices and different databases. The retailer also offers its own branded credit card and loyalty program, so it handles payment card data subject to PCI DSS as well as its customers’ Personally Identifiable Information (PII).

Over the past few years, our retailer has built several new microservice applications. Most recently it developed a containerized e-commerce application. The retailer wants to deploy some of the e-commerce microservices on-premises and some on Google Cloud Platform to take advantage of Google services and Google’s global infrastructure, which will allow the retailer to meet application SLAs and data residency requirements. Let’s look at how the retailer may go about this journey.

Establish Hybrid Cloud Connectivity

The retailer is a global corporation, so it requires multiple entry points into Google Cloud across the world. First, our retailer must connect its corporate data centers and branches to Google Cloud Platform (GCP). It uses VMware SD-WAN by VeloCloud to establish more secure and optimized network connectivity between its on-premises sites and GCP.

The workflow to establish these connections is completely automated via the VMware SD-WAN Orchestrator portal. Now our retail organization has hybrid cloud connectivity between its on-premises sites and the GCP public cloud – without having to manually change VPN and network firewall configurations for GCP service endpoints. Having a hybrid cloud that can provide resources in regions around the globe allows the teams to more easily address application SLAs and data residency requirements. You can read this blog for more details about VMware SD-WAN support for GCP.

 

Consistent K8s-based Environments for Containerized Applications

Our retailer is a long-time VMware customer and has recently deployed VMware Enterprise PKS to run containerized applications on-premises. Enterprise PKS simplifies the lifecycle management of Kubernetes clusters on multiple clouds. Enterprise PKS also offers tight integration with VMware NSX Data Center, providing advanced network virtualization and micro-segmentation security to Kubernetes. My colleague Niran Even-Chen recently wrote an insightful blog post about “How Istio, NSX Service Mesh and NSX Data Center Fit Together.” The retailer also wants to deploy some of its microservices to Google Kubernetes Engine (GKE) on Google Cloud. With both environments in place, it has consistent Kubernetes-based environments for managing its containerized applications at scale, with bi-directional application portability between them.

 

Service Mesh Data Plane Across Hybrid Cloud K8s Clusters

Before the team deploys any microservices on GCP, they want to get a service mesh in place so they have a consistent way to discover, observe, control, and secure their services across environments. The team installs an Istio-based service mesh into each cluster and enables automatic Envoy sidecar injection, so each microservice gets a sidecar proxy when it is deployed.
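
To make that step concrete, here is a minimal sketch of how auto-injection is typically enabled, assuming a recent Istio release; the ecommerce namespace name is hypothetical:

```yaml
# Hypothetical namespace for the e-commerce microservices.
# The istio-injection label tells Istio to automatically inject an
# Envoy sidecar into every pod deployed into this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: ecommerce
  labels:
    istio-injection: enabled
```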

The Envoy sidecar proxies control all of the East-West traffic flowing between microservices inside the meshes, while standalone Envoy instances act as ingress and egress gateways that control the North-South traffic at the edge of each mesh.
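
As an illustrative sketch of the North-South side, an Istio Gateway resource can expose a service at the mesh edge; the host name, namespace, and resource names below are hypothetical:

```yaml
# Hypothetical Istio Gateway exposing the storefront at the edge of the mesh.
# Production deployments would typically terminate TLS here as well.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: storefront-gateway
  namespace: ecommerce
spec:
  selector:
    istio: ingressgateway    # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"
```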

Once Istio is installed in each cluster, the retailer can federate across any of the services running in those clusters. It can leverage Istio’s advanced routing to communicate easily across meshes and rely on mTLS encryption for more secure service-to-service communication across environments – all while using different identity and infrastructure providers for the on-premises and public cloud service meshes.
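
As a hedged example of the mTLS piece, a single mesh-wide policy can require mutual TLS for all workloads. The PeerAuthentication resource shown here is the API used by Istio 1.5 and later; older releases express the same intent with a MeshPolicy:

```yaml
# Require mTLS for all workload-to-workload traffic in the mesh.
# Applying the policy in the root namespace (istio-system by default)
# makes it mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```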

 

Consistent Operations and Security for Cloud-Native Applications and Data

In addition to enabling federation, Istio provides discovery, observability, control, and better security for all microservices running in the PKS and GKE clusters. Teams can define traffic management rules, global security policies, and application resiliency features (e.g., circuit breaking) – and apply them uniformly to their cloud-native applications. Many use cases become possible: for example, canary deployments with performance and health monitoring, or security policies that help protect customer PCI and PII data by default.

Federation across on-premises and public clouds, enabling consistent developer, operational, and security models for microservice applications running on Kubernetes.
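
To illustrate how a canary deployment and circuit breaking might look in practice, here is a sketch using Istio’s traffic-management resources; the checkout service, its version subsets, and the thresholds are hypothetical, and field names can vary slightly between Istio releases:

```yaml
# Canary rollout: 90% of traffic to checkout v1, 10% to the new v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: checkout
  namespace: ecommerce
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
        subset: v1
      weight: 90
    - destination:
        host: checkout
        subset: v2
      weight: 10
---
# Subset definitions plus a simple circuit breaker: eject an instance
# after repeated 5xx errors so failures don't cascade.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: checkout
  namespace: ecommerce
spec:
  host: checkout
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```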

Application Innovation with Google Cloud’s Anthos

The development teams that work on the retailer’s e-commerce application want to leverage services on Google Cloud’s Anthos to enhance the customer experience and deliver more value. The team can push inventory and sales data to Google’s BigQuery for data analytics. The business goal is to help the retailer avoid stockouts and better personalize the shopping experience for customers.

Application teams can deliver more value to their end-users by rapidly adding Google Cloud’s Anthos services to their applications – including databases, storage, data analytics, machine learning, and more.

This week at Google Cloud Next, we are discussing a proof of concept similar to the one described here. You can attend the session jointly presented by Pere Monclus, VMware CTO of Networking & Security, and Ines Envid, Group Product Manager at Google Cloud, or stop by the VMware booth to hear more about it. You may also want to head over to the Google Cloud blog to hear what the good folks at Google have to say about our partnership.

Some Final Words Specifically About VMware NSX Service Mesh

Independently of our collaboration with Google, the VMware service mesh team is focused on adding value above and beyond Istio and Envoy. VMware NSX Service Mesh extends the concept of a service mesh to the end-users who use the microservice applications (e.g., a retailer’s customers or suppliers), as well as to the data stores (e.g., MariaDB, MySQL, and MongoDB) and data elements the microservices interact with on behalf of these end-users. Extending the service mesh beyond services – to users and data – allows VMware enterprise customers to achieve end-to-end visibility, control, enhanced security, and confidentiality across any Kubernetes environment.

Discovery, visibility, control, and enhanced security for users, apps, and data – on ANY application or cloud services platform.

Later this year you will also be able to try NSX Service Mesh in a private beta. You can add yourself to the NSX Service Mesh beta list here.