Technical Architecture

vSphere 7 with Kubernetes – Shared Infrastructure Services

(By Michael West, Technical Product Manager, VMware)


The early adoption of Kubernetes generally involved a pattern of relatively few large clusters deployed on bare-metal infrastructure.  While the applications running on those clusters tended to be ephemeral, the clusters themselves were not.  What we are seeing today is a shift toward many smaller clusters, aligned with individual development teams, projects or even applications.  These clusters can be deployed manually, but more often they are the result of automation: either static scripts or CI/CD pipelines.  The defining characteristic is that not only are the applications running on the clusters short-lived, i.e. ephemeral, but the clusters themselves follow the same pattern.

Though Kubernetes clusters may be deployed as on-demand resources, they often need access to core infrastructure services like logging, metrics, image registries or even persistent databases.  These services tend to be long-lived and are ideally shared across many clusters.  They may also have resource, availability or security requirements that differ from those of the “workload clusters” that consume them.  In short, infrastructure services may be deployed and managed separately from the workload clusters, but they must be easily accessible without the need to modify the application services that rely on them.

Separating application and infrastructure services onto different clusters might seem obvious, but connecting workloads in one cluster to services running in another can be a little tricky in Kubernetes.  This blog and the accompanying demonstration video describe Kubernetes Services and how to set up cross-cluster connectivity so that a workload cluster’s applications can consume infrastructure services running on separate clusters.

What is a Kubernetes Service?

As most of you are probably aware, a Kubernetes Service provides a way to discover an application running on a set of pods and expose it as a network service.  Each service gets a single DNS name and provides routing to the underlying pods.  This solves the challenge of ephemeral pods with changing IPs, along with the potential DNS caching issues that come with them.  Services are created with a specification that includes a Selector.  The Selector holds a set of labels that define which pods make up the service.  The IPs of those pods are added to a Kubernetes object called Endpoints, which is updated as pods die or are recreated with new IPs.  When one service needs access to another, it does a lookup against the DNS server running within the cluster, then accesses the service via the returned ClusterIP.
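For illustration, here is a minimal sketch of what a db service backed by pods labeled app: db might look like.  The database namespace and MySQL port 3306 are assumptions for the example, not taken from a specific manifest:

apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: database
spec:
  selector:
    app: db            # Pods labeled app: db back this service
  ports:
    - port: 3306       # Port the service exposes
      targetPort: 3306 # Port the db pods listen on

Kubernetes creates and maintains an Endpoints object, also named db, that lists the IPs of all running pods matching the Selector.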

Consider the case where a web-app pod needs to access the db service running in a different namespace on the same Kubernetes cluster.  The pod looks up the service by its DNS name, “servicename.namespace.svc.cluster.local”.  The DNS server, usually something like CoreDNS running as a pod in the cluster, returns the ClusterIP.  The web-app pod then calls the db service via that ClusterIP.  The ClusterIP is a virtual IP defined in the cluster.  It has no physical interface, but routing from this virtual IP to the underlying pod IPs is plumbed into the cluster nodes.  That plumbing is specific to the networking that has been implemented for your cluster.  The key points here are that the Endpoints object is automatically updated based on the Selector defined in the Kubernetes Service, and the web-app pod doesn’t need to know anything about the pod IPs.
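A minimal sketch of the consuming side, assuming the db service lives in the database namespace above and the application reads its database host from an environment variable (both assumptions for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: frontend              # hypothetical namespace for the web tier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example/web-app:latest   # placeholder image
          env:
            # Fully qualified DNS name: servicename.namespace.svc.cluster.local
            - name: DB_HOST
              value: db.database.svc.cluster.local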


What if DB Service and Web-App Service are on different clusters?

If our organization wants to adopt a shared-service model where our database services reside on centralized clusters, then web-app would be deployed on a separate cluster.  In the case of vSphere 7 with Kubernetes, the shared database service could be deployed to the Supervisor Cluster, taking advantage of the vSphere Pod Service to run as a pod directly on the hypervisor.  This model provides the resource and security isolation of a VM, but with Kubernetes pod and service orchestration.  The web-app could be deployed onto a Tanzu Kubernetes (TK) cluster.  The TK cluster is deployed via the Tanzu Kubernetes Grid Service for vSphere and provides a fully conformant, upstream-aligned Kubernetes cluster for the components of the application that are not shared infrastructure.  Note that we could just as easily have used another TK cluster to run the database pods.  The point here is the separation of application components across clusters.


Once deployed onto the TK cluster, the web-app pod attempts to call the db service, but the DNS lookup fails.  This is because the DNS server is local to the cluster and has no entry for the db service running on the Supervisor Cluster.  Even if it did have an entry, the ClusterIP returned for the db service would not be routable from the TK cluster.  We have to solve both of those problems to make this work.


Exposing the db Service outside the cluster

The first thing we need to do is provide ingress to the db service from outside the cluster.  This is standard Kubernetes service capability.  We will change the service to be of type LoadBalancer.  This causes NSX to allocate a virtual server on the existing Supervisor Cluster load balancer, along with a routable ingress IP.  The IP comes from an ingress IP range that was defined at Supervisor Cluster creation.
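Continuing the earlier sketch, the only change to the manifest is the service type (the namespace and port remain illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: database
spec:
  type: LoadBalancer   # NSX allocates a virtual server and a routable ingress IP
  selector:
    app: db
  ports:
    - port: 3306
      targetPort: 3306

Once applied, kubectl get svc db shows the allocated ingress IP under EXTERNAL-IP.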


Creating a Selectorless Service on the Tanzu Kubernetes cluster

Once the db service is accessible from outside the cluster, we need a way for the web-app service to discover it from the TK cluster.  This can be done with a selectorless service.  Remember that the Endpoints object holds the IPs of the pods associated with a service and is populated via the Selector labels.  In our example above, all pods labeled app: db are part of the db service.  When we create a service without a Selector, no Endpoints object is maintained automatically by a Kubernetes controller, so we populate it directly.  We will create a selectorless Service and an Endpoints object, and populate the Endpoints with the load balancer VIP of the db service.
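Here is a minimal sketch of the two objects created on the TK cluster; the VIP 10.10.100.5 is a placeholder for whatever ingress IP NSX actually allocated to the db service:

apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  # No selector, so Kubernetes will not manage the Endpoints for this service
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: db              # Must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.10.100.5 # Placeholder: load balancer VIP of the db service
    ports:
      - port: 3306

Because the Endpoints object has the same name as the Service, traffic sent to the service’s ClusterIP is routed to the VIP listed here.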


Now our web-app can look up the db service locally.  DNS returns the ClusterIP of the local service, which resolves to the Endpoints entry we created: the load balancer VIP fronting the db service on the Supervisor Cluster.


Distributed microservices application with shared infrastructure services


Now let’s expand this concept to an application with several services deployed across clusters.  ACME Fitness Shop is a demo application composed of a set of services that simulate the function of an online store.  The individual services are written in different languages and are backed by various databases and caches.  You can learn more about this app at https://github.com/vmwarecloudadvocacy/acme_fitness_demo.  We will deploy the application with the databases centralized on the Supervisor Cluster, running as native pods directly on ESXi, while the rest of the application workloads are deployed to a TK cluster managed through the Tanzu Kubernetes Grid Service for vSphere.

ACME Fitness Shop services


The process is the same as in the previous example.  The database pods are deployed to the Supervisor Cluster, along with a LoadBalancer service for each of them.

Selectorless services are then created on the TK cluster, with their Endpoints populated with the virtual IPs of the corresponding LoadBalancer services for the databases running on the Supervisor Cluster.  The rest of the non-database application services are also deployed on this TK cluster.


Selectorless Services:
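As a sketch, two of these services might look like the following.  The names cart-redis and catalog-mongo and the standard Redis and MongoDB ports are illustrative assumptions, not exact manifests from the demo:

apiVersion: v1
kind: Service
metadata:
  name: cart-redis     # hypothetical name for the cart’s Redis backend
spec:
  ports:
    - port: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: catalog-mongo  # hypothetical name for the catalog’s MongoDB backend
spec:
  ports:
    - port: 27017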


Endpoints for the Services:
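And the matching Endpoints, with placeholder VIPs standing in for the actual load balancer IPs allocated on the Supervisor Cluster:

apiVersion: v1
kind: Endpoints
metadata:
  name: cart-redis
subsets:
  - addresses:
      - ip: 10.10.100.6   # placeholder VIP
    ports:
      - port: 6379
---
apiVersion: v1
kind: Endpoints
metadata:
  name: catalog-mongo
subsets:
  - addresses:
      - ip: 10.10.100.7   # placeholder VIP
    ports:
      - port: 27017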


Let’s see it in action!

This video walks through the simple shared infrastructure services example and then deploys the ACME Fitness application in the same way.  For more information on vSphere 7 with Kubernetes, check out our product page: https://www.vmware.com/products/vsphere.html