Calling vSphere 7 a major release is an understatement! This latest version of vSphere has numerous added features, including native integration of Tanzu Kubernetes Grid (TKG) to drive adoption of Kubernetes through familiar tools. As a result, vSphere 7 provides the platform needed to convert a data center into a self-contained cloud capable of supporting multiple service offerings that go above and beyond infrastructure provisioning. Let’s find out how.
Why did we bring Kubernetes to vSphere?
We know that organizations are already adopting Kubernetes to manage distributed containerized workloads and running them on vSphere to advance their business goals. In looking at these Kubernetes deployments, we observed a few things:
- Both vSphere and Kubernetes act as a cluster that pools together infrastructure resources. The Kubernetes cluster provides resources to a pod/container from the pool, whereas the vSphere DRS cluster provides resources to a virtual machine (VM).
- Both vSphere and Kubernetes distribute workloads across their clusters. vSphere distributes VM workloads across ESXi hosts and Kubernetes distributes containers across its nodes.
- Kubernetes handles tenancy and resource management through namespaces, whereas vSphere handles them through resource pools.
In short, both Kubernetes and vSphere feature aspects of resource pooling, workload distribution and resource management.
One function Kubernetes performs beyond orchestration is maintaining desired state. For example, if you ask Kubernetes to deploy five copies of a container, it will ensure there are five pods running with those containers. And if one of those pods dies, it will create a replacement to match the desired state of five.
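To make that concrete, here is a minimal sketch of the desired-state model: a standard Kubernetes Deployment manifest that declares five replicas. The names and container image below are purely illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 5                # desired state: keep five pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17    # illustrative image
        ports:
        - containerPort: 80
```

If a pod dies, the Deployment controller notices the drift between observed and desired state and creates a replacement automatically.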
This is where Kubernetes differs from vSphere. vSphere handles the availability of VMs through vSphere HA and Fault Tolerance. However, when an app within the VM has issues not tied to ESXi failures, vSphere’s ability to heal it is limited. Likewise, if an app needs to be scaled automatically, vSphere doesn’t have a built-in mechanism for defining scaling criteria that automatically duplicates VMs.
Kubernetes also provides a single API to manage an application’s dependencies. If an app needs an ingress/load balancer, a simple text file describing the desired end state and a call to the Kubernetes API is all it takes. Kubernetes then talks to the underlying infrastructure, automates building out the load balancer and enables appropriate routing between the load balancer and pods. With vSphere alone, achieving the same outcome requires an app owner to talk to the vSphere API, the NSX API and a configuration automation system.
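For instance, exposing the pods from the earlier sketch through a load balancer takes one short manifest. On vSphere 7 with Kubernetes, the load balancer itself would be realized by NSX under the covers, but the manifest is plain upstream Kubernetes (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer    # ask the underlying infrastructure to provision a load balancer
  selector:
    app: web            # route traffic to the pods labeled app=web
  ports:
  - port: 80
    targetPort: 80
```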
We realized that by bringing these two technologies together, we could make vSphere even more powerful! By embedding Kubernetes into vSphere, we not only bring state management capabilities to vSphere, but also provide a single API that lays out services for workloads to consume while considerably reducing the architectural footprint of the overall stack.
At a high level, vSphere 7 with Kubernetes compresses the vSphere and Kubernetes clusters into a single entity. This unified cluster can provide infrastructure resources and orchestrate both VMs and containers, offering a single platform that caters to workloads irrespective of how the apps are packaged. vSphere 7 does this by extending ESXi hosts to also act as nodes of a Kubernetes cluster. As a result, a cluster in vSphere can serve both vSphere APIs and Kubernetes APIs.
The embedded Kubernetes cluster can also be used to build Kubernetes-style services. Once embedded in vSphere, the cluster knows how to work with vSphere resource pools, VMs, datastores and networks. A developer can subsequently call the embedded Kubernetes API to create services such as volumes, networks, registries and pod VMs.
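As a sketch, requesting the volume service looks like any other Kubernetes persistent volume claim. The storage class name below is a placeholder standing in for one mapped to a vSphere storage policy:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: vsan-default   # placeholder: a class backed by a vSphere storage policy
  resources:
    requests:
      storage: 10Gi
```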
This is a great way to integrate Kubernetes into vSphere. The embedded Kubernetes cluster has all the necessary components to handle various Kubernetes resource objects and APIs. It runs each pod enclosed in a compact VM on the hypervisor, known as a pod VM, which enables a robust security profile. Upstream open-source Kubernetes was not designed to use ESXi hosts as nodes; our engineering teams have done tremendous work to embed Kubernetes components in ESXi and enable Kubernetes to deploy containers in pod VMs. This provides a solid foundation for vSphere 7 and the services architecture.
However, some development teams need to work with upstream Kubernetes, to run containers without enclosing them in a pod VM or to deploy containers natively on a Linux-based host. To support teams working with vSphere 7 that have these requirements, we added a second integration method between Kubernetes and vSphere 7.
Tanzu Kubernetes Grid service
In the second integration method, we leverage the Kubernetes embedded within vSphere to build a service known as the Tanzu Kubernetes Grid (TKG) service, which helps deploy multiple Kubernetes clusters on top of vSphere. These clusters are all upstream-conformant; they run containers on Linux hosts and do not enclose containers in a pod VM.
The TKG service is based on a project within the Kubernetes community called Cluster API. The fundamental idea underpinning Cluster API is to use Kubernetes to deploy and maintain more Kubernetes clusters. Doing so applies the principles of Kubernetes state management to provisioning, upgrading and maintaining new clusters. TKG uses the embedded Kubernetes cluster (the management cluster, in Cluster API terminology) to instantiate a set of VMs (from an OVA VM template) that carry the latest upstream Kubernetes bits. These VMs then form the building blocks of a new, upstream-conformant Kubernetes cluster.
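Fittingly, the cluster request itself is a declarative manifest. The sketch below follows the v1alpha1 TanzuKubernetesCluster API; the namespace, VM class and storage class names are placeholders, and exact field names may vary by release:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster
  namespace: dev-team              # placeholder: a pre-authorized vSphere namespace
spec:
  distribution:
    version: v1.16                 # upstream Kubernetes version to deploy
  topology:
    controlPlane:
      count: 3                     # three control-plane VMs
      class: best-effort-small     # VM class sizing CPU/memory
      storageClass: vsan-default   # placeholder storage class
    workers:
      count: 3                     # three worker-node VMs
      class: best-effort-small
      storageClass: vsan-default
```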
Using quotas and limits, vSphere admins or operations teams can carve out resources and define how much capacity these clusters get by leveraging a new concept called namespaces. Namespaces can be allocated to teams or projects to bring tenancy and logical separation. Development teams can log in to vSphere or talk to the embedded Kubernetes API to provision new Kubernetes clusters in one or more pre-authorized namespaces. This gives development teams the power to build, run and manage Kubernetes clusters via self-service, without having to talk to IT or operations teams, while VI admins retain control of resource allocation via namespaces.
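In upstream Kubernetes terms, such a carve-out is roughly equivalent to a ResourceQuota applied to the namespace. In vSphere 7 an admin would typically set these limits through the vCenter UI rather than hand-writing the object; the figures below are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota
  namespace: dev-team
spec:
  hard:
    requests.cpu: "16"        # total CPU the namespace may request
    requests.memory: 64Gi     # total memory the namespace may request
    requests.storage: 500Gi   # total storage the namespace may claim
```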
TKG also helps manage Day 2 operations for maintaining these Kubernetes clusters, such as scaling out to increase the pool of nodes that comprise the cluster and executing rolling upgrades of Kubernetes. With the help of Cluster API, TKG can also heal a cluster if any of the nodes within the cluster are not functioning.
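Because the cluster is a declarative object, a Day 2 operation amounts to editing its spec and letting Cluster API reconcile. Continuing the illustrative manifest sketched earlier, scaling out the node pool and rolling to a newer Kubernetes version is a change to two fields (fragment shown):

```yaml
# fragment of the TanzuKubernetesCluster spec from the earlier sketch
spec:
  distribution:
    version: v1.17    # bumping the version triggers a rolling upgrade
  topology:
    workers:
      count: 5        # raising the count from 3 scales out the worker pool
```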
Run Kubernetes clusters anywhere
So far, we have looked at how vSphere 7 with its TKG service helps provision and manage the lifecycle of upstream-conformant Kubernetes clusters. However, organizations may want a consistent upstream-conformant Kubernetes runtime that works on their infrastructure today. They may need more time to upgrade their clusters to vSphere 7, or they may want to provision a consistent runtime across infrastructure/cloud providers in a multi-cloud scenario. To give these customers the same capabilities, TKG can be deployed and operated standalone on vSphere 6.7, AWS and more. Standalone TKG also relies on Cluster API to manage cluster lifecycles in local on-premises environments, in regional cloud environments or at the edge.
VMware TKG along with vSphere 7 provides tools that empower VI admins to build platforms able to cater to workloads irrespective of how the workload is built (as a VM or container). VI admins can manage a modern platform via the familiar vSphere UI, and developers can leverage a single API for self-service access to resources. vSphere 7 with Kubernetes, enabled by TKG, offers significant benefits to stakeholders across the organization.