Architecture

Infrastructure Self-Service with Project Pacific

With Project Pacific, we have integrated Kubernetes natively into vSphere. This new control plane allows you to manage VMs and containers side by side in the vCenter you know and love. As mentioned in our technical overview post, there are two types of Kubernetes clusters that now run natively in vSphere: a “Supervisor Kubernetes Cluster,” a control plane that runs directly over ESXi, and a Kubernetes cluster service that provisions conformant clusters on demand.

Project Pacific: Kubernetes native vSphere platform

Supervisor Kubernetes Cluster

This type of Kubernetes cluster uses ESXi hosts as its worker nodes instead of Linux nodes. We’ve made this possible in Project Pacific by creating our own version of the kubelet, the Spherelet, which runs directly on ESXi. While most Kubernetes constructs remain the same, we’ve taken some liberties to ensure a tighter and more performant integration with vSphere. The introduction of the Supervisor means there is now a native Kubernetes control plane for the SDDC, which enables Kubernetes as a service, VMs as a service, and an entire ecosystem of other applications as a service.
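
In practice, this means a completely standard Pod spec works unchanged on the Supervisor Cluster. Here is a minimal sketch (the namespace name and image are illustrative): applied with kubectl to a Supervisor namespace, the Pod gets scheduled onto an ESXi host by the Spherelet rather than by a Linux kubelet.

```yaml
# A plain Kubernetes Pod manifest. Applied to a Supervisor namespace, it is
# scheduled onto an ESXi worker by the Spherelet instead of a Linux kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  namespace: team-a        # a Supervisor namespace; the name is illustrative
spec:
  containers:
  - name: nginx
    image: nginx:1.17
    ports:
    - containerPort: 80
```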

Kubernetes Cluster

We also refer to this as a Guest Kubernetes Cluster, but really it is exactly what the name states: a Kubernetes cluster service that creates clusters on demand, conformant with upstream Kubernetes. It runs on top of the Supervisor layer, which also exposes the underlying storage and networking capabilities to Guest Clusters through CSI and CNI plugins.
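
Because storage is plumbed through a CSI plugin, a developer inside a Guest Cluster can claim it with an ordinary PersistentVolumeClaim; a minimal sketch, assuming an illustrative storage class name:

```yaml
# A standard PersistentVolumeClaim. Inside a Guest Cluster, the CSI plugin
# satisfies it by passing the request down to vSphere storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: vsphere-gold   # illustrative; backed by a vSphere storage policy
```

Let’s take a closer look at how a Kubernetes cluster is spun up on vSphere with Project Pacific using this service.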

Guest Cluster control plane in Supervisor Cluster

This is a pretty complex-looking diagram, so let’s break it down into its individual components. Internally, we’ve been calling this the “three-layered cake,” and you can see why. Let’s start at the top.

Guest Cluster Controller

When a developer requests a Kubernetes cluster through kubectl, the (1) Guest Cluster Controller spins up a Kubernetes cluster using the user-specified configuration for control plane and worker nodes. This is an easy-button API: it provides an opinionated Kubernetes cluster and does all the required orchestration with the layers beneath it for you. How does it do that?
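
This post doesn’t spell out the exact schema of that cluster CRD, so the sketch below uses a hypothetical API group, kind, and field names purely to show the shape of the request:

```yaml
# Hypothetical manifest: the API group, kind, and field names are
# illustrative, not the shipping schema. The point is that an entire
# cluster is requested declaratively, like any other Kubernetes object.
apiVersion: example.vmware.com/v1alpha1
kind: GuestKubernetesCluster
metadata:
  name: dev-cluster
  namespace: team-a              # the Supervisor namespace to create it in
spec:
  version: "1.16"
  controlPlane:
    replicas: 3
  workers:
    replicas: 5
```

The developer applies this with kubectl and watches the cluster come up with kubectl get, exactly as they would for a Deployment.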

ClusterAPI Controller

The Guest Cluster Controller produces ClusterAPI CRD objects in the Supervisor namespace in which you want the Kubernetes cluster created. You can read more about these namespaces and how they enable secure multi-tenancy in our previous blog post. These CRD objects are consumed by the (2) ClusterAPI Controller. ClusterAPI is an open-source project that is gaining widespread adoption in the Kubernetes community and declaratively manages Kubernetes cluster lifecycle operations. The ClusterAPI operator can create, delete, horizontally scale, and perform rolling upgrades of Kubernetes clusters.
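
For a sense of what those CRD objects look like, here is a trimmed-down sketch of upstream Cluster API resources; the API versions and fields are simplified from the upstream project and will differ in detail from what ships in Project Pacific:

```yaml
# Simplified upstream Cluster API objects (fields trimmed for illustration).
# The Guest Cluster Controller generates objects like these, and the
# ClusterAPI Controller reconciles them into an actual cluster.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: dev-cluster
  namespace: team-a
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: dev-cluster-workers
  namespace: team-a
spec:
  replicas: 5                    # scale workers by editing this field
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: dev-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: dev-cluster
    spec:
      version: "1.16.0"          # bumping this triggers a rolling upgrade
```

Scaling or upgrading the cluster is just a matter of editing these objects; the controller reconciles the difference.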

VM Operator

ClusterAPI then takes advantage of its vSphere provider to interface directly with the (3) VM Operator, which actuates the creation, delivery, and lifecycle of VMs in vSphere. The VM Operator becomes the developer-facing VM API, allowing developers to create VMs using kubectl and to automate the lifecycle of services and applications on top of VMs using the Kubernetes control plane.
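
As a sketch of what that developer-facing API looks like, here is a minimal VM manifest; the field names follow the VM Operator project, and the image and class values are illustrative:

```yaml
# A VM declared through the VM Operator API and managed with kubectl like
# any other Kubernetes object. Image and class values are illustrative.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-vm
  namespace: team-a
spec:
  imageName: ubuntu-18.04-server   # a VM image made available to the namespace
  className: best-effort-small     # a VM class defining CPU/memory sizing
  powerState: poweredOn
```

Running kubectl get virtualmachines -n team-a then lists the VM right alongside Pods and other Kubernetes resources.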

Conclusion

The advantage of this three-tiered model for the Kubernetes cluster service is that it serves a wide variety of use cases, based on the degrees of freedom you desire. If you need to quickly create a Kubernetes cluster with predefined configuration settings, all you need to do is request a (1) Guest Kubernetes Cluster and we take care of the rest for you. If you want to customize your Kubernetes cluster based on the needs of your application, or want to use a different distribution of Kubernetes than what Project Pacific provides out of the box, you can go a level deeper and talk to the (2) ClusterAPI Controller instead. If that still isn’t enough customization and you want to spin up a Kubernetes cluster from scratch, you can interface with the (3) VM Operator directly.

The best part is that, as an infrastructure manager, you still have full visibility and oversight of the Guest Cluster, ClusterAPI, and VM Operator objects in the vCenter UI. This allows developers to easily access self-service infrastructure while giving you greater visibility into what’s happening at every layer of the stack. Ultimately, this helps us achieve the goal of Project Pacific: to provide developers with on-demand compute, network, and storage, with first-class pods, VMs, and Kubernetes clusters, while at the same time giving IT the ability to manage, secure, upgrade, and audit their SDDCs.