
Project Pacific – Technical Overview

Introduction

Today we’re introducing Project Pacific as a Technology Preview, and we think it’s going to change the way you think about the cloud.

Project Pacific is a re-architecture of vSphere with Kubernetes as its control plane. To a developer, Project Pacific looks like a Kubernetes cluster where they can use Kubernetes declarative syntax to manage cloud resources like virtual machines, disks and networks. To the IT admin, Project Pacific looks like vSphere – but with the new ability to manage a whole application instead of always dealing with the individual VMs that make it up.

Project Pacific will enable enterprises to accelerate development and operation of modern apps on VMware vSphere while continuing to take advantage of existing investments in technology, tools and skillsets. By leveraging Kubernetes as the control plane of vSphere, Project Pacific will enable developers and IT operators to build and manage apps composed of containers and/or virtual machines. This approach will allow enterprises to leverage a single platform to operate existing and modern apps side by side.

The introduction of Project Pacific anchors the announcement of VMware Tanzu, a portfolio of products and services that transform how the enterprise builds software on Kubernetes.

The modern application challenge

When enterprises build modern applications, those applications consist of a myriad of technologies. Sure, they use Kubernetes and containers, but they also typically need to work with existing, non-containerized applications and stateful workloads like databases. They’re often deployed as distributed systems and maintained by multiple independent software development teams.

Modern enterprise workloads

Modern applications pose problems for developers. How do you deploy and operate an application like this? You can’t just use Kubernetes, because big parts of the app aren’t even built on Kubernetes. Once you’ve deployed an application like this, how do you maintain it and update it? What kinds of tools can you use to change, monitor, diagnose and debug the deployment?

Modern applications also present problems for infrastructure teams. Some organizations have decided to build up a new, container/cloud-oriented stack alongside their existing vSphere infrastructure. But going down this path is fraught with peril. If your apps look like the one above, they’re going to span both your vSphere and container-based platforms. At best, applying governance and policy becomes twice as difficult as doing it on a single platform. Everything is split across infrastructure silos, and that limits what’s possible and how quickly you can achieve it.

Kubernetes as a platform platform

The key insight we had at VMware was that Kubernetes could be much more than just a container platform; it could be the platform for ALL workloads. When Joe Beda, co-creator of Kubernetes, talks about Kubernetes, he describes it as a platform platform: a platform for building new platforms. Yes, Kubernetes is a container orchestration platform, but at its core, Kubernetes is capable of orchestrating anything!

What if we used this “platform platform” aspect of Kubernetes to reinvent vSphere? What if, when developers wanted to create a virtual machine, a container, or a Kubernetes cluster, they could just write a Kubernetes YAML file and deploy it with kubectl, like they do with any other Kubernetes object?

Using Kubernetes as the vSphere API

This is a pretty powerful concept. It brings the great Kubernetes developer experience to the rest of our datacenter. It means developers can get the benefits of Kubernetes not just for their cloud native applications, but for ALL of their applications. It makes it easy for them to deploy and manage modern applications that span multiple technology stacks.

App-focused management

vSphere lets us do some amazing things to VMs. We can reserve and limit CPU and memory resources for a VM. We can vMotion a VM. We can snapshot a VM. We can encrypt a VM. We can set storage policies for a VM. But our modern app isn’t one VM; it’s perhaps dozens of VMs. All of the things that were so easy to do to individual VMs are much more difficult with modern applications.

Luckily, Kubernetes brings another feature to bear that can solve this: the Namespace. A Namespace in Kubernetes is a collection of resource objects (containers, VMs, disks, etc.). What if we used Kubernetes Namespaces to model modern applications and all of their parts, and then made all of those same operations we can do on VMs work on Namespaces too? What if you could apply things like resource controls, vMotion, encryption, HA, and snapshots to a whole Namespace of objects at a time rather than having to deal with the individual VMs?
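To make that concrete, here’s a minimal sketch using two standard Kubernetes objects, a Namespace and a ResourceQuota; the names and numbers are purely illustrative:

```yaml
# A Namespace representing one logical application.
apiVersion: v1
kind: Namespace
metadata:
  name: online-store          # illustrative application name
---
# A quota applied once to the Namespace; every workload created
# inside it draws from this shared allocation.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: online-store-quota
  namespace: online-store
spec:
  hard:
    requests.cpu: "32"        # total CPU the app may request
    requests.memory: 128Gi    # total memory the app may request
    requests.storage: 2Ti     # total persistent storage the app may claim
```

Applied once with kubectl, the quota then governs everything that gets created inside that Namespace, rather than being configured VM by VM.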

Namespaces as the unit of management

This has two really transformative effects on our VI admins.

First, we think this provides a huge productivity improvement to VI admins. In the past, you might have had thousands of VMs in your vCenter inventory that you needed to deal with. But once you group those VMs into their logical applications, you may only have to deal with dozens of Namespaces. In the past if you wanted to encrypt an application, you’d have to first find all of the VMs that were part of the app and then turn on encryption on each and every one. Now you can just click a button on the Namespace in vCenter and it does it all for you. You get a huge productivity improvement because you can deal with groups of stuff instead of individual VMs.

Second, we think Namespaces give us a better model for developer self-service. One of the reasons so many IT organizations rely heavily on ticketing systems is that it’s the only way to provide governance over developer applications. If your developers are deploying a regulated application, you need to be there when the VMs are getting created to make sure they’re set up the right way. But with Namespaces, you can set a policy on the Namespace once, and then let developers self-service resources into that Namespace all day long. Every object in the Namespace will inherit the policies you set. Developers get fast, self-service access to infrastructure while IT can still enforce corporate policies.
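As a rough sketch of the “set the policy once, then get out of the way” idea using plain Kubernetes RBAC (the group name is hypothetical):

```yaml
# Grant the application team edit rights inside their Namespace only.
# Anything they create there inherits the quotas and policies the
# admin has already attached to the Namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: online-store-developers
  namespace: online-store
subjects:
- kind: Group
  name: store-dev-team              # hypothetical identity-provider group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                        # built-in role: create and update objects, no RBAC changes
  apiGroup: rbac.authorization.k8s.io
```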

A Kubernetes-native vSphere platform

Project Pacific transforms vSphere into a Kubernetes-native platform. We integrated a Kubernetes control plane directly into ESXi and vCenter, making Kubernetes the control plane for ESXi and exposing capabilities like app-focused management through vCenter.

Kubernetes-native vSphere platform

Supervisor Clusters

The Supervisor is a special kind of Kubernetes cluster that uses ESXi hosts as its worker nodes instead of Linux. This is achieved by integrating a kubelet (our implementation is called the Spherelet) directly into ESXi. The Spherelet doesn’t run in a VM; it runs directly on ESXi.

The Supervisor cluster is a Kubernetes cluster of ESXi instead of Linux

ESXi Native Pods

Workloads deployed on the Supervisor, including Pods, each run in their own isolated VM on the hypervisor. To accomplish this we have added a new container runtime to ESXi called the CRX. The CRX is like a virtual machine that includes a Linux kernel and a minimal container runtime inside the guest. But because this Linux kernel is tightly coupled with the hypervisor, we’re able to make a number of optimizations that effectively paravirtualize the container.

Despite the perception of virtualization as being slow, ESXi can launch Native Pods in hundreds of milliseconds and supports over 1,000 Pods on a single ESXi host (the same limits as for VMs on ESXi). Are Pods in a VM slow? Well, in our internal testing we’ve been able to demonstrate that ESXi Native Pods achieve 8% higher throughput on a standard Java benchmark than Pods on bare-metal Linux!
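From the developer’s point of view, nothing changes: a Pod destined for the Supervisor is written as an ordinary Kubernetes Pod spec (the namespace and image below are just placeholders); on ESXi it’s launched as a CRX virtual machine instead of as a Linux process.

```yaml
# An ordinary Pod spec; on the Supervisor, each Pod like this is
# launched in its own lightweight CRX virtual machine.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
  namespace: online-store         # a Supervisor Namespace
spec:
  containers:
  - name: nginx
    image: nginx:1.17             # illustrative image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
```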

Virtual Machines

The Supervisor includes a Virtual Machine operator that allows Kubernetes users to manage VMs on the Supervisor. You can write deployment specifications in YAML that mix container and VM workloads in a single deployment, sharing the same compute, network, and storage resources.

The VM operator is just an integration with vSphere’s existing virtual machine lifecycle service, which means that you can use all of the features of vSphere with Kubernetes-managed VM instances. Features like RLS (reservation, limit, and share) settings, Storage Policy, and Compute Policy are supported.

In addition to VM management, the operator provides APIs for Machine Class and Machine Image management. To the VI admin, Machine Images are just Content Libraries.
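As a rough illustration of what a declaratively managed VM could look like, here’s a sketch; the API group, kind, and field names are assumptions made for the sake of the example, not the definitive Project Pacific API:

```yaml
# Hypothetical VirtualMachine object managed by the VM operator.
# The class maps to a Machine Class (CPU/memory shape) and the image
# to a Machine Image published through a vSphere Content Library.
apiVersion: vmoperator.vmware.com/v1alpha1   # assumed API group/version
kind: VirtualMachine
metadata:
  name: orders-db
  namespace: online-store
spec:
  className: small                  # assumed Machine Class name
  imageName: centos-7-db-template   # assumed Machine Image from a Content Library
  powerState: poweredOn
  storageClass: gold-policy         # assumed mapping to a vSphere Storage Policy
```

Because this sits in the same Namespace as the Pods above, it shares the Namespace’s quotas and policies with the containerized parts of the app.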

Guest Clusters

While the Supervisor uses Kubernetes, it’s not a conformant Kubernetes cluster. This is by design: the intent is to use Kubernetes to improve vSphere, rather than to turn vSphere into a Kubernetes clone.

For general-purpose Kubernetes workloads, you can use Guest Clusters. A Guest Cluster is a Kubernetes cluster that runs inside virtual machines on the Supervisor Cluster. A Guest Cluster is fully conformant upstream Kubernetes, so it’s guaranteed to work with all of your Kubernetes applications.

Guest Clusters in vSphere use the open-source Cluster API project to manage the lifecycle of Kubernetes clusters, which in turn uses the VM operator to manage the VMs that make up a Guest Cluster.
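Conceptually, asking for a Guest Cluster is again just a matter of declaring desired state. Here’s a heavily simplified, hypothetical sketch; the kind and field names are illustrative rather than the actual schema:

```yaml
# Hypothetical declaration of a Guest Cluster: the Supervisor's
# Cluster API machinery turns this desired state into control-plane
# and worker VMs via the VM operator.
apiVersion: run.tanzu.vmware.com/v1alpha1   # assumed API group for Guest Clusters
kind: GuestCluster                          # illustrative kind name
metadata:
  name: team-a-cluster
  namespace: online-store
spec:
  kubernetesVersion: v1.15.4    # illustrative upstream Kubernetes version
  controlPlane:
    count: 3
    class: small                # Machine Class for control-plane VMs
  workers:
    count: 5
    class: medium               # Machine Class for worker VMs
```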

Guest Cluster control plane in the Supervisor Cluster

Container Registry

In order to run containers, you need somewhere to store their container images. So we integrated the Harbor image registry into vSphere. You can turn on the Container Registry from vCenter, and each Namespace in the Supervisor gets its own project in that shared registry.
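For example, a Pod in a Supervisor Namespace can reference an image from that Namespace’s project; the registry address below is a placeholder for whatever address the embedded registry has in your environment:

```yaml
# Pull an image from the Namespace's project in the embedded Harbor
# registry; <registry-address> is a placeholder, and the project name
# matches the Supervisor Namespace.
apiVersion: v1
kind: Pod
metadata:
  name: inventory-service
  namespace: online-store
spec:
  containers:
  - name: app
    image: <registry-address>/online-store/inventory:1.0
```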

Summary

So what does it all look like? Check out this short video that walks you through some of the highlights.

We’re really excited about Project Pacific. It brings together the best parts of Kubernetes and vSphere into a unified application platform that can transform all of your apps, old and new. But we’re just scratching the surface of what it can do here. In the coming days, we plan to publish follow-on articles about various features and capabilities of the platform, along with details about how they work.

I’m really excited about our broader vision for helping customers build software on Kubernetes. You should read more about VMware Tanzu.