By Michael West, Technical Product Manager, VMware

As digital transformation moves from a talking point to an IT mandate, infrastructure teams are being asked to step up to expanded roles. An understanding of containers and container orchestration technologies, combined with deep expertise in storage, networking and compute virtualization, is now the minimum requirement for platform engineering teams, which must provide containers as a service to developers on demand.

More specifically, developers are demanding a reliable, secure and highly available platform, consumable on demand at whatever scale they require. They are demanding Kubernetes-as-a-Service. To stay relevant, platform operations teams must deliver Kubernetes with a level of service, and as little friction, as the public cloud. One significant barrier is the steep learning curve: platform engineers without a deep understanding of Kubernetes system components must still be able to deploy and manage the platform.

On February 12, 2018, Pivotal and VMware announced the general availability of VMware Pivotal Container Service (PKS). PKS leverages a combination of open source and closed source technologies to provide a secure platform that encompasses both day one and day two operations on Kubernetes clusters, without requiring Kubernetes expertise. What does that mean? Let’s start by looking at what Kubernetes does well. It allows developers to easily deploy applications at scale. It handles the scheduling of workloads (via pods) across a set of infrastructure nodes. It provides an easy-to-use mechanism to increase availability and scale by allowing multiple replicas of application pods, while monitoring those replicas to ensure that the desired state (the number of replicas requested) and the actual state of the application coincide. Kubernetes also reduces application downtime through rolling upgrades of application pods. PKS provides similar capabilities for the Kubernetes clusters themselves.
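To make the desired-state idea concrete, here is a minimal sketch of the kind of Deployment manifest a developer hands to Kubernetes; the names and image are illustrative, not from any specific PKS environment:

```yaml
# Illustrative Deployment: Kubernetes keeps the actual number of running
# pods converged on the desired replica count, and RollingUpdate replaces
# pods gradually during upgrades to avoid downtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3              # desired state: three identical pods
  strategy:
    type: RollingUpdate    # upgrade pods a few at a time
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

If a pod in this Deployment dies, the controller notices the mismatch between desired and actual state and schedules a replacement; PKS applies the same reconciliation logic one level down, to the cluster's own nodes and services.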

With PKS, platform engineering teams can deliver Kubernetes clusters through a single API call or CLI command. Health monitoring is built into the platform, so if a service fails or a VM crashes, PKS detects the outage and rebuilds the cluster. As resources become constrained, clusters can be scaled out to relieve the pressure. Upgrading Kubernetes itself is normally far harder than upgrading the application pods running on it; PKS provides rolling upgrades of the Kubernetes cluster itself.

The platform integrates with the VMware vSphere ecosystem, so platform engineers can use the tools they are familiar with to manage these new environments. Lastly, PKS includes licensed and supported Kubernetes; Harbor, an enterprise container registry; and VMware NSX-T. PKS is available on vSphere and public cloud platforms.

Let’s net this out. PKS gives you the latest version of Kubernetes; VMware and Pivotal have committed to constant compatibility with Google Kubernetes Engine (GKE), so you can always be up to date. There is an easy-to-consume interface for deploying Kubernetes clusters, scale-out capability, health monitoring with automated remediation, rolling upgrades and an enterprise container registry with Notary image signing and Clair vulnerability scanning. All of this is deployed while leveraging NSX-T logical networking from the VMs down to the Kubernetes pods.

The following product demos show you how these components work in PKS:

PKS Overview

Creating and resizing Kubernetes clusters can be done with simple CLI commands. Health monitoring of cluster components is done without intervention from the user. For a concise overview of PKS and a demonstration of how to create and resize a Kubernetes cluster, as well as of automatic failure remediation, check out this demonstration video.
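The cluster lifecycle shown in the video boils down to a handful of CLI calls. The sketch below assumes a PKS API endpoint, credentials and plan names defined by your platform team; treat the hostnames and values as placeholders:

```shell
# Log in to the PKS control plane (endpoint and credentials are placeholders)
pks login -a api.pks.example.com -u operator -p 'secret' --ca-cert ca.crt

# Create a cluster from a predefined plan with a single command
pks create-cluster my-cluster \
    --external-hostname my-cluster.example.com \
    --plan small \
    --num-nodes 3

# Check provisioning status and retrieve cluster details
pks cluster my-cluster

# Scale out when resources become constrained
pks resize my-cluster --num-nodes 5
```

Everything below these commands (VM provisioning, Kubernetes installation, health monitoring) is handled by the platform, which is the point: no Kubernetes internals expertise is required to operate the cluster.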



Persistent Volumes in PKS

Containers are ephemeral by nature. In the context of Kubernetes, this means that data stored on a pod’s disks is lost when the pod fails or is restarted. Persistent volumes make stateful applications possible: application pods must be explicitly defined with persistent volumes mounted on underlying persistent storage infrastructure. The interfaces to the underlying storage are platform specific and generally require manual configuration. PKS-deployed Kubernetes clusters come preconfigured with the vSphere Storage Provider, meaning that application developers can handle the definition and creation of underlying persistent volumes as part of their Kubernetes pod specifications without engaging the storage team. This video shows you how to create storage classes and persistent volume claims, mount the volume on a pod and enable persistence in an otherwise stateless application.
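As a rough sketch of that workflow, the manifests below define a storage class backed by the vSphere provisioner, claim a volume from it and mount the volume into a pod. The class name, claim name and image are hypothetical:

```yaml
# StorageClass backed by the vSphere volume provisioner that PKS-deployed
# clusters are configured with (thin-provisioned VMDKs in this sketch).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
---
# Claim 2Gi from that class; the underlying VMDK is created on demand,
# with no ticket to the storage team.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  storageClassName: thin-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
# Mount the claimed volume into a pod; data under /var/data now survives
# pod restarts and rescheduling.
kind: Pod
apiVersion: v1
metadata:
  name: data-pod
spec:
  containers:
  - name: app
    image: example/app:1.0   # hypothetical image
    volumeMounts:
    - mountPath: /var/data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```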



Container Registry (Harbor)

Container registries are more than just a place to store images. An organized set of image repositories is core functionality, but an enterprise-grade registry must do more. Images should be secured via role-based access control, their provenance should be verified via digital signature and package vulnerabilities should be identified. PKS is integrated with Harbor, an open source enterprise-grade container registry. Check out this demonstration video to see an overview of Harbor and how to enable content trust and vulnerability scanning to ensure that only signed images free of high-severity vulnerabilities are deployed to your Kubernetes clusters.
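From the developer's side, signing an image is largely transparent once Docker Content Trust is switched on. The commands below are a hedged sketch; the registry hostname, Notary endpoint and project are placeholders for values from your own Harbor deployment:

```shell
# Enable Docker Content Trust so pushes are signed via Harbor's Notary service
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://harbor.example.com:4443

# Tag and push the image into a Harbor project; the push is signed,
# and Harbor's Clair integration scans it for known vulnerabilities
docker tag example/web-frontend:1.0 harbor.example.com/library/web-frontend:1.0
docker push harbor.example.com/library/web-frontend:1.0
```

With content trust and vulnerability policies enforced on the Harbor project, unsigned images, or images with vulnerabilities above the configured severity, can be blocked from being pulled into your clusters.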



Container Networking with NSX-T

PKS includes software-defined networking with NSX-T. NSX-T supports logical networking from the Kubernetes cluster VMs to the pods themselves, providing a single network management and control plane for your container-based applications. Integration through the Kubernetes CNI plugin framework enables the automatic creation of network components in NSX-T when Kubernetes specifications are deployed. In this demonstration video, you will see an overview of NSX-T integration with Kubernetes. You will find out how to create a namespace in Kubernetes and verify that logical routers and switches are created on your behalf. Check out network policy integration with the NSX-T distributed firewall and see how the Traceflow utility can trace packet flow from logical interfaces on VMs or pods, through the entire infrastructure stack to destination VMs or pods.
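The namespace and network policy flow described above can be sketched in standard Kubernetes terms; with the NSX-T integration, objects like these are realized as logical routers, switches and distributed firewall rules on your behalf. The namespace, labels and port are hypothetical:

```yaml
# Creating a namespace (e.g. `kubectl create namespace demo`) triggers the
# NSX-T CNI integration to provision logical switches and routers for it.
# This NetworkPolicy then maps to NSX-T distributed firewall rules:
# only frontend pods may reach backend pods, and only on TCP 8080.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```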



These videos were created using scenarios that are available for you to try directly in the VMware Hands-on Labs: select lab HOL-1832-01-CNA.

Stay tuned to the Cloud-Native Apps blog for more insights into Kubernetes, and be sure to follow us on Twitter (@cloudnativeapps).