
vSphere 7 – Introduction to Tanzu Kubernetes Grid Clusters


As you can see from all the other blog articles, a LOT of new stuff is available in vSphere 7! What's also new is that I'm working on something other than vSphere Security (Bob Plankers has that now). My new focus is vSphere with Kubernetes and how it affects the traditional vSphere Administrator (VI Admin).

Long before my time in security I was a “system manager” (sysadmin) running VAXcluster systems that serviced a whole business unit and later, the OpenVMS operating system development team. Technology has changed quite a bit since then, but the challenges are still, basically, the same. This is why the needs and career of the vSphere Administrator are very important to me.

Enough about ancient history. Let’s take a look at “Tanzu Kubernetes Grid clusters” — a big component of vSphere with Kubernetes — from the standpoint of the vSphere Administrator.

What are “Tanzu Kubernetes Grid clusters”?

When we first announced “Project Pacific” in 2019 at VMworld these were referred to as “Guest Clusters.” They are now called Tanzu Kubernetes Grid clusters or “TKG clusters.” With vSphere 7 with Kubernetes this functionality is delivered as part of VMware Cloud Foundation 4.0.

Note: Kubernetes is often abbreviated to “K8S” — there are 8 letters between the ‘K’ and the ‘S’…

A Tanzu Kubernetes Grid (TKG) cluster is a Kubernetes (K8s) cluster that runs inside virtual machines on the Supervisor layer, not in vSphere Pods. It is enabled via the Tanzu Kubernetes Grid Service for vSphere. Because a TKG cluster is fully compliant with upstream open-source Kubernetes, it is guaranteed to work with all your K8s applications and tools. That alone is a big advantage.

TKG clusters in vSphere use the open source Cluster API project for lifecycle management, which in turn uses the VM Operator to manage the VMs that make up the cluster. Let’s look at an architecture layout and then dive into the components:

Here’s the glossary of what you see in this diagram.

  • SDDC: the Software-defined Data Center, also known as your vSphere infrastructure
  • vSphere Pod Service: The vSphere Pod Service is a special kind of Kubernetes cluster that uses ESXi as its worker nodes instead of Linux.
    • VMware Principal Engineer Joe Beda, one of the co-founders of Kubernetes, refers to Kubernetes as a “Platform Platform,” meaning Kubernetes is a platform for building new platforms. For example, Kubernetes can run other platforms, like Kubernetes! It’s not just about running containers.
  • Namespace: A namespace is the unit of management that gives vSphere admins the governance they need to assign resources, permissions, policies, and controls to Kubernetes workloads. There are two ways we use the term Namespace when we talk about vSphere with Kubernetes (a short kubectl sketch after this list illustrates the difference):
    • Kubernetes Namespace: These are used within Kubernetes for resource management and controls. In a TKG cluster the developer could use namespaces to control access to certain parts of their application.
    • vSphere Namespace: My colleague Michael West introduced vSphere Namespaces in a blog post at VMworld 2019. vSphere Namespaces align with vSphere constructs like Resource Pools and vSphere Encryption implementations.
  • Pods: In the example above these are vSphere Pods. A pod is the Kubernetes unit of deployment; it runs one or more containers that share resources.
  • Tanzu Kubernetes Cluster: This is a fully conformant Kubernetes cluster running on virtual machines. In the example above it has a Control Plane VM, three worker nodes, and a full K8s stack that can be used by a developer. Running on these worker VMs are pods, and inside the pods are containers.
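
To make the two uses of “namespace” concrete, here is a minimal kubectl sketch. It assumes the developer is already logged in; the namespace, cluster, and application names are illustrative, not anything that ships with vSphere.

    # Inside a TKG cluster: the developer partitions their application using
    # ordinary Kubernetes namespaces (names are illustrative).
    kubectl create namespace frontend
    kubectl create namespace backend

    # On the Supervisor Cluster: the vSphere Namespace is created by the
    # vSphere admin in the vSphere Client; the developer simply targets it,
    # for example to list the TKG clusters running in it.
    kubectl -n demo-namespace get tanzukubernetesclusters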

Why is this interesting to me, the vSphere Admin?

If you think about it, as a vSphere Admin your common link to developers is a help ticketing system. Developers open a ticket and request resources: virtual machines, network changes, more storage, and so on. What if you, the vSphere Admin, could just assign them a sandbox and give them a way to self-service all these requests? That’s what Kubernetes brings to the table for YOU! THAT is, IMHO, the paradigm shift for vSphere Administrators. Let me explain.

With Kubernetes the developer can declare what they want via a YAML file and send that to Kubernetes using the kubectl command. They can say “Give me a TKG cluster running 3 worker nodes, an ingress network, and encrypt all persistent data.” Assuming they have the right permissions in their namespace, those actions are all done for them by the infrastructure.
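
Here’s a minimal sketch of what that declaration could look like. The TanzuKubernetesCluster resource and its API group come from the Tanzu Kubernetes Grid Service, but treat the rest as illustrative: the names, VM class, and storage class must match what the vSphere admin has made available in the namespace, and the ingress and encryption pieces of the request would come from the namespace’s NSX and storage policy configuration rather than from this file.

    apiVersion: run.tanzu.vmware.com/v1alpha1
    kind: TanzuKubernetesCluster
    metadata:
      name: dev-cluster            # illustrative cluster name
      namespace: demo-namespace    # the vSphere Namespace assigned by the admin
    spec:
      distribution:
        version: v1.16             # Kubernetes release to deploy (illustrative)
      topology:
        controlPlane:
          count: 3                 # control plane VMs
          class: best-effort-small             # VM class (illustrative)
          storageClass: high-performance-ssd   # maps to a vSphere storage policy
        workers:
          count: 3                 # worker VMs
          class: best-effort-small
          storageClass: high-performance-ssd

The developer hands this file to the Supervisor Cluster with kubectl apply -f dev-cluster.yaml, and the Tanzu Kubernetes Grid Service, Cluster API, and VM Operator take care of creating and configuring the VMs.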

Note: At no time did they need to learn the vSphere or NSX APIs! They just declared what they wanted and got it, just like with K8s instances in the public cloud.

If others are running workloads on the same infrastructure, there’s no need to worry. Workloads are isolated via Namespaces, NSX networking and, of course, vSphere virtual machine isolation. Traditionally we’d call this multitenancy! I’ll dive more into Namespaces in another blog. You’re gonna love them, I promise.

Why should TKG clusters be important to me?

vSphere with Kubernetes gives the vSphere admin the opportunity to maintain their status as the provider of stable, secure, proven infrastructure for these new workloads. In your meetings with developers you can show them that you can provide a fully conformant Kubernetes cluster running on a platform that already meets the company’s business needs and requirements, including the little things like backups, compliance mandates, security scans, disaster recovery, and customer support that some “forget” about when rushing to implement the new hotness. There’s now no reason to have folks go off and build Shadow IT copies of K8s. Shadow IT is a huge concern of every customer I talk to.

What kind of visibility do I get?

This is a great question. For the longest time I’ve heard customers talk about the inability to map resources to people. They’ll use tags or creative VM names, but it really doesn’t give them that warm and fuzzy feeling. If your developer spins up a TKG cluster, you’ll see it in the vSphere Client. Each TKG cluster includes an agent that provides certain centralized management features. The guest agent makes the contents of the cluster visible in the vCenter UI, and it allows the vCenter administrator to set policies that govern clusters by installing admission controllers, policy objects, and scanners in the guest cluster.

This is how a namespace looks in the vSphere Client:

This image shows the Demo Namespace, assigned to user Fred. In that namespace Fred has created a TKG cluster called “dev-cluster.” There are three cluster worker node virtual machines and three cluster control plane virtual machines. Running on these worker nodes are three instances of nginx.

In addition, you’ll see the storage policies assigned. These map directly to vSphere Storage Policy Based Management (SPBM) values, e.g. “high-performance-ssd.”
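
Inside the TKG cluster, those storage policies surface as Kubernetes StorageClasses, so a developer can consume them with a standard PersistentVolumeClaim. A minimal sketch; the claim name and size are illustrative:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data                            # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: high-performance-ssd    # the storage policy shown above
      resources:
        requests:
          storage: 10Gi                         # illustrative size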

Let’s look at the cluster from a compute standpoint, focusing on the virtual machines:

You can see in the image the six VMs that make up the “dev-cluster” TKG cluster, what VM image they are running, their creation time, and the VM class they came from.

The next image shows the same cluster but from the standpoint of Tanzu Kubernetes:

Here you can see the name, “dev-cluster,” the number of worker nodes, the Kubernetes version of the cluster and the IP address for the Control Plane. This is the IP address used by the developer to connect via the kubectl command.
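
If you’re curious what that connection looks like from the developer’s side, it’s roughly the following, assuming the vSphere plugin for kubectl that ships with vSphere with Kubernetes; the server address, user, and names are illustrative:

    # Log in through the Supervisor Cluster to the TKG cluster's control plane.
    kubectl vsphere login --server=192.168.123.2 \
      --vsphere-username fred@vsphere.local \
      --tanzu-kubernetes-cluster-namespace demo-namespace \
      --tanzu-kubernetes-cluster-name dev-cluster

    # The login creates a kubectl context named after the cluster; from there
    # it's standard kubectl.
    kubectl config use-context dev-cluster
    kubectl get nodes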

The next image shows the networking for the TKG Cluster:

In the networking pane you can see all the networking involved in the “dev-cluster” TKG Cluster. You have the name of the Control Plane AND the workloads running on the cluster. You also have the IP addresses and whether or not they belong to a Load Balancer. You can also see the ports that are open, and if this cluster were using an external IP address, an Ingress, or an Endpoint controller, you’d see those as well.

Of course, because all this information is in the vSphere Client, it’s also available via the vSphere APIs.

Wrap Up

There you have it: vSphere with Kubernetes running TKG Clusters, with full visibility for the vSphere Administrator in the tool of their choice. Since this blog is just an introduction, I’ll be sharing more with you throughout the coming months, starting with introduction-level posts and diving deeper as we get closer to the VMworld timeframe.

I hope you find this information helpful. If you have questions on vSphere with Kubernetes for the vSphere Administrator, or topics that you’d like me to cover, please send me a DM on Twitter: @mikefoley

Thanks for reading!

– mike

 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursday into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and please stay safe.