This post is part of a series that examines some of the fundamentals of creating, utilizing, and managing Tanzu Kubernetes clusters with the Tanzu Kubernetes Grid (TKG) Service for vSphere. If you need a primer to understand the basic concepts, make sure you read vSphere 7 — Introduction to Tanzu Kubernetes Grid Clusters. And if you haven’t already, this would also be a good time to read An Elevated View of the Tanzu Kubernetes Grid Service Architecture, the first post in the series.
In this post, we will focus on deploying a TKG Service cluster using a simple, customized specification. The YAML spec defines an object of kind TanzuKubernetesCluster and is adapted from what’s available in the public VMware documentation.
Let’s start with the full spec and then step through it section by section:
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-demo-cluster-01
  namespace: demo-tkg-ns
spec:
  distribution:
    version: v1.17.4+vmware.1-tkg.1.057f6be  # The full image name is specified
  topology:
    controlPlane:
      count: 1                               # 1 control plane node
      class: guaranteed-xsmall               # Extra-small VM size
      storageClass: demo-tkg-storagepolicy   # Specific storage class for control plane
    workers:
      count: 3                               # 3 worker nodes
      class: best-effort-xsmall              # Extra-small VM size
      storageClass: demo-tkg-storagepolicy   # Specific storage class for workers
  settings:
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["198.51.100.0/12"]      # Cannot overlap with Supervisor Cluster
      pods:
        cidrBlocks: ["192.0.2.0/16"]         # Cannot overlap with Supervisor Cluster
    storage:
      classes: ["demo-tkg-storagepolicy"]    # Named PVC storage classes
      defaultClass: demo-tkg-storagepolicy   # Default PVC storage class
```
In the metadata section, the name is what the Tanzu Kubernetes cluster will be called once it’s deployed, and is how it will appear in the vCenter UI. The namespace field is also required and specifies the vSphere namespace where the TKG cluster will be provisioned.
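Before applying anything, it’s worth confirming that kubectl is pointed at the intended vSphere namespace on the Supervisor Cluster. A minimal check, assuming the namespace name from the spec above:

```sh
# List the contexts created by the Supervisor Cluster login;
# each vSphere namespace appears as its own context.
kubectl config get-contexts

# Switch to the vSphere namespace the cluster will be provisioned into
# (demo-tkg-ns matches metadata.namespace in the spec above).
kubectl config use-context demo-tkg-ns
```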
Within the spec, the version is a direct mapping of what’s available in the subscribed content library. Each version corresponds to a VM template that ships a specific Kubernetes release.
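The distributions the Supervisor Cluster can deploy come from the virtual machine images synced out of that content library, and they can be listed with kubectl. A quick way to check, run against the Supervisor Cluster:

```sh
# List the VM images synced from the subscribed content library; the image names
# include the distribution versions that can be used in spec.distribution.version.
kubectl get virtualmachineimages
```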
The topology section is where customization comes into play. A Kubernetes topology consists of at least one control plane node that runs the core Kubernetes services (such as the API server and controller manager), plus one or more worker nodes that are responsible for running the workloads. Each section has a count, analogous to a replica count in Kubernetes, denoting how many nodes of that kind to create. For a TKG cluster, these nodes are realized as virtual machines (VMs).
The class for each node type encodes two things. The first is guaranteed vs. best effort: if a VM class carries the guaranteed label, vSphere reservations for CPU and memory are set on the VM, while best-effort classes have no reservations. The second is a pre-defined size that determines the amount of vCPU and RAM, ranging from extra small to extra large. The available VM classes can be retrieved with kubectl as well. For this example, to keep resource usage light, the extra-small variant is used.
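To see which classes exist in your environment and how much vCPU and RAM each one provides, they can be queried from the Supervisor Cluster; for example:

```sh
# List the pre-defined VM classes available to Tanzu Kubernetes clusters.
kubectl get virtualmachineclasses

# Inspect one class to see its CPU, memory, and reservation settings.
kubectl describe virtualmachineclass best-effort-xsmall
```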
Next, a storage class is defined for the VMs, which determines datastore placement. The storage class maps to a vSphere storage policy that has been assigned to the vSphere namespace.
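The storage classes visible to the vSphere namespace can be checked from the Supervisor Cluster as well; a quick look, assuming the namespace used earlier and that your account is allowed to list them:

```sh
# Storage policies assigned to the vSphere namespace surface as storage classes.
kubectl get storageclasses

# The namespace description also shows the storage-policy resource quotas.
kubectl describe namespace demo-tkg-ns
```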
The sections that cover advanced settings are completely optional. The network settings define the CNI to use; for this deployment, that is Calico. The services and pods CIDRs are taken from the documentation and have not been adjusted. As noted in the spec above, these IP ranges cannot overlap with the Supervisor Cluster.
The storage section defines the storage policies that will be surfaced as storage classes inside the Tanzu Kubernetes cluster. Multiple storage policies can be listed in the comma-separated classes array, and defaultClass specifies the storage class used by default for any persistent volume claim created inside the Tanzu Kubernetes cluster.
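Once the cluster is provisioned and your context points at it (covered below), the outcome is easy to verify from inside the Tanzu Kubernetes cluster itself:

```sh
# Run with the context set to the Tanzu Kubernetes cluster: the named classes show
# up as StorageClass objects, and the default one is marked "(default)".
kubectl get storageclass
```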
Now the TanzuKubernetesCluster object can be applied to the Supervisor Cluster using kubectl. Once the cluster has been deployed, it will be available as part of the inventory using `kubectl get tanzukubernetesclusters`.
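Putting it together, a minimal sequence looks like the following; the file name is just an example for the spec shown above:

```sh
# Apply the spec against the Supervisor Cluster while the context is set to the
# vSphere namespace.
kubectl apply -f tkg-demo-cluster-01.yaml

# Watch provisioning progress; PHASE reports "running" once the cluster is ready.
kubectl get tanzukubernetesclusters
kubectl describe tanzukubernetescluster tkg-demo-cluster-01
```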
The cluster can be accessed directly by authenticating with the kubectl vsphere plugin, passing the namespace and cluster flags. This step adds a token and a context for the cluster to the KUBECONFIG file. After switching to that context, the Tanzu Kubernetes cluster functions just like a normal Kubernetes cluster.
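As a sketch, with the server address and username as placeholders for your environment, the login and context switch look like this:

```sh
# Authenticate through the Supervisor Cluster and request a context for the
# Tanzu Kubernetes cluster (the context is named after the cluster).
kubectl vsphere login --server=<supervisor-cluster-address> \
  --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace demo-tkg-ns \
  --tanzu-kubernetes-cluster-name tkg-demo-cluster-01

# Switch to the new context and confirm the nodes are up.
kubectl config use-context tkg-demo-cluster-01
kubectl get nodes
```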
For a complete step-by-step walkthrough of the process, be sure to watch this video:
Stay tuned to catch the next blog post and video in the series. For more information in the meantime, check out the Tanzu Kubernetes Grid site.