Watch Full VMware Telco Cloud Platform 5G Demo - Simplifying CNF Operations with Automation

Simplifying Kubernetes Operations Series with Telco Cloud Platform: Part 2 – Deploying the Kubernetes Cluster


In Part 1, we discussed how to create the management and workload cluster templates; in this Part 2, let's deploy our new cluster on the infrastructure.

To do so, open the Cluster Instances inventory and select the option to deploy a Kubernetes cluster. The wizard walks through five simple steps:

  1. Select the infrastructure on which the cluster will be deployed.
  2. Select the cluster template that we created earlier.
  3. Define the Kubernetes cluster details.
  4. Configure the master node.
  5. Configure the worker nodes.

Let’s start with selecting where to deploy the cluster by using the template we just created. 

The automation and auto-discovery capabilities of VMware Telco Cloud Platform propose appropriate values for all the cluster resources by correlating the template requirements with the available resources of the selected infrastructure. This dramatically simplifies the configuration process.

We already defined both the master node and the worker node when we created the template.

First, select the CaaS infrastructure for the cluster to be deployed.

Next, select the cluster template for the management cluster that we created earlier.

Next, define the Kubernetes cluster details, including the network assignments.

For both the master node and the worker nodes, the only elements remaining at this point are the network assignments.

Here’s our worker node’s configuration. 

In the same way we did for template creation, we can review the configuration before deploying the cluster.  

All good, let's deploy. We then repeat similar steps to deploy a Workload cluster.

We can observe the instantiation process from the Cluster Instances inventory and validate that the cluster deploys successfully.

Once the cluster is instantiated, we can drill down and see its configuration and the nodes inside the cluster.

The Cluster Configuration tab displays the Kubernetes version, upgrade history, CNI and CSI configurations, syslog server details, any tools associated with the cluster, and the Harbor repository details. (Harbor is an open-source container image registry.)

The Master Nodes tab displays the details of the Master node, the labels attached to the node, and the network labels.

The Worker Nodes tab displays the existing node pools of a Kubernetes cluster. 

The Tasks tab displays the progress of the cluster-level tasks and their status. 

For a Management cluster, this tab displays the progress of its own tasks and of all the Workload cluster tasks managed by this cluster. It also displays the node pool tasks of all the Workload clusters.

For a Workload cluster, it displays the progress of cluster tasks and node pool tasks.

Let's take a look at where the cluster is deployed from the vSphere Client. As we can see, the Kubernetes clusters are deployed on the vSphere compute cluster, and the Workload cluster consists of three worker nodes with vSAN enabled.

The vSAN datastore provides native container storage capabilities that allow workloads to mount persistent volumes.
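For example, once the cluster is running, a workload can claim vSAN-backed storage with an ordinary PersistentVolumeClaim. A minimal sketch, assuming a StorageClass named `vsan-default` has been created from a vSAN storage policy (the class and claim names here are illustrative, not part of the demo):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cnf-data                    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                 # single-node read/write, typical for block volumes
  resources:
    requests:
      storage: 10Gi
  storageClassName: vsan-default    # illustrative class backed by a vSAN storage policy
```

A pod that references `cnf-data` in its volume spec would then have its persistent volume provisioned on the vSAN datastore by the CSI driver.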

For networking, we are using NSX-T Standard for management plane traffic and Antrea for container data plane networking, allowing communication between the different CNFs instantiated inside the worker nodes.
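Because Antrea implements the standard Kubernetes NetworkPolicy API, traffic between CNFs can also be restricted declaratively. A minimal sketch, assuming hypothetical CNF pod labels `app: amf` and `app: smf` and an illustrative port (none of these names come from the demo):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-smf-to-amf        # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: amf                  # hypothetical label on the target CNF pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: smf          # hypothetical label on the peer CNF pods
      ports:
        - protocol: TCP
          port: 8080            # illustrative service port
```

Applied to a namespace, this policy would allow ingress to the `amf` pods only from the `smf` pods on the given port, with Antrea enforcing the rules in the data plane.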

This concludes Part 2 of the series on deploying the Kubernetes cluster.

Stay tuned: next week we will proceed with the next blog in this series, covering the late binding capability with CNF onboarding.