Joseph Griffiths, VMware Solutions Architect, Office of the CTO Ambassador

Out of the box, the open-source version of Kubernetes struggles to provide secure multi-tenant ingress to clusters, and standing up the Kubernetes cluster API and worker nodes with all the required networking can be a challenge. With VMware PKS, however, you can radically simplify many operational aspects of running Kubernetes in production, and the automatic setup of networking at cluster creation is a prime example.

To illustrate how VMware PKS automatically sets up networking, this blog post provides a deeper dive into the networks that are created when you issue the following command:
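pks create-cluster my-cluster --external-hostname my-cluster.corp.local --plan small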

 

This command creates a new Kubernetes cluster named my-cluster with an external name of my-cluster.corp.local using the small plan. Plans are defined as part of the VMware PKS installation and can be resized at any time. The plan includes:

  • The number of master/etcd nodes and their size
  • The number of worker nodes and their size
  • The availability zone to use
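
You can also list the plans that your PKS installation offers from the command line with the PKS CLI, which shows each plan's name and description:

pks plans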

You can see the small plan inputs in the following screenshot:

You can see the status of the create-cluster command with the cluster command:
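pks cluster my-cluster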

 

Once you issue the command, the master/etcd and worker nodes are deployed along with all the required networking. Several networks are created during cluster creation, and each one includes the cluster's UUID, which makes them simple to track in NSX-T. Searching in NSX-T for the UUID provides the following information:

As you can see, the operation has created several logical routers to handle VMware PKS traffic:

  • One T1 router for the Kubernetes master node (pks-UUID-cluster-router)
  • One T1 router for the load balancer (lb-pks-UUID-cluster-router)
  • Four T1 routers, one per namespace, which can be found by using the following command:
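kubectl get namespaces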

 

To locate what is running inside each namespace, you can run the following command:
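kubectl get pods --all-namespaces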

 

Here’s a description of what each namespace is used for:

Namespace     What it is used for
default       The default namespace for containers
kube-public   Resources meant to be readable by all users, such as the cluster-info ConfigMap
kube-system   Heapster, kube-dns, kubernetes-dashboard, metrics-server, monitoring-influxdb, telemetry-agent
pks-system    Fluent, sink-controller

 

When you add namespaces to the Kubernetes cluster, additional T1 routers are deployed. With VMware PKS, all of this is handled automatically, making it simple to deploy a Kubernetes cluster with integrated networking. This is best illustrated by adding a namespace called new-namespace to our cluster using this command:
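kubectl create namespace new-namespace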

 

You can see the new namespace by using the following command:
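kubectl get namespaces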

 

In NSX, you can use the UUID to check that a new T1 router has been deployed for the new namespace:
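If you would rather check from the command line than in the NSX-T Manager UI, one approach is to query the NSX-T REST API and filter the logical routers on the cluster's UUID. This is only a sketch: the manager hostname and credentials are placeholders for your environment, and it assumes the jq utility is installed.

# nsx-mgr.corp.local and admin:'VMware1!' are placeholders for your NSX-T Manager
curl -ks -u 'admin:VMware1!' https://nsx-mgr.corp.local/api/v1/logical-routers \
  | jq -r '.results[].display_name' | grep "<cluster-UUID>"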

Removal of the namespace also cleans up all the networking constructs, making the experience seamless for end users:
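kubectl delete namespace new-namespace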

 

In NSX, you can see that the T1 router for new-namespace has been removed:

As illustrated, the tight integration between Kubernetes and NSX-T built into VMware PKS allows for easier administration of container-based environments.

Interested in finding out more about how VMware PKS automates networking for Kubernetes clusters? Check out the following videos:

  • Establishing multi-tenancy by creating Tier 0 routers with VMware PKS and NSX

  • Automating networking of Kubernetes clusters with NSX

  • Establishing persistent security controls with NSX by using a Kubernetes policy spec