(By Michael West, Technical Product Manager, VMware)
vSphere 7 with Kubernetes enables operations teams to deliver both infrastructure and application services as part of the core platform. The Network service automates software-defined networking for both the Kubernetes clusters embedded in vSphere and the Tanzu Kubernetes clusters deployed through the Tanzu Kubernetes Grid Service for vSphere. The term “service” has become somewhat overloaded. In this context, I am not referring to a Kubernetes Service object specifically, but to the more generic sense of the word: a particular capability that is technically composed of several technologies across products. This blog and demonstration video are the first of two parts and explore the automated networking of the Supervisor cluster through the vSphere Network service. A follow-on blog will dive into Tanzu Kubernetes cluster networking.
vSphere 7 with Kubernetes Services
As a starting point, let's briefly explore the services exposed through vSphere 7 with Kubernetes. Operations teams enable the Supervisor Kubernetes cluster on vSphere clusters through a simple wizard in the vSphere Client. That Supervisor cluster provides the Kubernetes backbone onto which we have built services that can be consumed by both operations and DevOps teams. The first service exposed in the Tanzu Runtime Services is the Tanzu Kubernetes Grid Service for vSphere. The TKG Service allows DevOps teams to create and lifecycle-manage Kubernetes clusters on demand. It leverages the Hybrid Infrastructure Services to create VMs, configure networking and storage, provide container registries, and even deploy pods directly to ESXi hosts. Our focus here is the Network service.
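To make the TKG Service concrete, here is a minimal sketch of the kind of manifest a DevOps user might apply against the Supervisor cluster to request a Tanzu Kubernetes cluster. The cluster name, namespace, VM class, and storage class below are hypothetical placeholders for illustration; the exact schema is defined by the TanzuKubernetesCluster CRD shipped with your release.

```yaml
# Minimal sketch of a TanzuKubernetesCluster request. All names
# (cluster, namespace, VM class, storage class) are illustrative
# placeholders; check the CRD schema in your environment.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-1
  namespace: banking              # a vSphere Namespace
spec:
  distribution:
    version: v1.16                # Tanzu Kubernetes release to deploy
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

Applying a manifest like this is all it takes; the TKG Service, working with the Network service, wires the resulting cluster nodes into NSX automatically, which is the subject of the follow-on blog.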
The Network service provides an abstraction over the underlying software-defined networking used in the Supervisor cluster. The current version of vSphere 7 with Kubernetes supports NSX-T as the provider of networking services. Operations teams deploy NSX-T 3.0, and the vSphere Network service works through the NSX Container Plugin (NCP) to automate the lifecycle of networking resources consumed by the Kubernetes clusters. For a technical overview of vSphere 7 with Kubernetes services, try this link:
vSphere Network Service and NSX
NSX can be configured to use port groups created directly on a version 7.0 vSphere Distributed Switch (VDS) that also connects non-NSX workloads; there is no need for a VDS dedicated to NSX. In my lab, the NSX overlay network is configured on the vDS-Transport port group. The vCenter management network carries communication between vCenter and NSX Manager, as well as the Supervisor control plane nodes. The vDS-Uplink port group carries traffic connecting to non-NSX networks.
vSphere 7.0 Distributed Switch Configuration
When the vSphere 7 with Kubernetes Supervisor cluster is enabled, the Network service creates segment port groups on the VDS. Corresponding network segments are created in NSX, along with a Tier-1 gateway that provides connectivity between the segments. Notice that the NSX segment port groups are not created on a separate switch but share the existing VDS. NSX segments provide network connections to which you can attach virtual machines or pods. VMs or pods connected to the same segment can then communicate with each other over tunnels between hypervisors; traffic to other segments is routed through the Tier-1 gateway. Segments were called logical switches in earlier versions of NSX.
vSphere 7.0 Distributed Switch with NSX segment port groups
NSX Container Plugin (NCP)
NCP is a controller that runs as a Kubernetes pod in the Supervisor cluster control plane. It watches for network resources added to etcd through the Kubernetes API and orchestrates the creation of the corresponding objects in NSX. Each VDS segment port group gets a corresponding NSX segment, and each of those segments is assigned a subnet from the pod CIDR defined during Supervisor cluster deployment. The segments are connected to a Tier-1 gateway, and the segment containing the Supervisor control plane is attached to an external load balancer.
NSX Manager Network Topology
VI admins create namespaces in vCenter. This creates both a vCenter namespace object and a Kubernetes namespace in the Supervisor cluster; we refer to the pair as a vSphere Namespace. A VDS port group and an NSX segment are also created for each namespace. The NSX segment is isolated through rules created in the NSX Distributed Firewall that deny ingress and egress traffic by default. DevOps users can then leverage the Kubernetes Network Policy integration to grant granular access to the applications they deploy in the namespace.
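As a sketch of what that Network Policy integration looks like in practice, the manifest below opens a narrow path through the default-deny posture: only pods in the same namespace may reach pods labeled app=frontend, and only on TCP 8080. The namespace, labels, and port are hypothetical examples, not values from my lab; NCP translates a policy like this into NSX Distributed Firewall rules.

```yaml
# Hypothetical example: permit in-namespace traffic to pods labeled
# app=frontend on TCP 8080. All other ingress to those pods remains
# denied by the default NSX Distributed Firewall rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-allow-internal
  namespace: banking
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}     # any pod in the same namespace
    ports:
    - protocol: TCP
      port: 8080
```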
Banking Namespace NSX Segment
Pod and Load Balancer Type Service Creation
Users have the choice of creating Tanzu Kubernetes clusters with the TKG Service for vSphere and deploying pods into those clusters, or using the vSphere Pod Service to deploy pods directly onto the ESXi hosts through the Supervisor cluster. Pods deployed through the Supervisor cluster are connected to the NSX segment for their namespace and acquire an IP from the subnet range assigned to that namespace. Kubernetes Services provide grouping and service discovery for pods, and Services of type LoadBalancer enable ingress traffic to pods from outside the cluster. Creating a LoadBalancer-type Service causes NCP to orchestrate the creation of NSX virtual servers associated with the load balancer created during the initial Supervisor cluster deployment. Each virtual server is assigned an IP and port that is used to access the service.
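The trigger for that NCP orchestration is an ordinary Kubernetes Service manifest; the name, namespace, labels, and ports below are illustrative placeholders:

```yaml
# Hypothetical example: expose pods labeled app=frontend outside the
# cluster. On creation, NCP provisions an NSX virtual server on the
# Supervisor cluster's load balancer and assigns it an external IP.
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
  namespace: banking
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80            # IP:port exposed by the NSX virtual server
    targetPort: 8080    # container port on the backing pods
```

Once NCP finishes, the address assigned to the NSX virtual server appears in the Service's EXTERNAL-IP column (for example, via kubectl get service frontend-lb), and traffic to that IP on port 80 is balanced across the selected pods.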
Pods Run on ESXi – connected to Namespace NSX Segment
Let’s see it in action
The following demonstration shows the high-level architecture of the network components in the Supervisor cluster and how the vSphere Network service automates the creation of those components in vCenter and NSX. For more information on vSphere 7 with Kubernetes, check out our product page: https://www.vmware.com/products/vsphere.html