When it comes to tenancy on the Kubernetes platform, there are multiple ways to slice the platform depending on how app-dev teams are structured, the projects they work on, or the production and development environments they need. Tenancy boundaries can be drawn anywhere from the infrastructure hosting the Kubernetes platform all the way down to Kubernetes namespaces within a single cluster.
With model 1 above, a single Kubernetes cluster sources infrastructure resources from a shared or dedicated infrastructure environment. Multiple teams access the cluster using namespaces assigned to each team. With this model, each tenant or team gets a logically separate space in which to deploy pods, persistent volume claims, cluster services, and so on. However, certain components still need to be shared, such as the ingress controller and DNS mappings. For example, an ingress controller routes HTTP and HTTPS traffic between external networks and services within a Kubernetes cluster. When an Ingress resource is defined to route traffic, the ingress controller watches Ingress resources across the whole cluster, not just a single namespace. This means different teams using the same cluster have to coordinate on service naming, ingress hostnames and paths, and so on.
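As a minimal sketch of this namespace-per-team model, the manifests below create namespaces for two hypothetical teams and attach a resource quota to one of them; the team names and limits are placeholders, not values prescribed by VMware Enterprise PKS.

```yaml
# Hypothetical per-team namespaces in a shared cluster (model 1).
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-b
---
# An example quota bounding what team-a can consume in its namespace;
# the limits shown are illustrative only.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "50"
    persistentvolumeclaims: "20"
```

Shared components such as the ingress controller still operate cluster-wide, which is why the naming coordination described above is needed.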
In model 2, rather than sharing a single Kubernetes cluster, multiple teams can have one or more dedicated Kubernetes clusters. This model offers better flexibility to app-dev teams: each team has full control of the resources within its clusters, including ingress, admission controllers, and DNS. Applications that need to talk across clusters can use Kubernetes Services or the ingress controller on each cluster to route traffic, as sketched below. Infrastructure can be shared or dedicated.
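For example, an application in one dedicated cluster could publish an API through that cluster's ingress controller so that workloads in another team's cluster can reach it over the external network. The sketch below uses the Ingress API version current in the PKS 1.3 timeframe (extensions/v1beta1); the hostname, namespace, and service name are hypothetical.

```yaml
# Hypothetical Ingress exposing an API from one dedicated cluster so that
# applications in other clusters can reach it across the external network.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orders-api
  namespace: team-a
spec:
  rules:
  - host: orders.team-a.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: orders
          servicePort: 8080
```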
Supporting Tenancy Models
VMware Enterprise PKS supports both of the above models. However, certain environments, such as those of service providers or platform operators, may need deeper levels of tenancy to support multiple customers or teams, or to provide common operator services. VMware Enterprise PKS meets these requirements by acting as the single control plane that service providers can use to manage tenancy and provide a set of shared services. With version 1.3, VMware Enterprise PKS enables a deeper tenancy model by creating tenant isolation in the network stack used to support Kubernetes clusters. This tenant isolation provides isolated network paths per tenant, support for overlapping IP addresses, route filtering, and more.
VMware Enterprise PKS derives infrastructure resources like compute and storage from VMware vSphere, and networks from VMware NSX-T, to provision Kubernetes clusters. Specifically, it can consume a vCenter cluster with multiple ESXi hosts for the compute and storage needed to support Kubernetes nodes as well as the application pods or containers within those nodes. VMware Enterprise PKS creates a hierarchical virtual network structure in NSX-T to support pod networking and Kubernetes cluster networking, as well as the service and load-balancing needs of application pods within a cluster. All the virtual machines provisioned for cluster nodes are connected to logical switches created in NSX-T, and the pods within the cluster nodes are on their own logical switches. These logical switches are uplinked to a logical T1 router, which in turn is uplinked to a logical T0 router. The T0 router serves as the gateway between the logical and physical networks.
Apart from providing switching and routing to cluster nodes and pods, NSX-T also provisions logical load balancers when an application comprising multiple pods needs to be exposed to external networks. By default, pods within the Kubernetes nodes are on a private pod network that is not accessible externally. To expose pods externally, Kubernetes has the Service object; one way to expose a pod is with a Service of type LoadBalancer. Whenever a Service of type LoadBalancer is requested through the Kubernetes API with VMware Enterprise PKS, a VIP is assigned on the NSX-T logical load balancer dedicated to that Kubernetes cluster.
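A minimal sketch of such a Service is shown below; the names, labels, and ports are placeholders. With VMware Enterprise PKS, creating an object like this is what triggers the VIP assignment on the cluster's NSX-T logical load balancer.

```yaml
# Hypothetical Service of type LoadBalancer. In a VMware Enterprise PKS
# cluster, NSX-T assigns a VIP on the cluster's logical load balancer
# when this Service is created, making the pods reachable externally.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
  namespace: team-a
spec:
  type: LoadBalancer
  selector:
    app: web-frontend
  ports:
  - port: 80
    targetPort: 8080
```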
Using a Dedicated T0 Router for Each Tenant
With VMware Enterprise PKS version 1.3, instead of a single T0 router backing all the Kubernetes clusters, a dedicated T0 router can be used per tenant, giving service providers the flexibility to manage multiple tenants more efficiently while still providing access to common services. This model also gives tenants more flexibility by enabling them to reuse the CIDR blocks they need, while providing more room to scale out.
In addition, the Tenant T0 routers can be connected to a shared T0 router that can talk to the control plane components of VMware Enterprise PKS. Individual firewall rules can be applied at each T0 to deny east-west traffic originating from other tenant networks.
Isolating Tenants and Preserving Independence
In a multi-tenant environment, especially for service providers, the data traffic between each of the service provider's tenants needs to be isolated. In addition, tenants need to be able to bring their own IP address ranges to assign to Kubernetes pods, services, and SNATs. More importantly, tenants should have autonomy over the IP address ranges they specify without having to worry about overlaps with other tenants. With support for multiple Tier 0 routers, VMware Enterprise PKS 1.3 offers this level of isolation and allows tenants to operate independently even when their IP addresses overlap.
Kubernetes clusters can be deployed on targeted Tier 0 routers using the network profile feature: by specifying the Tier 0 router ID in a network profile, a cluster is placed on that particular Tier 0 router, as sketched below.
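As an illustrative sketch, a network profile is a small JSON file that references the target Tier 0 router by its NSX-T UUID; the profile name, description, and UUID below are placeholders (see the linked documentation for the exact schema).

```json
{
  "name": "np-tenant-a-t0",
  "description": "Place clusters on tenant A's dedicated Tier 0 router",
  "parameters": {
    "t0_router_id": "e0a8ce56-0000-0000-0000-000000000000"
  }
}
```

The profile is then registered with the PKS CLI (pks create-network-profile) and referenced at cluster creation time with the --network-profile flag, so that the new cluster's T1 routers uplink to the tenant's dedicated T0.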
With the multi-T0 model, platform operators and service providers can offer a more isolated environment per tenant, and tenant admins gain more flexibility in defining IP address scopes as well as better scale.
Resources
- A detailed walkthrough of the Multi-T0 feature: [Video] VMware Enterprise PKS 1.3 Demo—Support for Multi Tier 0 Router with NSX-T
- Multi-T0 documentation and setup.