Container Service Extension 3.1.2 is now GA! The Container Service Extension introduced Tanzu Kubernetes Grid cluster support in 3.0.4 and has been evolving rapidly. Release 3.1.1 added support for automated ingress load balancing with NSX-T Advanced Load Balancer through the Cloud Provider Interface (CPI) plugin. The CPI plugin, with VMware Cloud Director as its endpoint, enables secure ingress access to Services on TKG clusters, while the Container Storage Interface (CSI) plugin dynamically creates and manages Persistent Volumes per TKG cluster, providing volume persistence for stateful applications. This blog post reviews the new features available in Container Service Extension for Tanzu Kubernetes Grid.
Proxy Configuration for TKG Clusters:
The proxy configuration allows users to route outbound traffic through a proxy. Today, when a TKG cluster boots for the first time, it requires internet access to download the CPI, CSI, and CNI plugins. This configuration enables customers to provision Tanzu Kubernetes clusters with CSE in internet-restricted environments. The proxy configuration is a global setting for VMware Cloud Director and Container Service Extension. The provider admin can configure proxy settings in the CSE server configuration file as follows:
#extra_options:
#  tkgm_http_proxy: [http proxy url with port]
#  tkgm_https_proxy: [https proxy url with port]
#  tkgm_no_proxy: [comma separated list of IP addresses]
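For illustration, here is the same extra_options block uncommented with placeholder values. The hostname, port, and exclusion list below are examples only; substitute your own proxy endpoint and the addresses your clusters should reach directly:

```yaml
# Illustrative values only -- not defaults shipped with CSE.
extra_options:
  tkgm_http_proxy: http://proxy.example.com:3128    # http proxy url with port
  tkgm_https_proxy: http://proxy.example.com:3128   # https proxy url with port
  tkgm_no_proxy: localhost,127.0.0.1,192.168.7.2    # comma separated IP addresses to bypass
```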
When customers or their TKG cluster authors launch a new cluster, each node of the cluster (worker and control plane VMs) receives the configured proxy settings in an ‘http-proxy.conf’ file. The following figure shows the outbound traffic flow for TKG clusters in VMware Cloud Director.
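As a sketch of what lands on each node, the ‘http-proxy.conf’ file commonly takes the shape of a systemd environment drop-in; the exact path and keys are an assumption here and may vary by release, but the values mirror the tkgm_* settings from the server configuration:

```ini
# Hypothetical contents of http-proxy.conf on a cluster node
# (layout assumed; values come from the CSE extra_options settings)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.7.2"
```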
Considerations
Existing TKG clusters from previous releases continue to use their legacy settings for outbound internet access. However, when a user scales a TKG cluster deployed from release 3.1.1, the new worker nodes receive the proxy configuration.
Container Service Extension supports proxy configuration for TKG clusters only.
The customer org admin needs to create the applicable routed access to the proxy server.
Force Delete TKG Clusters
Starting with 3.1.2, the TKG cluster author role can delete stranded TKG clusters using the following CLI command: “vcd cse cluster delete <cluster name> -f”. When this command executes successfully, as shown in the example below, the associated CPI plugin resources, such as the LB service, Service Engine group, and NAT rules, are removed along with the referenced cluster.
vcd cse cluster list
Name           Org    Owner          VDC    K8s Runtime    K8s Version            Status
-------------  -----  -------------  -----  -------------  ---------------------  ------------------
myk8s1         org1   administrator  ovdc1  TKGm           TKGm v1.21.2+vmware.1  CREATE:IN_PROGRESS
myk8scluster5  org1   cseadmin       ovdc1  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS

vcd cse cluster delete myk8scluster5 -f
Are you sure you want to delete the cluster? [y/N]: y
Run the following command to track the status of the cluster:
vcd task wait d4d63ff5-a6df-4c3e-b891-d696b7402b0b

vcd cse cluster list
Name    Org    Owner          VDC    K8s Runtime    K8s Version            Status
------  -----  -------------  -----  -------------  ---------------------  ------------------
myk8s1  org1   administrator  ovdc1  TKGm           TKGm v1.21.2+vmware.1  CREATE:IN_PROGRESS
Further Reading: