Contributions from: Alka Gupta, Prasad Kalpurekkal
Pivotal Container Service (PKS) aims to simplify how enterprises deploy, run and manage Kubernetes clusters on any cloud. For detailed documentation of PKS installation and configuration, go here. For details on configuring PKS with NSX-T Data Center, go here.
Every enterprise wants to run containers in production. However, the primary questions being asked are: “How do I get Kubernetes to work in my data center? How do I simplify deployment of Kubernetes clusters? What about networking and security?”
PKS answers these questions with a feature set tailored to the requirements of enterprises. Read about PKS features in detail here.
One of the features PKS offers is tight integration with NSX-T Data Center, enabling advanced networking and security for container-based emerging application architectures, just as it does for traditional 3-tier apps. In these environments, NSX-T Data Center provides Layer 3 container networking and advanced networking services such as built-in load balancing, micro-segmentation, multi-tenancy, central visibility with a central SDN controller, network topology choices, and more. We demonstrated this at Network Field Day 17, which you can see here.
In this blog, we call out the work done at the VMware Global Solutions Partner lab on configuring Pivotal Container Service (PKS) with NSX-T. We specify some of the basic must-have requirements that often get skipped or overlooked.
Understanding the physical layout of your corporate network is a must, as it lays the concrete foundation of any design. Below is the simple wire map of our POC lab.
- A minimum of 2 physical NICs on each host is a must
- To support NSX-T Data Center (NSX-T) overlay traffic, an MTU of at least 1600 must be configured from the ESXi hosts to the upstream switches
- BGP or, at a minimum, static routing needs to be configured
- Sufficient compute and storage capacity is required
Note: We are using Dell PowerEdge R640 vSAN Ready Nodes for the simplicity, agility, manageability, and cost savings of VMware vSAN
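The hardware checklist above can be sanity-checked from each ESXi host's shell before going further. A minimal dry-run sketch that prints the commands to review rather than executing them, assuming standard ESXi CLI tooling:

```shell
# Dry-run sketch: print the per-host pre-flight checks for review.
NIC_CHECK="esxcli network nic list"   # expect at least 2 physical NICs
MTU_CHECK="esxcfg-vswitch -l"         # virtual switch MTU must be >= 1600
echo "$NIC_CHECK"
echo "$MTU_CHECK"
```

Run the printed commands on each host and confirm the NIC count and MTU values before moving on.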
- VMware vSphere 6.5 GA Enterprise Plus or vSphere with Operations Management Enterprise Plus, or later
- PKS and NSX-T components can be downloaded from here
Other Recommended Software:
- VMware Harbor Container Registry for PCF – to securely access a container registry within the data center
- Wavefront by VMware – for monitoring Kubernetes (K8s) and providing deeper metrics
- VMware vRealize Operations Manager (vROps) – for PKS infrastructure monitoring and alerting. vROps also comes with the vROps Management Pack for Container Monitoring, which monitors the K8s cluster (namespaces, clusters, replica sets, nodes, pods, and containers)
- VMware vRealize Log Insight (vRLI) – for analyzing logs from vSphere, NSX-T, BOSH, and the PKS control plane forwarded to external syslog servers
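For vRLI specifically, each ESXi host can forward its logs with the standard esxcli syslog commands. A dry-run sketch; the vRLI hostname `vrli.lab.local` is a placeholder for your environment:

```shell
# Dry run: print the commands that point an ESXi host's syslog at vRLI.
LOGHOST="udp://vrli.lab.local:514"    # placeholder vRLI endpoint
SET_CMD="esxcli system syslog config set --loghost=${LOGHOST}"
RELOAD_CMD="esxcli system syslog reload"
FW_CMD="esxcli network firewall ruleset set -r syslog -e true"
printf '%s\n' "$SET_CMD" "$RELOAD_CMD" "$FW_CMD"
```

The firewall command opens the outbound syslog ruleset so the host can actually reach the log server.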
Below is a checklist with all the information required to complete a PKS with NSX-T deployment successfully.
Following are the deployment options that are currently available:
1. Preparing vSphere Environment:
Deploy and configure your vSphere environment using the installation guide available at VMware.
- All hosts have at least 1 free vmnic available for overlay traffic
- The MTU of each virtual switch is set to 1600 or higher
- vSphere components are synced with NTP
- FQDNs are resolvable
- We also recommend separate VLANs for system traffic
- vCenter, NSX-T components, and ESXi hosts must be able to communicate with each other
- Port groups created for VTEP and LB MGMT MUST be configured as trunk ports
- 2 resource pools need to be pre-created; their placement depends on the deployment option chosen:
Note: If you are going with deployment option 1, create both resource pools (PKS-MGMT for the PKS control plane VMs and PKS-COMP for the K8s workloads) on the compute cluster itself; otherwise (deployment option 2), use the management cluster for PKS-MGMT and the compute cluster for PKS-COMP
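The two resource pools can be created in the vSphere Client or scripted with the govc CLI. A dry-run sketch for deployment option 2; the datacenter and cluster names (DC, MGMT-Cluster, COMP-Cluster) are placeholders:

```shell
# Dry run: print the govc commands that create the two PKS resource pools.
MGMT_POOL="/DC/host/MGMT-Cluster/Resources/PKS-MGMT"   # PKS control plane VMs
COMP_POOL="/DC/host/COMP-Cluster/Resources/PKS-COMP"   # K8s workloads
echo "govc pool.create ${MGMT_POOL}"
echo "govc pool.create ${COMP_POOL}"
```

For deployment option 1, point both paths at the compute cluster instead, per the note above.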
2. Deploying & Configuring NSX-T Components:
For details on installing and configuring NSX-T components with PKS, refer to the documentation here.
- After deploying the NSX-T components you won't be able to change the VMs' IP settings, so please make sure all the settings are correct
- The DNS server and NTP server should be reachable
- A minimum of 3 NSX Controllers is recommended
- The NSX Edge node MUST be deployed in Active-Standby mode
- The NSX Edge node MUST be deployed using the LARGE configuration (8 vCPUs, 16 GB of RAM, and 120 GB of storage)
- The UPLINK profile MUST have its MTU configured to at least 1600
- When using the above topology, all hosts from both clusters MUST be prepared with NSX-T and added as transport nodes
- The T0 router peers with the upstream physical switches using static or dynamic routing
- NAT rules are appropriately configured for your environment
On NSX-T Data Center Components –
- Successful communication with the NSX-T Manager, Controllers, and Edge nodes via SSH
- Check that the NSX-T components can reach the DNS and NTP servers
- On NSX Manager, check that the ESXi hosts show as healthy transport nodes
- Verify the connectivity status between the T0 router and the physical infrastructure
- Verify that VMs communicate successfully over the NSX-T L2 and L3 logical networks
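Several of these checks can also be scripted against the NSX Manager REST API. A dry-run sketch with curl; the manager hostname and the admin user are placeholders for your environment:

```shell
# Dry run: print NSX Manager API health checks (GET requests).
NSX_MGR="nsxmgr.lab.local"                 # placeholder NSX Manager FQDN
API="https://${NSX_MGR}/api/v1"
echo "curl -k -u admin ${API}/cluster/status"      # management cluster health
echo "curl -k -u admin ${API}/transport-nodes"     # list of transport nodes
```

The JSON responses make it easy to confirm in one pass that the cluster is stable and that every prepared host appears as a transport node.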
On vCenter –
- In the physical adapters view in vCenter, a new host switch MUST be visible on the vmnicX that was configured for the N-VDS
On ESXi –
- Open an SSH session on the hosts and verify the new N-VDS that was created while configuring NSX-T, using the command: esxcfg-vswitch -l
- Verify the details of the new VMkernel interface for the TEP (Tunnel Endpoint) using the command: esxcfg-vmknic -l
- Note that the IP addresses allocated are from the VTEP IP pool that was created while configuring NSX-T
- Do a vmkping test to confirm communication with the TEPs on the other transport nodes using the command: vmkping ++netstack=vxlan <vmknic IP>
- Check that the MTU size is set to 1600 or more, and verify that the TEP interfaces communicate with a larger packet size using the command: vmkping ++netstack=vxlan <destination VTEP IP> -d -s <packet size>
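As a worked example of the packet-size check above: with a 1600-byte MTU, the largest don't-fragment ICMP payload is 1600 minus the 20-byte IP header and the 8-byte ICMP header, i.e. 1572:

```shell
# Compute the largest vmkping payload that fits in a 1600-byte MTU.
MTU=1600
PAYLOAD=$((MTU - 20 - 8))   # subtract IP (20) and ICMP (8) headers
echo "vmkping ++netstack=vxlan <destination VTEP IP> -d -s ${PAYLOAD}"
```

If this ping fails while a default-size vmkping succeeds, the MTU is being clamped somewhere between the hosts and the upstream switches.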
- The HyperBus interface vmk50 might be missing on ESXi hosts, which will lead to container deployment failures, so please verify it on the hosts where your K8s containers will be deployed using the command: esxcfg-vmknic -l
If the HyperBus interface vmk50 is missing on the ESXi hosts, below is the workaround to create the interface manually –
1. Retrieve the vmk50 port ID using the CLI on the ESXi host: net-dvs | grep vmk50 -C 10
2. Create the vmk50 interface on the ESXi host: esxcli network ip interface add -P <port-id from step-1> -s DvsPortset-0 -i vmk50 -N hyperbus
3. Assign an IP address to the vmk50 interface: esxcfg-vmknic -i 169.254.1.1 -n 255.255.0.0 -s DvsPortset-0 -v <port-id from step-1> -N hyperbus
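The steps above can be sketched as one dry-run script; PORT_ID stands for the port ID retrieved in step 1 and must be filled in before the printed commands are run on the host:

```shell
# Dry run: print the vmk50 HyperBus workaround commands for review.
PORT_ID="<port-id from step-1>"   # fill in from: net-dvs | grep vmk50 -C 10
ADD_CMD="esxcli network ip interface add -P ${PORT_ID} -s DvsPortset-0 -i vmk50 -N hyperbus"
IP_CMD="esxcfg-vmknic -i 169.254.1.1 -n 255.255.0.0 -s DvsPortset-0 -v ${PORT_ID} -N hyperbus"
printf '%s\n' "$ADD_CMD" "$IP_CMD"
```

After applying the commands, re-run esxcfg-vmknic -l to confirm vmk50 now shows with the 169.254.1.1 HyperBus address.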
3. Deploying & Configuring PKS components (Ops Manager, BOSH VM, PKS VM, PKS CLI):
For installing and configuring the PKS VMs, VMware has already created good documentation, available here.
- Ops Manager
a. When deploying Ops Manager using deployment option 1, please make sure to select the vDS port group instead of a logical switch, or else the deployment will fail with the error below.
Once the OVA is successfully deployed, and before powering on the Ops Manager VM, make sure to change the port group to the logical switch
b. In the case of deployment option 2, while configuring Ops Manager for multiple clusters with non-shared datastores, specify both non-shared datastores, comma-separated, for both the Ephemeral and the Persistent disk datastore placement –
c. Also, in the Ops Manager configuration, select Standard vCenter Networking instead of NSX, as the NSX-T configuration MUST be selected while configuring the PKS tile.
- PKS Tile –
a. While configuring networking for the PKS tile, make sure to select NSX-T and provide the details of the NSX-T components
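Once the tile is applied, the deployment can be smoke-tested end to end with the PKS CLI. A dry-run sketch; the API hostname, username, cluster name, and plan are placeholders for your environment:

```shell
# Dry run: print a minimal PKS CLI smoke test.
PKS_API="api.pks.lab.local"   # placeholder PKS API FQDN
echo "pks login -a ${PKS_API} -u pks-admin -k"
echo "pks create-cluster k8s-1 --external-hostname k8s-1.lab.local --plan small"
echo "pks cluster k8s-1"      # poll until the last action reports success
```

A successful create-cluster run is the quickest confirmation that BOSH, the PKS control plane, and the NSX-T integration are all wired together correctly.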