VMware NSX-T 2.5 was released last week, and the new VMware NSX Container Plugin 2.5 is now available as well. VMworld US 2019 is also over, with announcements of major innovations like VMware NSX Intelligence, enhanced security, and improved operability and observability. You can find more information in the NSX-T 2.5 release blog. I would like to take this opportunity to highlight the new container capabilities we have been working on for this release.

What’s New in NSX Container Plugin (NCP)

Simplified UI (Policy API) Support

With the introduction of the new intent-based Policy API and the corresponding Simplified UI, administrators noticed that objects created by NCP for Kubernetes were not visible in the new UI. Instead, they had to go to the Advanced Networking & Security tab, which corresponds to the old imperative APIs. Furthermore, all related objects like the Tier-0 Gateway, IP Block, and IP Pool had to be created under Advanced Networking & Security. I am happy to announce that starting with NCP 2.5, the NCP ConfigMap (ncp.ini) includes a parameter, policy_nsxapi, that can be set to True. In that case, NCP will call the new intent-based API, and all objects will be visible in the Simplified UI. If you plan to use this new option, please create all prerequisite objects (Tier-0 Gateway, IP Block, IP Pool) in the Simplified UI.
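As a sketch, the option goes into the ncp.ini supplied through the NCP ConfigMap; the section name below follows the sample configuration shipped with NCP, so verify it against your release's manifest:

```ini
; Excerpt from ncp.ini (delivered via the NCP ConfigMap).
; Section name assumed from the NCP sample config.
[nsx_v3]
; Use the intent-based Policy API so NCP-created objects
; appear in the Simplified UI.
policy_nsxapi = True
```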

New Topology - Shared Tier-1


New Topology – Shared Tier-1

Please take a careful look at the picture above! We have been provisioning a Tier-1 per namespace for a long time. We still support that, but now there is one more parameter in the NCP ConfigMap (ncp.ini). If we set single_tier_topology to True, NCP will create one common Tier-1 for the entire Kubernetes cluster and will connect a Segment (Logical Switch) per namespace to that same Tier-1. This improves scalability significantly by saving NSX resources, and it also allows better performance. NSX serves as the SDN layer for multiple Kubernetes clusters. By enabling this option we move the per-cluster stateful services to Tier-1, which allows an 8-way Active-Active distribution of Tier-0.
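The shared Tier-1 topology is enabled with a single ncp.ini setting; as with the previous example, the section name here is an assumption based on the NCP sample config:

```ini
; Excerpt from ncp.ini (delivered via the NCP ConfigMap).
[nsx_v3]
; Create one shared Tier-1 for the whole cluster
; instead of one Tier-1 per namespace.
single_tier_topology = True
```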

Single YAML File Applied to K8s Clusters


Simpler installation

There are multiple Kubernetes resources that are required in order for NCP to work properly. Among them are Custom Resource Definitions, Service Accounts, Cluster Role Bindings, the nsx-system namespace, Secrets, ConfigMaps, DaemonSets, and Deployments. We have merged all those definitions into a single YAML file, so once this file is applied to the k8s cluster, all requirements are created. The only preliminary step is to edit the two ConfigMaps with the details of your deployment. Those two ConfigMaps supply parameters to NCP and nsx-node-agent.
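For illustration, each of the two ConfigMaps embeds an ncp.ini as a data key inside the single installation YAML. The sketch below uses placeholder names and values; consult the manifest shipped with your NCP release for the exact structure:

```yaml
# Sketch of the NCP ConfigMap portion of the single installation YAML.
# Object names, section names, and values are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nsx-ncp-config
  namespace: nsx-system
data:
  ncp.ini: |
    [coe]
    cluster = k8s-cluster-1        ; your cluster name as tagged in NSX
    [nsx_v3]
    nsx_api_managers = 192.0.2.10  ; NSX Manager address (example IP)
```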

We have also moved the Open vSwitch user-space daemon into an additional container inside the nsx-node-agent pod. This pod now has three containers:

  1. nsx-node-agent – This container has the same name as the pod it is part of. The code in this container is triggered when a pod is created or deleted. It takes the pod's IP address, MAC address, and VLAN tag from Hyperbus and wires the pod to Open vSwitch on the node. Hyperbus is a secure communication channel between the hypervisor and nsx-node-agent inside the Kubernetes worker VM.
  2. nsx-kube-proxy – Pods attached to Open vSwitch bypass the host TCP/IP stack, so the iptables DNAT rules for Services (normally programmed by kube-proxy) are not applied. nsx-kube-proxy implements this logic inside Open vSwitch.
  3. nsx-ovs – This is the third container inside the nsx-node-agent pod. Its role is to keep the Open vSwitch user-space daemon running. If you SSH into one of the nodes and try to run OVS commands, they will fail because the daemon is not running directly on the node.
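Conceptually, the nsx-node-agent pod spec now lists these three containers. This abbreviated sketch uses the container names from the text but an illustrative image reference:

```yaml
# Abbreviated sketch of the nsx-node-agent pod's container list.
# Container names per the text; the image name is illustrative.
spec:
  containers:
    - name: nsx-node-agent   # wires new pods into OVS using info from Hyperbus
      image: nsx-ncp:2.5
    - name: nsx-kube-proxy   # implements Service load balancing inside OVS
      image: nsx-ncp:2.5
    - name: nsx-ovs          # keeps the OVS user-space daemon running
      image: nsx-ncp:2.5
```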

In the past, an admin had to log in to every k8s node and perform multiple steps to bootstrap it: install the NSX CNI plugin and Open vSwitch, create the OVS bridge, and add a vNIC to the bridge. These steps are no longer needed. The new nsx-ncp-bootstrap pod, provisioned as a DaemonSet, takes care of all of them, so we no longer need to log in to every node. It also manages the lifecycle of OVS and the CNI plugin.

New NSX NCP Bootstrap Pod



NSX-T has different categories for security policies in the Simplified UI. NCP places the allow rules for health checks in the Environment category. A good category for your admin policies is the Infrastructure category.

There is one more new ncp.ini parameter – baseline_policy_type. It accepts three possible values:

  1. allow_cluster – The default policy will be to permit the communication between all pods in all namespaces in the same Kubernetes Cluster.
  2. allow_namespace – The default policy will be to permit the communication between all pods in the same namespace but not across namespaces.
  3. none – This is the default behavior; no baseline rule is created. It is assumed that any baseline rule is created manually on the NSX Manager.
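In ncp.ini, the setting looks like the excerpt below; as before, the section name is an assumption based on the NCP sample config:

```ini
; Excerpt from ncp.ini (delivered via the NCP ConfigMap).
[nsx_v3]
; One of: allow_cluster | allow_namespace | none (default)
baseline_policy_type = allow_namespace
```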

The rules for the baseline policy are placed in the Application category. You may want to adjust this setting depending on the connectivity strategy configured in NSX: Allowlisting, Denylisting, or None.

NCP also puts all k8s Network Policy dFW rules in the Application category. NCP creates two policies (NSX sections) as soon as a k8s Network Policy is applied to a service.

  1. The first policy (NSX section) contains the dFW rules translated from the k8s Network Policy.
  2. The second policy (NSX section) contains one or two isolation rules with a DROP action for the service affected by the k8s Network Policy. If the policy covers one direction, for example ingress, one isolation rule toward the service is added. If it covers both directions, ingress and egress, two isolation rules are added: one from the service and one to the service.
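For example, an ingress-only Kubernetes Network Policy like the following (all names are illustrative) would be translated by NCP into one NSX section with the allow rules and a second section with a single DROP isolation rule toward the selected pods:

```yaml
# Illustrative ingress-only Network Policy; names and labels are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend            # pods this policy applies to
  policyTypes:
    - Ingress                 # one direction only -> one isolation DROP rule
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may reach backend
```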
NSX-T 2.5 Provides Single Pane of Glass


NSX-T 2.5 expands on networking, security, automation, and multi-cloud, but in this post I focused on cloud native. We continue delivering on simplification, usability, and security. NSX provides a single pane of glass for networking, security, and visibility across all Kubernetes clusters as well as traditional virtual machines and bare-metal servers.