I am sometimes approached with questions about the details of NSX-T integration with OpenShift. As you may know, NSX-T is packaged and integrated with Pivotal Container Service (PKS), and it also fully integrates with Pivotal Application Service (PAS, formerly known as PCF) as well as with vanilla Kubernetes, but what you may not know is how NSX-T integrates with Red Hat's OpenShift. This post aims to shed some light on that integration. In the examples below I am using OpenShift Origin (aka OKD), but for a supported solution you need to go with OpenShift Container Platform. The same NSX-T instance can provide networking, security, and visibility to multiple OpenShift clusters.

 

Example Topology

 

In this topology we have a T0 router that connects the physical and virtual worlds. We also have a T1 router acting as the default gateway for the OpenShift VMs. Each of those VMs has two vNICs: one vNIC is connected to the Management Logical Switch for accessing the VMs, and the second vNIC is connected to a disconnected Logical Switch and is used by nsx-node-agent to uplink the pod networking. The load balancer used for OpenShift Routes, plus each project's T1 routers and Logical Switches, are created automatically later when we install OpenShift. In this topology we use the default OpenShift HAProxy router for all infra components such as Grafana, Prometheus, the Console, and the Service Catalog. This means that the DNS records for the infra components need to point to the infra nodes' IP addresses, since HAProxy uses the host network namespace. This works well for infra routes, but to avoid exposing the infra nodes' management IPs to the outside world we will publish application-specific routes on the NSX-T load balancer. The topology here assumes 3 OpenShift master VMs and 4 OpenShift worker VMs (two for infra and two for compute). If you are interested in a POC-type setup with one master and two compute nodes, you can refer to the YouTube video below.

 

Prerequisites

ESXi hosts requirements

Although NSX-T supports different kinds of hypervisors, I will focus on vSphere here. The ESXi servers that host the OpenShift node VMs must be prepared as NSX-T Transport Nodes.

Virtual Machine requirements

OpenShift node VMs must have two vNICs:

  1. A management vNIC connected to the Logical Switch that is uplinked to the management T1 router.
  2. A second vNIC that carries two NSX tags, so that the nsx-container-plugin (NCP) knows which port to use as the parent VIF for all pods running on that particular OpenShift node.

 

The tags need to be as follows:

* Tag: <node_name>, Scope: ncp/node_name
* Tag: <cluster_name>, Scope: ncp/cluster

* Note that the order in the UI is the reverse of the order in the API.

The node_name must be exactly the name that the kubelet sees, and the cluster name must match the value specified as nsx_openshift_cluster_name in the Ansible hosts file shown below.

We need to make sure that the proper tags are applied to the logical port of the second vNIC on every node.
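One way to double-check this (besides the NSX UI) is to query the logical ports through the NSX Manager API. The snippet below is only a sketch; the Manager address, credentials, and tag values are placeholders, and the port to inspect is the one backing the node's second vNIC.

    # List logical ports and look at their tags (Manager address/credentials are examples).
    curl -k -u admin:'VMware1!' https://nsx-manager.corp.local/api/v1/logical-ports | \
      python -m json.tool | grep -B2 -A6 'ncp/'

    # The port of the second vNIC should show tags similar to:
    #   "tags": [
    #     { "scope": "ncp/cluster",   "tag": "ocp-cluster1" },
    #     { "scope": "ncp/node_name", "tag": "node1.corp.local" }
    #   ]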

NSX-T requirements

The following objects need to be pre-created in NSX so that they can be referenced later in the Ansible hosts file (an API sketch for two of them follows the list):

  1. T0 Router
  2. Overlay Transport Zone
  3. IP Block for POD networking
  4. IP Block for routed (NoNAT) POD networking – optional
  5. IP Pool for SNAT – by default, the subnet assigned to each Project from the IP Block in point 3 is routable only inside NSX. NCP uses this IP Pool to provide connectivity to the outside world.
  6. Top and bottom firewall sections in the dFW (optional). NCP will place the Kubernetes Network Policy rules between these two sections.

In addition, the Open vSwitch and CNI plugin RPMs need to be hosted on an HTTP server reachable from the OpenShift node VMs (http://1.1.1.1 in this example).
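These objects can of course be created in the NSX Manager UI; purely as an illustration, the pod IP Block and the SNAT IP Pool could also be created through the Manager API roughly like this (names, CIDRs, and the Manager address are made-up example values):

    # Create the IP Block used for pod networking (example values).
    curl -k -u admin:'VMware1!' -H 'Content-Type: application/json' \
      -X POST https://nsx-manager.corp.local/api/v1/pools/ip-blocks \
      -d '{"display_name": "ocp-pod-block", "cidr": "10.4.0.0/16"}'

    # Create the IP Pool that NCP will use for SNAT (example values).
    curl -k -u admin:'VMware1!' -H 'Content-Type: application/json' \
      -X POST https://nsx-manager.corp.local/api/v1/pools/ip-pools \
      -d '{"display_name": "ocp-snat-pool",
           "subnets": [{"cidr": "10.40.14.0/24",
                        "allocation_ranges": [{"start": "10.40.14.10", "end": "10.40.14.250"}]}]}'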

Installation

Below is an example of the Ansible hosts file, focused on the NSX-T integration part. For a full production deployment we recommend referring to the OpenShift documentation. We also assume that you will run the OpenShift Ansible installation from the first master node.
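The complete hosts file is not reproduced here; the fragment below is a minimal sketch of the NSX-related variables, using the variable names from the NCP openshift-ansible integration (check the NCP documentation for your exact version). All names, addresses, and URLs are example values, and the [masters], [etcd], and [nodes] groups are omitted.

    # /etc/ansible/hosts (fragment)
    [OSEv3:vars]
    openshift_deployment_type=origin
    openshift_use_openshift_sdn=false
    os_sdn_network_plugin_name=cni
    openshift_use_nsx=true
    openshift_node_sdn_mtu=1500

    # NSX-T / NCP specific variables - names must match the pre-created NSX objects
    nsx_openshift_cluster_name='ocp-cluster1'
    nsx_api_managers='nsx-manager.corp.local'
    nsx_api_user='admin'
    nsx_api_password='VMware1!'
    nsx_tier0_router='T0-Openshift'
    nsx_overlay_transport_zone='TZ-Overlay'
    nsx_container_ip_block='ocp-pod-block'
    nsx_no_snat_ip_block='ocp-nonat-pod-block'
    nsx_external_ip_pool='ocp-snat-pool'
    nsx_top_fw_section_marker='openshift-top'
    nsx_bottom_fw_section_marker='openshift-bottom'
    nsx_ovs_uplink_port='ens224'
    nsx_cni_url='http://1.1.1.1/nsx-cni.rpm'
    nsx_ovs_url='http://1.1.1.1/openvswitch.rpm'
    nsx_kmod_ovs_url='http://1.1.1.1/kmod-openvswitch.rpm'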

Run on all node VMs:
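The exact commands are shown in the video rather than in text form; as an assumption of what this step covers, a typical OKD 3.11 host preparation on CentOS 7 looks roughly like this:

    # Assumed host preparation (not necessarily the author's exact commands):
    # enable the OKD 3.11 and EPEL repositories, install base utilities and Docker.
    yum install -y centos-release-openshift-origin311 epel-release
    yum install -y wget git net-tools bind-utils yum-utils iptables-services \
                   bridge-utils bash-completion kexec-tools sos psacct docker
    systemctl enable --now docker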

Run on the master node:
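Again, the exact commands are in the video; a common way to get the playbooks onto the first master in OKD 3.11 and run the prerequisites phase is roughly the following (package name and paths assume the CentOS OKD packaging):

    # Install Ansible and the openshift-ansible playbooks on the first master.
    yum install -y openshift-ansible

    # Run the prerequisites playbook against the inventory created above.
    ansible-playbook -i /etc/ansible/hosts \
        /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml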

Once the above playbook finishes, do the following on all nodes:
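The commands for this step are also only shown in the video; one step commonly required at this point in NCP-based deployments (and an assumption on my part that this is what is meant here) is to make the NCP container image, which VMware distributes as a tarball, available on every node under the name nsx-ncp, since the NCP and nsx-node-agent pods reference that image name:

    # Load the NCP image from the VMware-provided tarball (file name is an example).
    docker load -i nsx-ncp-rhel-2.3.2.tar

    # Find the loaded image name and tag it as nsx-ncp.
    docker images | grep ncp
    docker tag <loaded-image>:<tag> nsx-ncp:latest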

The last step is to deploy the OpenShift cluster:
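With the same openshift-ansible installation as above, this is one more playbook run:

    # Deploy the OpenShift cluster - this is the long-running step.
    ansible-playbook -i /etc/ansible/hosts \
        /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml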

This step will take around 40 minutes, depending on the options, the number of hosts, and other factors.

Once it is done, you can validate that the NCP and nsx-node-agent pods are running:
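For example (NCP deployments typically use the nsx-system namespace; adjust if yours differs):

    # NCP runs as a single pod; nsx-node-agent runs as a DaemonSet on every node.
    oc get pods -n nsx-system -o wide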

Check the NSX-T Routing section, where you should see the per-project T1 routers that NCP created automatically.

Check the NSX-T Switching section, where you should see the Logical Switches that NCP created for the OpenShift projects.

You can follow the entire installation process in the following 45-minute video:

DNS records

If you didn't disable any of the default infrastructure services, you will have the following default OpenShift routes.
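You can list them with:

    # Show the routes created by the default infrastructure components.
    oc get routes --all-namespaces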

You need to add DNS A records for those routes pointing to the IP addresses of your infra nodes (in my example 10.0.0.21 and 10.0.0.22). You also need a wildcard DNS record for your domain pointing to the virtual IP of the NSX-T load balancer virtual server; in my example it is *.demo.corp.local.
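As a rough illustration only, assuming a BIND-style zone for demo.corp.local, hypothetical infra route names, and 10.0.0.100 as the load balancer virtual IP, the records could look like this:

    ; Fragment of the demo.corp.local zone (route names and VIP are examples)
    grafana-openshift-monitoring    IN A 10.0.0.21
    grafana-openshift-monitoring    IN A 10.0.0.22
    console                         IN A 10.0.0.21
    console                         IN A 10.0.0.22

    ; Wildcard for application routes published on the NSX-T load balancer VIP
    *                               IN A 10.0.0.100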

 

Deploy a test Application

 

The video below shows the deployment of a test application and gives an overview of how NSX-T provides networking, security, and visibility in an OpenShift environment.

Additional Resources: