I am sometimes approached with questions about the details of NSX-T integration with OpenShift. As you may know, NSX-T is packaged and integrated with Pivotal Container Service (PKS), and it also fully integrates with Pivotal Application Service (PAS, formerly known as PCF) as well as with vanilla Kubernetes, but what you may not know is how NSX-T integrates with Red Hat's OpenShift. This post aims to shed some light on that integration. In the examples below I am using OpenShift Origin (aka OKD), but for a supported solution you need to go with OpenShift Container Platform. The same NSX-T instance can be used to provide networking, security, and visibility to multiple OpenShift clusters.
Example Topology
In this topology we have a T0 router that connects the physical with the virtual world. We also have a T1 router acting as the default gateway for the OpenShift VMs. Those VMs have two vNICs each: one vNIC is connected to a Management Logical Switch for accessing the VMs, and the second vNIC is connected to a disconnected Logical Switch and is used by nsx-node-agent to uplink the POD networking. The LoadBalancer used for configuring OpenShift Routes, plus all of the per-project T1 routers and Logical Switches, are created automatically later when we install OpenShift. In this topology we use the default OpenShift HAProxy Router for all infra components like Grafana, Prometheus, the Console, the Service Catalog and others. This means that the DNS records for the infra components need to point to the infra nodes' IP addresses, since HAProxy uses the host network namespace. This works well for infra routes, but in order to avoid exposing the infra nodes' management IPs to the outside world we will be deploying application-specific routes on the NSX-T LoadBalancer. The topology here assumes 3 x OpenShift master VMs and 4 x OpenShift worker VMs (two for infra and two for compute). If you are interested in a POC type of setup with one master and two compute nodes, you can refer to the YouTube video below.
Prerequisites
ESXi hosts requirements
Although NSX-T supports different kinds of hypervisors, I will focus on vSphere here. The ESXi servers that host the OpenShift node VMs must be prepared as NSX-T Transport Nodes.
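A quick way to sanity-check this, assuming shell access to the ESXi host, is to confirm that the NSX VIBs are present once the host has been prepared as a Transport Node:
# On each ESXi host (assumes SSH/shell access); the NSX VIBs should appear in the list
esxcli software vib list | grep -i nsx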
Virtual Machine requirements
Openshift node VMs must have two vNICs:
- Management vNIC connected to the Logical Switch that is uplinked to the management T1 router.
- The second vNIC on all VMs needs to have two tags in NSX so that the nsx-container-plugin (NCP) knows which port to use as the parent VIF for all PODs running on that particular OpenShift node.
The tags need to be as follows:
{'ncp/node_name': 'node_name'}
{'ncp/cluster': 'cluster_name'}
* Note that the order in the UI is the reverse of the order in the API.
The node_name must be exactly the name kubelet sees, and the cluster name must be the same as the one specified as nsx_openshift_cluster_name in the ansible hosts file shown below.
We need to make sure that the proper tags are applied on the second vNIC on every node.
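If you prefer to apply the tags via the NSX-T Manager API instead of the UI, a rough sketch could look like the following. The manager address and credentials are the ones from this example, the port UUID is a placeholder, and the object sent back with PUT has to be the full body returned by GET (including its current _revision) with the tags added.
# Hypothetical sketch: PORT_UUID is a placeholder for the logical port backing the node's second vNIC
MGR=192.168.110.202
PORT_UUID="<logical-port-uuid>"   # placeholder
# 1. Fetch the current logical port definition (includes its _revision)
curl -sk -u admin:'VMware1!VMware1!' https://$MGR/api/v1/logical-ports/$PORT_UUID > lport.json
# 2. Edit lport.json and add the two NCP tags, e.g.:
#    "tags": [ {"scope": "ncp/node_name", "tag": "node1.corp.local"},
#              {"scope": "ncp/cluster",   "tag": "cl1"} ]
# 3. Send the modified object back
curl -sk -u admin:'VMware1!VMware1!' -X PUT -H 'Content-Type: application/json' -d @lport.json https://$MGR/api/v1/logical-ports/$PORT_UUID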
NSX-T requirements
The following objects need to be pre-created in NSX so that they can later be referenced in the ansible hosts file:
- T0 Router
- Overlay Transport Zone
- IP Block for POD networking
- IP Block for routed (NoNAT) POD networking – optional
- IP Pool for SNAT – by default the subnet assigned to each Project from the POD networking IP Block above is routable only inside NSX. NCP uses this IP Pool to provide connectivity to the outside (a sketch of creating the IP Block and this IP Pool via the API follows this list).
- Top and Bottom FW sections (optional) in dFW. NCP will be placing k8s Network Policy rules between those two sections.
- The Open vSwitch and CNI plugin RPMs need to be hosted on an HTTP server reachable from the OpenShift node VMs (http://1.1.1.1 in this example).
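These objects are typically created in the NSX UI, but as an illustration the POD networking IP Block and the SNAT IP Pool could also be created over the Manager API roughly as follows. The CIDRs and ranges below are made-up examples; only the display names match the ansible hosts file further down.
# Hypothetical sketch; CIDRs/ranges are placeholders, display names match the hosts file below
MGR=192.168.110.202
# IP Block for POD networking
curl -sk -u admin:'VMware1!VMware1!' -X POST -H 'Content-Type: application/json' -d '{"display_name": "pod-networking", "cidr": "10.4.0.0/16"}' https://$MGR/api/v1/pools/ip-blocks
# IP Pool used by NCP for SNAT
curl -sk -u admin:'VMware1!VMware1!' -X POST -H 'Content-Type: application/json' -d '{"display_name": "external-pool", "subnets": [{"cidr": "10.5.0.0/24", "allocation_ranges": [{"start": "10.5.0.10", "end": "10.5.0.200"}]}]}' https://$MGR/api/v1/pools/ip-pools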
Installation
Below is an example ansible hosts file, focused on the NSX-T integration part. For a full production deployment we recommend referring to the OpenShift documentation. We also assume you will run the OpenShift ansible installation from the first master node.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'yasen' : '$apr1$dNJrJ/ZX$VvO7eGjJcYbufQkY6nc4x/'}
openshift_master_default_subdomain=demo.corp.local
openshift_use_nsx=true
os_sdn_network_plugin_name=cni
openshift_use_openshift_sdn=false
openshift_node_sdn_mtu=1500
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master1.corp.local
openshift_master_cluster_public_hostname=master1.corp.local

# NSX specific configuration
#nsx_use_loadbalancer=false
nsx_openshift_cluster_name='cl1'
nsx_api_managers='192.168.110.202'
nsx_api_user='admin'
nsx_api_password='VMware1!VMware1!'
nsx_tier0_router='DefaultT0'
nsx_overlay_transport_zone='overlay-tz'
nsx_container_ip_block='pod-networking'
nsx_no_snat_ip_block='nonat-pod-networking'
nsx_external_ip_pool='external-pool'
nsx_top_fw_section='containers-top'
nsx_bottom_fw_section='containers-bottom'
nsx_ovs_uplink_port='ens224'
nsx_cni_url='http://1.1.1.1/nsx-cni-2.4.0.x86_64.rpm'
nsx_ovs_url='http://1.1.1.1/openvswitch-2.10.2.rhel76-1.x86_64.rpm'
nsx_kmod_ovs_url='http://1.1.1.1/kmod-openvswitch-2.10.2.rhel76-1.el7.x86_64.rpm'

[masters]
master1.corp.local
master2.corp.local
master3.corp.local

[etcd]
master1.corp.local
master2.corp.local
master3.corp.local

[nodes]
master1.corp.local ansible_ssh_host=10.0.0.11 openshift_node_group_name='node-config-master'
master2.corp.local ansible_ssh_host=10.0.0.12 openshift_node_group_name='node-config-master'
master3.corp.local ansible_ssh_host=10.0.0.13 openshift_node_group_name='node-config-master'
node1.corp.local ansible_ssh_host=10.0.0.21 openshift_node_group_name='node-config-infra'
node2.corp.local ansible_ssh_host=10.0.0.22 openshift_node_group_name='node-config-infra'
node3.corp.local ansible_ssh_host=10.0.0.23 openshift_node_group_name='node-config-compute'
node4.corp.local ansible_ssh_host=10.0.0.24 openshift_node_group_name='node-config-compute'
Run on all node VMs:
yum -y install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
Run on the master node:
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub master1
ssh-copy-id -i ~/.ssh/id_rsa.pub master2
ssh-copy-id -i ~/.ssh/id_rsa.pub master3
ssh-copy-id -i ~/.ssh/id_rsa.pub node1
ssh-copy-id -i ~/.ssh/id_rsa.pub node2
ssh-copy-id -i ~/.ssh/id_rsa.pub node3
ssh-copy-id -i ~/.ssh/id_rsa.pub node4
yum -y --enablerepo=epel install ansible pyOpenSSL
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible/
git checkout release-3.11
cd
ansible-playbook -i hosts openshift-ansible/playbooks/prerequisites.yml
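Optionally, before running the playbooks you can verify that the inventory parses cleanly and that all nodes are reachable; a minimal check, assuming the hosts file sits in the current directory:
# Optional sanity check (assumes the ansible hosts file is in the current directory)
ansible-inventory -i hosts --list > /dev/null && echo "inventory OK"
ansible -i hosts nodes -m ping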
Once the above playbook finishes, do the following on all nodes:
# Assuming NCP Container image is downloaded locally on all nodes
docker load -i nsx-ncp-rhel-xxx.tar
# Get the image name
docker images
docker image tag registry.local/xxxxx/nsx-ncp-rhel nsx-ncp
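To confirm on each node that the re-tag worked and the image is now available under the nsx-ncp name:
docker images nsx-ncp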
The last step is to deploy the OpenShift cluster:
ansible-playbook -i hosts openshift-ansible/playbooks/deploy_cluster.yml
This step will take around 40 minutes, depending on the selected options, the number of hosts, and so on.
Once it is done you can validate that the NCP and nsx-node-agent PODs are running:
oc get pods -o wide -n nsx-system
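If you want to look a bit deeper than the POD status, a quick sketch of tailing the NCP log is below; the pod naming is an assumption, so adjust the grep to whatever oc get pods shows.
# Tail the NCP log (pod name assumed to contain "nsx-ncp")
NCP_POD=$(oc get pods -n nsx-system -o name | grep nsx-ncp | head -1)
oc -n nsx-system logs "$NCP_POD" --tail=20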
Check the NSX-T Routing section:
Check the NSX-T Switching section:
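If you prefer the API over the UI, the routers and switches NCP created for the cluster can also be listed from the command line; a rough sketch (piping through python just to pretty-print the JSON):
# Hypothetical API alternative to the UI checks
MGR=192.168.110.202
# T1 routers created per OpenShift project
curl -sk -u admin:'VMware1!VMware1!' https://$MGR/api/v1/logical-routers | python -m json.tool | grep display_name
# Logical Switches created per OpenShift project
curl -sk -u admin:'VMware1!VMware1!' https://$MGR/api/v1/logical-switches | python -m json.tool | grep display_name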
You can follow the entire installation process in the following 45-minute video:
DNS records
If you didn't disable any of the default infrastructure services, you will have the following default OpenShift routes:
docker-registry-default.demo.corp.local
registry-console-default.demo.corp.local
grafana-openshift-monitoring.demo.corp.local
prometheus-k8s-openshift-monitoring.demo.corp.local
alertmanager-main-openshift-monitoring.demo.corp.local
console.demo.corp.local
apiserver-kube-service-catalog.demo.corp.local
asb-1338-openshift-ansible-service-broker.demo.corp.local
You need to add DNS A records for those routes pointing to the IP addresses of your infra nodes (in my example 10.0.0.21 and 10.0.0.22). You also need a wildcard DNS record for your domain pointing to the NSX-T Load Balancer VS (virtual server IP). In my example it is *.demo.corp.local.
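Once the records are in place, a quick resolution check can look like the sketch below; the hostnames and IPs are from this example, and any non-infra name should hit the wildcard record and resolve to the NSX-T LB virtual server IP.
# Quick DNS check (names/IPs from this example); dig comes from bind-utils installed earlier
for r in console grafana-openshift-monitoring prometheus-k8s-openshift-monitoring; do
  dig +short ${r}.demo.corp.local
done
# A random application hostname should resolve to the NSX-T LB VIP via the wildcard record
dig +short myapp.demo.corp.local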
Deploy a test Application
The video below shows deploying a test application and gives an overview of how NSX-T provides networking, security, and visibility in an OpenShift environment.
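For reference, a minimal test deployment could look like the sketch below; the image and hostname are just examples (the video uses its own application), and since the hostname falls under the wildcard record it is reached via the NSX-T Load Balancer in this topology.
# Minimal example; image and hostname are placeholders
oc new-project demo-app
oc new-app --name=web --docker-image=openshift/hello-openshift
oc expose svc/web --hostname=web.demo.corp.local
oc get routes -n demo-app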
Additional Resources:
- NCP release Notes: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3.2/rn/VMware-NSX-T-Data-Center-232-Release-Notes.html
- Administering NCP: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.ncp_kubernetes.doc/GUID-7D35C9FD-813B-43C0-ADA8-C5C82596E1C9.html
- VMware NSX-T Documentation: https://docs.vmware.com/en/VMware-NSX-T/index.html
- All VMware Documentation: https://docs.vmware.com
- VMware NSX YouTube Channel: https://www.youtube.com/VMwareNSX
- PKS: https://blogs.vmware.com/cloudnative/2019/01/16/vmware-pks-1-3/