
OpenShift on the VMware SDDC – Architectural Overview

In this blog, we explore in depth how VMware and Red Hat are collaborating to better integrate OpenShift Container Platform and VMware’s software-defined data center (SDDC) infrastructure stack, per the recent announcement. We have many mutual customers looking to leverage the combination of their technology investments to the fullest. And with VMware and Red Hat both embracing Kubernetes as the core platform supporting their modern applications, it is only logical that together we focus on enabling success for customers deploying OpenShift on the VMware SDDC.

The first step on this journey is communication and sharing what we already have in common.

VMware vSphere and Red Hat Enterprise Linux already work well together; however, the architectural alignment needed to deliver a better storage and software-defined networking experience is often overlooked by IT teams and OpenShift administrators. To address this, this post outlines the jointly created updates to the Red Hat OpenShift Container Platform 3.11 core documentation, with the latest guidance for SDN and storage integrations, alongside dedicated VMware documents covering SDN (NSX-T/NCP) and Kubernetes storage.

Let’s dig deeper into these two areas.

Storage

To support the persistent storage requirements of containers, VMware developed the vSphere Cloud Provider and its corresponding volume plugin. Through them, storage can be delivered to the OpenShift platform backed either by VMware vSAN or by any supported vSphere datastore. While the capabilities of each storage backend vary, the power of the integration remains the same.

These storage offerings are exposed as VMFS, NFS, or vSAN datastores. Enterprise-grade features like Storage Policy Based Management (SPBM) provide automated provisioning and management, enabling customers to guarantee the QoS requested by their business-critical applications and to enforce SLAs.

SPBM provides a single unified control plane across a broad range of data services and storage solutions, enabling vSphere administrators to overcome upfront storage provisioning challenges such as capacity planning, differentiated service levels, and managing capacity headroom.

Kubernetes StorageClasses allow PersistentVolumes to be created on demand, without having to provision storage and mount it into OpenShift nodes up front. A StorageClass specifies a provisioner and parameters that define the intended policy for a PersistentVolume, which is then dynamically provisioned.

Using a combination of SPBM and vSphere datastores as an abstraction, we hide intricate storage details and provide a uniform interface for storing persistent data (PVs) from OpenShift environments.
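
As an illustration, a cluster-default StorageClass for the in-tree vSphere provisioner might look like the minimal sketch below. The policy name, datastore name, and class name are placeholders for whatever SPBM policy and datastore exist in your environment:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: vsphere-standard
      annotations:
        # Makes this the cluster-wide default StorageClass
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      diskformat: thin               # thin, zeroedthick, or eagerzeroedthick
      storagePolicyName: gold        # placeholder: an SPBM policy defined in vCenter
      datastore: vsanDatastore       # placeholder: optional target datastore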

Depending on the backend storage used, the datastores can be either vSAN, VMFS or NFS:

  • vSAN powers hyperconverged infrastructure solutions, providing excellent performance as well as reliability. The vSAN advantage is simplified storage management, with features like storage policies driven at the vSphere IaaS layer.
  • VMFS (Virtual Machine File System) is a clustered file system that allows multiple vSphere hosts to access the same storage concurrently, letting virtualization scale beyond a single node. VMFS increases resource utilization by providing shared access to a pool of storage.
  • NFS (Network File System) is a distributed file protocol that lets hosts access storage over the network as if it were local.

Static & Dynamic provisioning

The vSphere Cloud Provider offers two ways to deliver storage to OCP: static provisioning and dynamic provisioning. The preferred method is dynamic provisioning, letting the IaaS platform handle all the complexity. Unlike static provisioning, dynamic provisioning automatically triggers the creation of the PV and its backing VMDK file. It is a safer, less error-prone way of working and crucial to delivering a reliable OpenShift platform on vSphere. Both workflows are outlined below, each followed by an example sketch.

Dynamic Provisioning

  • Define a default StorageClass for the OpenShift cluster.
  • Create a PersistentVolumeClaim in Kubernetes.
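
With a default StorageClass in place, a simple PersistentVolumeClaim is all an application team needs to write; the vSphere Cloud Provider creates the backing VMDK and the PV automatically when the claim is submitted. A minimal sketch, with illustrative names and sizes:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      # No storageClassName: the cluster default StorageClass is used

A pod then consumes the claim through a normal volume entry that references persistentVolumeClaim.claimName: app-data.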

Static Provisioning

  • Create a virtual disk (VMDK) on vSphere storage and attach it to an OCP node.
  • Create a persistent volume (PV) for that disk inside OpenShift.
  • Create a persistent volume claim (PVC) for the PV.
  • Allow the pod to claim the PVC.
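
A minimal sketch of the static workflow, assuming a VMDK has already been created on a datastore; the volume path and object names below are placeholders:

    # PV pointing at a pre-created VMDK on a vSphere datastore
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: static-pv-0001
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      vsphereVolume:
        volumePath: "[datastore1] kubevols/static-disk.vmdk"   # placeholder VMDK path
        fsType: ext4
    ---
    # PVC bound explicitly to that PV
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: static-pvc-0001
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ""        # opt out of dynamic provisioning for this claim
      volumeName: static-pv-0001  # bind to the pre-created PV

The pod then references static-pvc-0001 in its volumes section, exactly as with a dynamically provisioned claim.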

Using the vSphere Cloud Provider alongside SPBM gives both vSphere and OpenShift administrators clear visibility into storage and the ability to take advantage of backend storage capabilities without adding complexity to the OpenShift layer.

Network (SDN)

NSX-T Data Center has helped OpenShift customers simplify their networking and network-based security for several years with the NSX Container Plug-in (NCP). NCP provides the interface between OpenShift and the VMware NSX Manager at the IaaS level.

NCP runs on each OpenShift node and connects the networking interface of a container to the NSX overlay network. It monitors container life cycle events and manages networking resources such as load balancers, logical ports, switches, routers, and security groups for the containers by calling the NSX API. This includes programming of the guest vSwitch to tag and forward container traffic between the container interfaces and the virtual network interface card (vNIC).

NCP provides the following functionality:

  • Automatically creates an NSX-T logical topology for an OpenShift cluster, and creates a separate logical network for each OpenShift namespace.
  • Connects OpenShift pods to the logical network, and allocates IP and MAC addresses.
  • Supports network address translation (NAT) and allocates a separate SNAT IP for each OpenShift namespace.
  • Implements OpenShift network policies with NSX-T distributed firewall.
    • Support for ingress and egress network policies.
    • Support for IPBlock selector in network policies.
    • Support for matchLabels and matchExpressions when specifying label selectors for network policies.
  • Implements the OpenShift Router with the NSX-T layer 7 load balancer (an example route follows this list).
    • Support for HTTP route and HTTPS route with TLS edge termination.
    • Support for routes with alternate backends and wildcard subdomains.
  • Creates tags on the NSX-T logical switch port for the namespace, pod name, and labels of a pod, and allows the administrator to define NSX-T security groups and policies based on the tags.
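
For the router functionality above, NCP programs the NSX-T layer 7 load balancer from a standard OpenShift route definition. A sketch with edge TLS termination and an alternate backend, where the hostname and service names are placeholders:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: web
    spec:
      host: web.apps.example.com        # placeholder hostname
      to:
        kind: Service
        name: web                       # primary backend service
        weight: 100
      alternateBackends:
        - kind: Service
          name: web-canary              # alternate backend service
          weight: 10
      tls:
        termination: edge               # TLS terminated at the load balancer
        # certificate and key may be supplied here; otherwise the router default is used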

Micro-Segmentation

NSX-T (via NCP) can apply micro-segmentation to OpenShift pods with predefined tag-based rules and Kubernetes network policy per namespace. Predefined tag rules allow you to define firewall policies in advance of deployment based on business logic, rather than using less efficient methods such as static IP addresses to craft security policy. With this method, security groups are defined in NSX-T with ingress and egress policies, and micro-segmentation protects sensitive applications and data down to the pod and container level.
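
For example, a standard Kubernetes NetworkPolicy such as the sketch below (namespace, labels, CIDR, and port are illustrative) is translated by NCP into NSX-T distributed firewall rules scoped to the matching pods:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-db
      namespace: shop                   # placeholder namespace
    spec:
      podSelector:
        matchLabels:
          app: db                       # policy applies to database pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend         # only frontend pods may connect
            - ipBlock:
                cidr: 10.0.0.0/24       # placeholder CIDR, using the IPBlock selector
          ports:
            - protocol: TCP
              port: 5432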

Finally, NSX-T provides OpenShift clusters with full network traceability and visibility. NSX-T has built-in operational tools for Kubernetes, including:

  • Port Connection
  • Traceflow
  • Port Mirroring
  • IPFIX

Giving DevOps and dedicated network teams better visibility into OpenShift container networks enables network admins and OpenShift admins to speak the same language when diagnosing and troubleshooting issues.

In Summary

VMware SDDC delivers resilient, scalable infrastructure that tightly integrates with VMware’s Kubernetes solutions as well as those of key partners such as Red Hat. Looking forward, both VMware and Red Hat are committed to supporting our mutual customers and the Kubernetes community, with the common goal of providing better product integration through a reference architecture that enables improved tooling to deliver and manage cloud-native applications on VMware’s SDDC and Red Hat’s OpenShift Container Platform. Today’s updates are only the start of more exciting things to come.

About the Author

Robbie Jerrom is the EMEA Technical Lead for Modern Apps and Cloud Native Platforms at VMware. He works alongside some of VMware’s largest customers in Europe as they focus on bringing modern and cloud-native applications and platforms to their VMware Software-Defined Datacenter. Robbie is also a member of VMware’s CTO Ambassador community, ensuring tight collaboration between VMware’s engineering organizations and real-world customers. Follow Robbie on Twitter at @robbiej