
By Prasad Kalpurekkal, Art Fewell, and Alka Gupta

 

VMware NSX-T Data Center helps simplify networking and security for Kubernetes by automating the implementation of network policies, network object creation, network isolation, and micro-segmentation. NSX-T also provides flexible network topology choices and end-to-end network visibility.

A network policy is defined by Kubernetes as a specification of how groups of pods are allowed to communicate with each other and other network endpoints. These policies are only an intended definition; they are not an implementation of the policies. A network solution like NSX-T is still needed to realize the intended state of these policies.

One significant benefit of using NSX-T Data Center with VMware Pivotal Container Service (PKS) and Kubernetes is automation: the dynamic provisioning and association of network objects for unified VM and pod networking. The automation includes the following:

  1. On-demand provisioning of routers and logical switches for each Kubernetes cluster
  2. Allocation of a unique IP address segment per logical switch
  3. Automatic creation of SNAT rules for external connectivity
  4. Dynamic assignment of IP addresses from an IPAM IP block for each pod
  5. On-demand creation of load balancers and associated virtual servers for HTTPS and HTTP
  6. Automatic creation of routers and logical switches per Kubernetes namespace, which can isolate environments for production, development, and test
  7. Support for Kubernetes 1.10 with Multi-AZ support for high availability (HA)
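As a concrete illustration of item 5, NCP watches for Kubernetes Ingress resources and realizes them as NSX-T load balancer virtual servers. The sketch below is a minimal, hypothetical example (the hostname, Ingress name, and backing Service are assumptions, not taken from the original post); it uses the extensions/v1beta1 Ingress API that was current in the Kubernetes 1.10 era referenced above.

```yaml
# Hypothetical Ingress; creating it prompts NCP to program an
# HTTP virtual server on the NSX-T load balancer for the cluster.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress            # hypothetical name
spec:
  rules:
  - host: web.example.com      # hypothetical hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: web-svc # assumes a Service named web-svc exists
          servicePort: 80
```

No NSX Manager interaction is needed from the developer; the object creation happens through NCP once the manifest is applied.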

All of this happens on demand, without manual intervention. But how?

It’s simple. PKS comes with a UI called Ops Manager that allows a platform reliability engineer (PRE) to configure the networking constructs via a simple web form, as shown in the figure below. The engineer enters configuration variables, such as IP address ranges and hostnames, into the web form (or supplies them via a script or the API). PKS then uses these variables to automate the configuration, deployment, and lifecycle of NSX-T objects when it deploys Kubernetes clusters.

Automated Management of Container Networking

NSX-T Container Plug-in (NCP) plays a prominent role in the automation process. NCP monitors the changes to containers and other resources and manages container networking requirements through API calls to NSX Manager and the Kubernetes control plane.

Here’s an image showing the workflow of the complete process:

 

Imposing Network Security

Another important problem addressed by using NSX-T Data Center with PKS and Kubernetes is network security, specifically:

  • Network isolation at pod, node, and cluster levels, with separate networks for nodes and pods as well as separate segments for each cluster. Here is a diagram that illustrates cluster-level isolation:
  • Micro-segmentation, which applies network and security policy at a granular level using a zero-trust model (“trust nothing and verify everything”). This model shields workloads from both internal and external attacks. Here’s a diagram that illustrates micro-segmentation at the levels of pods, nodes, and namespaces:
  • An infrastructure-as-code methodology of deploying security policy, which decouples policy from configuration and enforcement; enables full CI/CD pipeline automation; and enhances application security beyond legacy capabilities.

 

Applying Micro-Segmentation to Pods

With NSX-T, you can apply micro-segmentation to Kubernetes pods with predefined label-based rules and Kubernetes network policy.

Predefined label-based rules allow DevOps and security teams to define firewall policies in advance of deployment, based on business logic rather than using legacy and highly inefficient methods like static IP addresses to craft security policy. With this method, security groups are defined in NSX-T with ingress and egress policy and micro-segmented to protect sensitive applications and data down to the pod and container level. A developer or administrator can then include the corresponding label in the Kubernetes deployment manifest, which ensures the container will be fully protected by the associated security policy at runtime.
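The paragraph above can be sketched as a manifest. In this hypothetical example, the security team has predefined an NSX-T security group that matches pods carrying an assumed label (security-tier: pci here; the label key, values, and image are illustrative assumptions, not names from the original post):

```yaml
# Hypothetical Deployment; the security-tier label is assumed to be
# matched by a predefined NSX-T security group with ingress/egress policy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
        security-tier: pci   # label consumed by the predefined NSX-T policy
    spec:
      containers:
      - name: api
        image: registry.example.com/payments-api:1.0  # hypothetical image
```

Because the policy is keyed to the label rather than to IP addresses, every replica is protected at runtime no matter where it is scheduled or rescheduled.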

Kubernetes network policy is a namespace-scoped resource through which firewall rules can be defined to control traffic into and out of a namespace and between pods (traffic that no rule explicitly allows is dropped). Once a network policy is applied using the command kubectl create -f nsx-policy.yaml, NSX-T dynamically creates the security groups and policies defined in the YAML file. Here is a sample YAML file:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nsx-deny-egress-policy
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  # allow DNS resolution
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
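The same mechanism works for ingress. As a hedged companion sketch (the frontend label and port are illustrative assumptions), the policy below would let NSX-T admit traffic to the app: web pods only from pods labeled app: frontend:

```yaml
# Hypothetical ingress counterpart to the egress sample above.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nsx-allow-frontend-ingress
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # assumed label on the permitted client pods
    ports:
    - port: 8080          # assumed application port
      protocol: TCP
```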

Here are best practices for configuring network policies:

  • Refer to the release notes, compatibility guides, and recommended configuration maximums.
  • Exclude management components such as NSX Manager, VMware vCenter, and security tools from the distributed firewall policy to avoid lockout.
  • Choose the policy methodology and rule model to enable optimum groupings and policies for micro-segmentation. These policies may include application, infrastructure, and network policies laid out in the data center.
  • Use the NSX-T tagging and grouping construct to group an application or environment within its natural boundaries. This approach simplifies policy management.
  • Keep the network policy model flexible and simple for day-two operations so that future changes to the micro-segmentation of the clusters can be accommodated.
  • Leverage separate distributed firewall sections to group and manage policies based on the chosen rule model; for example, emergency, infrastructure, environment, application, or tenant sections.
  • Use an allow-list (white list) model: create explicit rules for allowed traffic and change the default distributed firewall rule from allow to block.
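The allow-list model in the last practice is typically bootstrapped with a default-deny policy. The sketch below is a common Kubernetes pattern (the policy name is an assumption): an empty podSelector selects every pod in the namespace, and listing both policy types with no rules denies all traffic, after which explicit allow rules like the DNS egress sample above are layered on.

```yaml
# Hypothetical default-deny baseline for a namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress/egress rules listed, so nothing is allowed by default
```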

The third important problem addressed by NSX-T with PKS is lack of full network traceability and visibility. NSX-T has built-in operational tools for Kubernetes, including Traceflow, port mirroring, and IPFIX.

Here is a screenshot that shows the Traceflow tool in action:

Each tool addresses a different aspect of visibility and helps speed up troubleshooting.

For details on installing and configuring NSX-T with PKS, see the documentation.

 

Summary

We hope this blog has helped illustrate the power of integrating NSX-T with PKS. With its unique feature set, NSX-T greatly simplifies networking, micro-segmentation, and security for multiple Kubernetes clusters, pods, and namespaces. NSX-T automates the implementation of Kubernetes network policies and provides dynamic, on-demand provisioning of associated network objects, such as load balancers, routers, and switches. In addition, it addresses the need for comprehensive management toolsets for operating and troubleshooting Kubernetes networking.