
Introducing the Tanzu Mission Control Integration for VMware Aria Automation

With the July 2022 release of VMware Aria Automation (formerly vRealize Automation Cloud), Tanzu Kubernetes Clusters deployed through VMware Aria Automation can be automatically added to Tanzu Mission Control Cluster Groups, and therefore inherit the associated Access, Security, Quota and Custom policies.

This new functionality builds on the existing integration with Tanzu Kubernetes Grid on vSphere, which already allows Cloud Administrators to create Cluster Plans, make use of the tag-based placement engine, and create on-demand Cloud Templates to deploy Tanzu Kubernetes clusters.

Tanzu Mission Control simplifies Kubernetes management at scale by providing Cluster Groups to apply common policies to Kubernetes clusters – this could reflect the environment type, application, compliance or even access requirements of the Kubernetes clusters being managed. A Kubernetes cluster added to a Cluster Group inherits policies assigned at the group level.

With the new Tanzu Mission Control integration in VMware Aria Automation there are several ways to set the Cluster Group for a deployed Tanzu Kubernetes Cluster:

  • Integration – at the integration level you can specify a default Cluster Group for any Kubernetes Cluster that is deployed without a Cluster Group specified at the Project or Template level
  • Project – you can also specify a Project-level default Cluster Group; this overrides the Integration-level default and applies to any Kubernetes Cluster deployed within that VMware Aria Automation Project
  • Template – the Cluster Group can also be specified in the YAML of a VMware Cloud Template, which overrides both the Project and Integration defaults

In addition, the Cloud Administrator can manually assign a Kubernetes Cluster to, or remove it from, a Cluster Group in the Kubernetes Resources management console.
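
As a quick illustration, this is roughly how the Template-level option looks in a VMware Cloud Template – the Cluster Group is set with the tmcClusterGroupName property on the Cloud.Tanzu.Cluster resource (the cluster, plan and group names below are placeholders); the full Cloud Template later in this post shows it in context.

resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: my-cluster
      plan: Development
      workers: 1
      # Overrides any Project- or Integration-level default Cluster Group
      tmcClusterGroupName: example-cluster-group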

A simple example use case

In order to demonstrate the value of this new integration, it’s helpful to imagine a simple use case.

The Cloud Management team want to allow self-service deployments of Kubernetes Clusters within the constraints configured in VMware Aria Automation. Development clusters should be “best effort” and make use of Service Broker’s lease policies to ensure unused clusters are reclaimed. Production clusters, on the other hand, need to be highly available, with guaranteed resources and performant storage.

The SRE team have already configured a set of Production and Development policies that restrict certain aspects of the clusters – for example, the Developers user group can access Development clusters, but only the deployment CI/CD pipeline user can log on to the Production clusters. They have restricted Production workloads to only allow pods running as non-privileged, non-root users, while Development workloads can run as privileged and root users.

Configuring vSphere and VMware Aria Automation

I won’t go into too much detail on how to configure VMware Aria Automation to deploy Tanzu Kubernetes Clusters in this post – it’s covered elsewhere, but I have a short checklist here:

  • vSphere Cloud Account is added, and is “Available for Kubernetes deployment”
  • Supervisor Cluster(s) is added under Kubernetes Resources
  • Supervisor Namespace(s) is added under Kubernetes Resources
  • Kubernetes Zone is created, with Compute Resources assigned
  • Project is added, with the Kubernetes Zone assigned
  • Cluster Plan is created

To demo my use case I have two Supervisor Namespaces added to VMware Aria Automation and the Kubernetes Zone. Each Supervisor Namespace is tagged with an environment name so that the tag-based placement engine will assign Clusters to the correct namespace. (This could also be used to place clusters in different Supervisor Clusters, or different vCenters). The Kubernetes Zone is also added to my “Field Demo” Project to enable members of that Project to consume resources.

Kubernetes Zone with tagged Supervisor Namespaces to allow placement

I’ve also created two Cluster Plans, one for Development and one for Production. Cluster Plans provide a template for deploying Tanzu Kubernetes Clusters with a specific layout and settings. The “Development” cluster plan I’ve created specifies a single-node Kubernetes Control Plane, a “best-effort” VM class, and a non-redundant storage policy. “Production” specifies a three-node HA Kubernetes Control Plane, a “guaranteed” VM class, and a fully redundant storage policy.

Production and Development Cluster Plans
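
Under the covers, a Cluster Plan captures the same settings that the Supervisor Cluster applies to the TanzuKubernetesCluster resource it creates. As a rough sketch, the “Production” plan translates into something like the following (the Kubernetes version, VM class and storage policy names are illustrative, not the exact values from my environment):

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tmc-prod-1
spec:
  distribution:
    version: v1.21                          # illustrative Kubernetes version
  topology:
    controlPlane:
      count: 3                              # three-node HA control plane
      class: guaranteed-medium              # "guaranteed" VM class
      storageClass: redundant-storage-policy
    workers:
      count: 3                              # set from the request at deploy time
      class: guaranteed-medium
      storageClass: redundant-storage-policy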

Configuring the Tanzu Mission Control Integration

The Tanzu Mission Control Integration is activated by a Cloud Administrator under Infrastructure > Connections > Integrations. Only one integration per VMware Aria Automation Organization is permitted. The integration should be configured with:

  • Name – name the integration
  • Description – description for the integration
  • Tanzu Mission Control URL – the URL of your Tanzu Mission Control Organization; each instance has a unique URL
  • Token – a CSP API token scoped to Tanzu Mission Control
  • Default cluster group – set the integration-level default cluster group (as described above)
  • Default workspace – set the integration-level default workspace (not used yet)

Configuring a Tanzu Mission Control Integration in VMware Aria Automation

Once the URL and Token have been added, validate the configuration to populate the cluster groups and workspaces. Select the desired options and save the Integration.

Within Tanzu Mission Control, I’ve created two Cluster Groups that have a basic Access policy assigned, and a Custom policy to meet the “run as” requirements for the demo use case.

Production and Development Cluster Group
Development Security Policy allowing privileged containers and RunAsAny permissions
Production Security Policy denying privileged containers, and MustRunAsNonRoot permissions

Creating a VMware Cloud Template

Creating a VMware Cloud Template to deploy Tanzu Kubernetes Clusters on-demand through self-service is actually very simple. The example below has three inputs defined: a dropdown to select the number of Kubernetes worker nodes to deploy, an input for a DNS-compliant cluster name, and a dropdown to select which Cluster Plan to use.

The Cloud.Tanzu.Cluster object can be dragged from the left-hand palette onto the canvas, or created directly as YAML. The example below configures basic settings from the inputs, but also adds a tag constraint and performs some logic to ensure the correct Tanzu Mission Control Cluster Group name is specified.

formatVersion: 1
inputs:
  workers:
    type: number
    title: Worker Node Count
    default: 3
    enum:
      - 1
      - 2
      - 3
      - 4
      - 5
  name:
    type: string
    title: Cluster Name
    description: DNS-compliant cluster name
  clusterplan:
    type: string
    title: Cluster Plan
    enum:
      - Development
      - Production
resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: ${input.name}
      plan: ${input.clusterplan}
      workers: ${input.workers}
      description: TKGs Cluster
      constraints:
        # Match the env tag on the Supervisor Namespace so the tag-based
        # placement engine selects the correct namespace for the chosen plan
        - tag: env:${input.clusterplan}
      # Map the selected Cluster Plan to its Tanzu Mission Control Cluster Group
      tmcClusterGroupName: ${input.clusterplan == "Development" ? "autotmm-development":"autotmm-production"}

Once the VMware Cloud Template is created, a version is released so it can be consumed from the Service Broker Catalog.

Testing the Deployment

I can now run a couple of simple test deployments to validate that my on-demand Kubernetes clusters are deployed as expected, are integrated with Tanzu Mission Control, and receive the expected policies. I request a cluster called “tmc-dev-1” with a single worker node and the Development cluster plan, and a cluster called “tmc-prod-1” with three worker nodes and the Production cluster plan.

Requesting a Development Tanzu Kubernetes Cluster from Service Broker

After a few minutes to deploy, each cluster is available to view in the correct namespace within vSphere, with the correct number of control plane and worker nodes.

The clusters are deployed in their respective Supervisor Namespace
The clusters are added to their designated Cluster Group

Accessing the deployed Kubernetes Cluster

When you deploy a Tanzu Kubernetes Cluster using VMware Aria Automation, the generated admin kubeconfig file is available to download via the Deployment (for the requesting user) or through the Kubernetes Resources page (for the Cloud Administrator).

When Tanzu Mission Control is configured and the Kubernetes Cluster is added to a Cluster Group, the kubeconfig link instead opens a modal popup with instructions and links to access the cluster via Tanzu Mission Control’s kubectl plugin. The Cloud Administrator can still view the modal or download the administrative kubeconfig file via the Kubernetes Resources page.

Instructions to access the Kubernetes Cluster using the Tanzu Mission Control plugin

Testing that the Tanzu Mission Control Policy is Applied

Once you’re authenticated through Tanzu Mission Control, you can test whether the policies are enforced as expected. For example, in my Production cluster the policy prohibits running pods as root, so a simple busybox pod configured with runAsUser: 0 will trigger the policy:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsUser: 0   # run as root, which the Production policy blocks

Tanzu Mission Control policy blocking the pod executing as root
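
For comparison, a version of the same pod that satisfies the Production policy only needs to run as a non-root user – a minimal sketch (the UID is arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-nonroot
spec:
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsNonRoot: true   # explicitly opt in to the non-root requirement
      runAsUser: 1000      # any non-zero UID satisfies MustRunAsNonRoot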

Next Steps

Hopefully this simple example demonstrates the power of the new Tanzu Mission Control Integration for VMware Aria Automation, bringing together the best of VMware Aria Automation’s self-service, on-demand Kubernetes cluster deployments with Tanzu Mission Control’s powerful fleet management and policy tools.

If you want to find out more about VMware Aria Automation, please visit our website to learn more about our features, or explore VMware Aria Automation yourself with a free 45-day trial! You can find out more about Tanzu Mission Control here.