
On-demand Workload Clusters on vSphere with Tanzu using Cloud Assembly

The capability to manage vSphere 7’s Supervisor Namespaces, and to create them on-demand, through Cloud Assembly has been part of vRealize Automation Cloud since the July 2020 release (or the 8.2 on-premises release).

With the November 2021 release of vRealize Automation Cloud (8.6.1 on-premises) we’re excited to announce the capability to create on-demand Workload Clusters on vSphere 7 using Cloud Assembly’s VMware Cloud Templates.

The new functionality leverages the existing vSphere 7 Cloud Account type and adds “Cluster Plans” and a Tanzu Kubernetes Cluster object to the available Resource Types.

To begin deploying Tanzu Kubernetes Clusters, the infrastructure needs to be configured within Cloud Assembly. The good news is that steps 1-4 below are not new: if you’re already using a vSphere 7 with Tanzu endpoint, you’ve most likely already completed them and only need steps 5-6.

  1. Add a vSphere 7 Cloud Account
  2. Add a Supervisor Cluster and Supervisor Namespace
  3. Create a Kubernetes Zone and add the Supervisor Namespace(s) to the Zone
  4. Add the Kubernetes Zone to a Project
  5. Create a Cluster Plan
  6. Create a Cloud Template

Configuring the Infrastructure

Since not everyone will have completed steps 1-4, I’ll run through them here to show a basic setup.

1 – Add a vSphere 7 Cloud Account

Add a vCenter Cloud Account (Infrastructure > Connections > Cloud Accounts > Add Cloud Account) and enter the details of your vCenter Server. So long as you have configured Workload Management to enable the Tanzu Kubernetes Grid Service, the Status section will show “Available for Kubernetes deployment”.

vSphere Cloud Account with Kubernetes Workload Management enabled

2 – Add a Supervisor Cluster and Supervisor Namespace

To bring a Supervisor Cluster under the management of vRealize Automation it needs to be added under the Infrastructure > Kubernetes > Supervisor Clusters tab. If you have multiple vSphere endpoints, or multiple Supervisor Clusters in a vSphere endpoint, each one needs to be added here.

A Supervisor Cluster added to the Kubernetes infrastructure tab

Once the Supervisor Clusters are added, Supervisor Namespaces can be created (or existing Supervisor Namespaces brought under management) under Infrastructure > Kubernetes > Supervisor Namespaces.

Supervisor Namespaces under vRealize Automation management

Supervisor Namespaces will also need at least one Storage Policy configured before they can be consumed through Cloud Templates.

Storage Policies configured under Supervisor Namespaces

3 – Create a Kubernetes Zone and add the Supervisor Namespace(s) to the Zone

To enable policy and tag-based placement of Workload Clusters into Supervisor Namespaces, we need to create a Kubernetes Zone (Infrastructure > Configure > Kubernetes Zones) and assign the Supervisor Namespaces to the Zone.

Kubernetes Zone creation

Under the Provisioning tab the Supervisor Namespaces are added, assigned priority and tagged.

Assigning Supervisor Namespaces to a Kubernetes Zone, including Tags
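
These tags are what the placement engine matches against a Cloud Template’s constraint tags at request time. As a taste of what’s to come, a resource that should be placed on a Supervisor Namespace tagged env:dev would carry a constraint like this (shown in context in the full Cloud Template later):

  constraints:
    - tag: 'env:dev'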

4 – Add the Kubernetes Zone to a Project

Lastly, the Kubernetes Zone needs to be assigned to the Project(s) that will be allowed to consume the resources managed under that Zone.

Assigning a Kubernetes Zone to a Project

5 – Create a Cluster Plan

Now that the infrastructure is set up, there’s one last task to complete before we can start creating Cloud Templates and consuming them, and this is where the new concept of a Cluster Plan comes in. Cluster Plans define a standard template for deployed Workload Clusters, reducing the number of options and the amount of configuration required in a Cloud Template; things like Storage Policies and Network configuration can be defined once by the Cloud Admin. Cloud Template authors can then select a Cluster Plan in the Cloud Template and override the few inputs that are user-definable.

For the sake of this blog post I’m going to create two Cluster Plans, “Production Cluster” and “Development Cluster”. A Cluster Plan can define the Kubernetes Version; the number of Control Plane and Worker nodes, along with the Machine Class and Storage Class for each; the default Storage Class for Persistent Volume Claims; which Storage Classes are passed through from the Supervisor Namespace to Kubernetes; and the Network Settings for the Cluster Plan.

My Production Cluster Plan will use an HA control plane (three nodes), guaranteed Machine Classes (i.e. reserved resources) and a more performant Storage Class. My Development Cluster Plan will feature a single Control Plane node, best-effort Machine Classes and thin-provisioned Storage Classes.

My new Production Cluster Plan
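
Since the deployed clusters are realized by the Tanzu Kubernetes Grid Service, a useful mental model is the native TanzuKubernetesCluster spec that a plan’s settings roughly map onto. Here’s a minimal sketch for my Production plan, with the Kubernetes version, Machine Class and Storage Class names as placeholders from my lab (this is the native TKGS format, not the Cluster Plan editor’s):

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: blog-demo-prod
  namespace: field-demo-clusters
spec:
  distribution:
    version: v1.20                     # Kubernetes Version selected in the plan (placeholder)
  topology:
    controlPlane:
      count: 3                         # HA control plane
      class: guaranteed-small          # guaranteed Machine Class = reserved resources
      storageClass: performant-policy  # placeholder Storage Class name
    workers:
      count: 3
      class: guaranteed-medium
      storageClass: performant-policy
  settings:
    storage:
      defaultClass: performant-policy  # default Storage Class for Persistent Volume Claims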

It’s worth noting that the Network settings allow you to override the default CNI (Antrea), the Pod and Services networks, the Domain, Proxy settings and CA trust. The specification is the same as you’d use to create a cluster natively with the Tanzu Kubernetes Grid Service.

Overriding the default cluster network settings
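
For reference, here’s a minimal sketch of those settings in the native TanzuKubernetesCluster format, assuming a hypothetical proxy address and CA name – the CIDRs shown are the defaults:

  settings:
    network:
      cni:
        name: antrea                    # default CNI
      services:
        cidrBlocks: ['10.96.0.0/12']    # default Services network
      pods:
        cidrBlocks: ['192.168.0.0/16']  # default Pod network
      serviceDomain: cluster.local      # Domain
      proxy:
        httpProxy: http://proxy.example.com:3128   # hypothetical proxy
        httpsProxy: http://proxy.example.com:3128  # hypothetical proxy
        noProxy: ['10.96.0.0/12', '192.168.0.0/16']
      trust:
        additionalTrustedCAs:
          - name: CompanyInternalCA     # hypothetical CA name
            data: LS0tLS1CRUdJTi...     # base64-encoded PEM certificate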

Creating a Tanzu Workload Cluster Cloud Template

With everything set up, we can now start consuming our Cluster Plans, get to writing Cloud Templates, and DEPLOY A WORKLOAD CLUSTER!

And it’s actually a very simple object on the design canvas – drag and drop it, or just write the YAML spec. The previous “K8S Cluster” object related to Tanzu Kubernetes Grid Integrated (formerly PKS) and has been renamed, allowing the K8S Cluster object to be used for Tanzu Workload Clusters.

The Cloud_Tanzu_Cluster_1 object that is created actually needs only two properties defined to be functional: name and plan.

  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: 'mycluster'
      plan: 'Production Cluster'

However, to really make use of the Cluster Plans and the tag-based placement engine, and to give the requestor some interesting options, I’ve created some inputs that allow the user to enter a custom name, select the Cluster Plan from a dropdown, and customize the number of Worker nodes deployed. With those inputs, and some variable references, the Cloud Template looks like this:

formatVersion: 1
inputs:
  name:
    type: string
    title: Cluster Name
    description: DNS-compliant cluster name
  clusterplan:
    type: string
    title: Cluster Plan
    enum:
      - Development Cluster
      - Production Cluster
  workers:
    type: number
    title: Worker Node Count
    default: 1
    enum:
      - 1
      - 2
      - 3
      - 4
      - 5
resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: '${input.name}'
      plan: '${input.clusterplan}'
      workers: '${input.workers}'
      constraints:
        - tag: 'cloud:vsphere'
        - tag: 'env:${input.clusterplan == "Development Cluster" ? "dev" : "prod"}'

Note the use of constraint tags in the Cloud Template – if you check back up in step 3, I defined some tags on the Supervisor Namespaces in the Kubernetes Zone: env:prod will deploy to the field-demo-clusters Supervisor Namespace, and env:dev to the moad-dev Supervisor Namespace. So, when the user selects the Production Cluster plan the env:prod tag is applied, and env:dev for the Development Cluster plan.

To test this out, I’ve requested one Development and one Production cluster using the Service Broker Catalog, with three Worker nodes for Production and one for Development. The Control Plane in the Production Cluster Plan is three nodes, so blog-demo-prod should have six nodes in total; the Control Plane in the Development Cluster Plan is one node, so blog-demo-dev should have two nodes. The Production cluster should be deployed into the field-demo-clusters Supervisor Namespace because of the env:prod tag, and the Development cluster into the moad-dev Supervisor Namespace due to the env:dev tag.

Requesting a Production Cluster

Once the deployments have completed, we can see the desired results!

Six nodes of the Production Cluster in the field-demo-clusters Supervisor Namespace
Two nodes of the Development Cluster in the moad-dev Supervisor Namespace