The following blog post demonstrates using vRealize Automation to deploy Tanzu Kubernetes Grid (TKG) management and workload clusters.

VMware Tanzu Kubernetes Grid provides a consistent, upstream-compatible implementation of Kubernetes that is tested, signed, and supported by VMware. Tanzu Kubernetes Grid is central to many of the offerings in the VMware Tanzu portfolio.

This example touches almost every aspect of the vRealize Automation platform – Code Stream pipelines are called from Service Broker custom forms that use vRealize Orchestrator actions to populate the drop-downs. The deployed TKG clusters are on-boarded into Cloud Assembly as a deployment by vRealize Orchestrator workflows, which allows for lifecycle management and future day-2 use cases through Action Based Extensibility (ABX) and vRealize Orchestrator. The deployed TKG workload clusters are also imported as Kubernetes endpoints for Cloud Assembly and Code Stream.

It’s intended as “the art of what’s possible”, and to allow vRealize Automation customers who don’t yet have access to VMware Cloud Foundation 4 (and therefore vSphere 7 with Kubernetes), or Tanzu Kubernetes Grid Integrated (formerly Enterprise PKS), to test-drive TKG. The VMware Tanzu portfolio has several offerings designed to provide a range of solutions for customers bringing cloud-native application workloads to production – Cormac Hogan has written a great blog post on understanding the Tanzu portfolio.

All the code used in this example is available in the VMwareCMBUTMM/vra-code-examples repository.

Update: As of 22nd June 2020, the vmw-cli project used in this pipeline has stopped working due to changes on the VMware authentication side. To work around this, please manually download the components and make them available for the CI container to fetch (e.g. over HTTP using wget, or via SCP from an SSH host). The issue is tracked on the vmw-cli project here. I’ll update the code when the updated version of vmw-cli is published.
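As a minimal sketch of that workaround, assuming you’ve staged the downloads on an internal web server or SSH host (the hostnames and filenames below are placeholders):

```bash
# Fetch the TKG CLI bundle and OVAs from wherever you've staged them, in place
# of the vmw-cli download steps in the pipelines
wget http://fileserver.corp.local/tkg/tkg-linux-amd64.tar.gz
scp user@jumphost.corp.local:/stage/photon-3-kube.ova .
scp user@jumphost.corp.local:/stage/photon-3-capv-haproxy.ova .
```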

Pre-requisites

The following components need to be in place to support the deployment:

  • vRealize Automation 8.1 or vRealize Automation Cloud (Free 45-day trial)
  • vSphere 6.7u3 endpoint with Enterprise Plus licensing
    • Note: Deployment of TKG to vSphere 7.0 instances on which the vSphere with Kubernetes feature is not enabled is possible but is not supported
    • Your vSphere environment needs to meet the TKG requirements
  • MyVMware Credentials to download the TKG binaries
  • Create a new private git repository – used as the “source of truth” to store the TKG and Kubernetes config files
    • This example uses GitHub, but any git repository you can access over SSH should be fine
    • Configure an SSH key as a Deploy Key, with read/write permissions
  • A Docker host
    • Accessible via SSH from vRealize Automation
    • Added to Code Stream as a Docker endpoint – pipeline CI tasks will run on this host
    • At least 6GB RAM – the initial TKG bootstrap will need 6GB to run
    • Access to Docker Hub for my codestream-ci-tkg image (or your own comparable image)
    • kubectl installed
    • The private key used to access the Git repository should be in the ~/.ssh/ folder, prefixed with id_ to ensure it’s used by the git clone command (see the example after this list)
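For example, preparing the Docker host and deploy key might look something like this on a Linux host (the key and file names are illustrative):

```bash
# Generate a key pair to use as the GitHub Deploy Key; paste the .pub contents
# into the repository's Deploy Keys settings with write access enabled
ssh-keygen -t rsa -b 4096 -C "tkg-deploy-key" -N "" -f ~/.ssh/id_rsa_tkg
chmod 600 ~/.ssh/id_rsa_tkg   # the id_ prefix ensures git clone picks it up

# Install kubectl on the Docker host (Linux x86_64 example)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Confirm the host can authenticate to the git repository over SSH
ssh -T git@github.com
```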

vRealize Automation Configuration

Configure vRealize Orchestrator

Download and import the latest com.vmware.cmbu.tkg package from our GitHub repository. The package consists of two workflows and 23 actions to support the onboarding of the TKG clusters, and can be imported to Orchestrator under the Assets > Packages page. The workflows must be tagged with CODESTREAM to ensure they are available to the Code Stream vRO task.

The vRealize Orchestrator Encryption Plugin is used for the base64 decode functionality when working with certificates. Import the plugin by logging into the vRealize Orchestrator Control Center (https://vra-url/vco-controlcenter).

Finally, the vSphere endpoint needs to be added to Orchestrator, if it’s not already, using the Library > vCenter > Configuration > Add a vCenter Server instance workflow.

Configure Cloud Assembly

Within Cloud Assembly we need to have the vSphere Cloud Account, a Cloud Zone and a Project configured to allow the onboarding of the TKG VMs.

Configure Code Stream

Endpoints

Add the local vRealize Orchestrator instance and the Docker host to the Code Stream endpoints. This enables CI and vRO tasks in pipelines.

Variables

Variables within Code Stream allow us to store configuration that can be re-used in the pipelines, including secret information such as passwords or SSH private keys. Variables are scoped on a per-Project basis, so you can have different configurations for different projects. Variables can also be created using the API, so I’ve included an example JSON file in our repository to facilitate this, along with an example curl API call below.
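As a rough illustration of creating one variable via curl (the login flow and the /codestream/api/variables path below reflect my understanding of the vRA 8.x REST APIs rather than the repository’s JSON file, and the names and values are placeholders):

```bash
# Obtain a refresh token, then exchange it for a bearer token (vRA 8.x login flow)
REFRESH_TOKEN=$(curl -sk -X POST "https://vra.corp.local/csp/gateway/am/api/login?access_token" \
  -H "Content-Type: application/json" \
  -d '{"username":"configadmin","password":"ChangeMe!"}' | jq -r .refresh_token)
ACCESS_TOKEN=$(curl -sk -X POST "https://vra.corp.local/iaas/api/login" \
  -H "Content-Type: application/json" \
  -d "{\"refreshToken\":\"$REFRESH_TOKEN\"}" | jq -r .token)

# Create a project-scoped secret variable for the Code Stream pipelines to consume
curl -sk -X POST "https://vra.corp.local/codestream/api/variables" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "project": "TKG",
        "kind": "SECRET",
        "name": "vcenter-password",
        "value": "ChangeMe!",
        "description": "vCenter password used by the TKG pipelines"
      }'
```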

Remember to update the example values to reflect your own Docker host, Git repository, MyVMware, vCenter and vRealize Automation details.

Pipelines

Modify the two pipeline YAML files to update the project and endpoint names (search for “endpoint” in each file and review the names); if you don’t do this, you may have to reconfigure the pipelines later. Import the pipelines using Pipelines > Import. Once imported, the pipelines will be in a disabled state and will require configuring, enabling, and releasing.
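For example, to review and update the references before importing (the file names and example names below are placeholders for whatever is in the repository):

```bash
# See which project and endpoint names each pipeline file references
grep -n -E "project|endpoint" tkg-deploy-management-cluster.yaml tkg-deploy-workload-cluster.yaml

# Swap the example names for your own before importing into Code Stream
sed -i 's/Example-Project/TKG/g; s/Example-Docker-Host/docker-host/g' \
  tkg-deploy-management-cluster.yaml tkg-deploy-workload-cluster.yaml
```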

Open the Management Cluster pipeline and select the Workspace tab to check the Docker host and container image for the CI tasks. If you’re using a private Docker registry (such as Harbor) you can add it as an endpoint and select it here; no registry endpoint is required if you’re pulling the image from Docker Hub.

On the Input tab you can see the various inputs and some default values. These inputs will be presented later using a Service Broker custom form, and the “advanced” inputs (those with defaults) will be hidden behind an “advanced” option checkbox.

The Model tab shows the various pipeline stages and tasks. In this case we only have a single stage and five tasks.

  • The Export VC Password and Export MyVMware Password CI tasks simply create environment variables to be consumed by the later tasks.
  • The Get TKG Templates task uses the GOVC CLI to create VM folders as required, then checks for the two TKG OVA templates. If those templates don’t exist, it uses vmw-cli to download the two TKG virtual appliances (pre-installed Kubernetes and HAProxy load balancer) and deploys them to vCenter, again using GOVC (see the sketch after this list).
  • The Deploy TKG Management task is an SSH task that connects to the Docker host directly and bootstraps the TKG management cluster. It downloads the TKG CLI, again using vmw-cli, and creates the configuration file that allows TKG to bootstrap the management cluster. Finally, the task clones the private Git repository and pushes the TKG config and management cluster kubeconfig files to the repository.
  • The final task, Onboard TKG Management Cluster, is a vRO task that executes the “Onboard TKG Cluster” vRealize Orchestrator workflow we imported earlier, which imports the deployed TKG node VMs into a Cloud Assembly deployment and tags each VM with metadata to help with day 2 actions at a later date.
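The shell below is a condensed, illustrative sketch of roughly what those tasks run, not the exact pipeline scripts: the folder, template, and file names are placeholders, and the govc, vmw-cli and tkg invocations follow those tools’ documented usage as best I recall, so verify the flags against your versions.

```bash
# --- Get TKG Templates (CI task, runs in the codestream-ci-tkg container) ---
export GOVC_URL="https://vcenter.corp.local" GOVC_USERNAME="$VC_USERNAME" \
       GOVC_PASSWORD="$VC_PASSWORD" GOVC_INSECURE=1

# Create the VM folder if needed, then check whether the template already exists
govc folder.create /Datacenter/vm/TKG || true
if ! govc ls /Datacenter/vm/TKG | grep -q photon-3-kube; then
  # Download the OVA from MyVMware (vmw-cli reads VMWUSER/VMWPASS), then import
  # it and mark it as a template (placeholder filename)
  export VMWUSER="$MYVMWARE_USER" VMWPASS="$MYVMWARE_PASSWORD"
  vmw-cli cp photon-3-kube.ova
  govc import.ova -folder=/Datacenter/vm/TKG photon-3-kube.ova
  govc vm.markastemplate /Datacenter/vm/TKG/photon-3-kube
fi

# --- Deploy TKG Management (SSH task, runs directly on the Docker host) ---
# Write the generated TKG config, then bootstrap the management cluster
mkdir -p ~/.tkg && cp tkg-config.yaml ~/.tkg/config.yaml
tkg init --infrastructure vsphere --name tkg-mgmt --plan dev

# Push the TKG config and the management cluster kubeconfig to the git repo
git clone git@github.com:example-org/tkg-config.git
cp ~/.tkg/config.yaml ~/.kube/config tkg-config/
cd tkg-config && git add . && git commit -m "Add TKG management cluster config" && git push
```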

Once the Pipeline has been configured and saved, it can be enabled and released so that Service Broker can consume it as a catalog item.

The pipeline does not produce any output, so we can ignore the Output tab.

The TKG Workload Cluster pipeline is configured in the same way as the TKG Management Cluster pipeline: first configure the Workspace tab, then review the Input tab (which requires far fewer inputs). The Model tab shows a slightly longer pipeline, but it works in a very similar way:

  • The Create SSH Private Key task creates a private key file in the CI container, which is used to access the git repository.
  • The Install TKG CLI task downloads and installs the TKG CLI binary from MyVMware using vmw-cli.
  • Get Configs clones the git repository into the CI container and uses the TKG and kubeconfig files from the previously deployed management cluster to configure TKG.
  • The Deploy Workload task sets some environment variables based on inputs such as cluster type and VM sizing, and then creates a new workload cluster using the TKG CLI (see the sketch after this list).
  • The Onboard TKG Management Cluster vRO task executes the “Onboard TKG Cluster” vRealize Orchestrator workflow again, importing the deployed TKG node VMs into a Cloud Assembly deployment and tagging each VM with metadata to help with day 2 actions at a later date.
  • Export Kube Conf extracts the Kubernetes connection details from the kubeconfig file for use in the endpoint creation workflow.
  • Finally, the Create Kubernetes Endpoints vRO task executes the “Add TKG Cluster to vRA” workflow, which creates a Cloud Assembly Kubernetes endpoint, and a Code Stream Kubernetes endpoint.
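Again, as a rough sketch of what the workload pipeline’s tasks boil down to (the cluster name, file names and repository URL are placeholders, and the tkg flags reflect the TKG 1.x CLI as I recall it):

```bash
# Recreate the git deploy key inside the CI container from a Code Stream secret
mkdir -p ~/.ssh
echo "$GIT_PRIVATE_KEY" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa

# Install the TKG CLI downloaded via vmw-cli (placeholder bundle name)
export VMWUSER="$MYVMWARE_USER" VMWPASS="$MYVMWARE_PASSWORD"
vmw-cli cp tkg-linux-amd64.tar.gz
tar xzf tkg-linux-amd64.tar.gz && install tkg /usr/local/bin/tkg

# Pull the TKG config and management cluster kubeconfig pushed by the first pipeline
git clone git@github.com:example-org/tkg-config.git
mkdir -p ~/.tkg ~/.kube
cp tkg-config/config.yaml ~/.tkg/config.yaml
cp tkg-config/config ~/.kube/config

# Create the workload cluster; the plan and node counts come from the pipeline inputs
tkg create cluster tkg-workload-01 --plan dev \
  --controlplane-machine-count 1 --worker-machine-count 3

# Fetch the new cluster's credentials and export a standalone kubeconfig for the
# endpoint creation workflow
tkg get credentials tkg-workload-01
kubectl config use-context tkg-workload-01-admin@tkg-workload-01
kubectl config view --minify --raw > tkg-workload-01.kubeconfig
```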

Note: Ensure that both the Management and Workload pipelines are Enabled and Released from the Actions menu.

Configure Service Broker

Configure Service Broker to provide catalog items with custom forms for the pipelines, which gives a slicker user experience when requesting TKG clusters.

Add Code Stream Content Source

Under Content & Policies > Content Sources, add a new Code Stream Pipeline source and configure it to import from the same project you imported the pipelines to:

Once you save and import the source, it should show 2/2 items imported (assuming you’ve only released the two TKG pipelines).

Content Sharing

Under Content & Policies > Content Sharing, select the Project used for TKG, click ADD ITEMS and share the content source configured in the previous step.

Import Custom Forms

Under Content & Policies > Content you should now see the two TKG pipelines. Click on the three vertical dots and select “Customize form”.

The custom forms created for the pipelines use several vRealize Orchestrator actions (imported as part of the package earlier) to look up information from the vCenter endpoint and populate drop-down lists based on previous choices (e.g. only displaying networks that are available in the selected vSphere cluster). There are also validation rules on the TKG cluster name; the “advanced” inputs are hidden in an Advanced tab, accessible by selecting the “Advanced” checkbox; and drop-down options are configured to describe production and development clusters and to provide t-shirt sizing for the TKG node VMs.

To import a custom form, select “Import form” from the actions menu in the form editor, import the custom form JSON file, enable the custom form, and save. Repeat for both pipelines with the relevant custom form.

Deploying TKG from vRealize Automation

Finally, we should be ready to test the TKG deployment!

Deploy a Management Cluster

We can’t deploy a workload cluster without a management cluster, so let’s deploy one!

This results in a deployment being on-boarded into Cloud Assembly.

Now with the TKG management cluster available, we can deploy workload clusters as required:

The TKG workload cluster is deployed and imported into a Cloud Assembly deployment:

It’s also added as a Kubernetes endpoint for consumption in both Cloud Assembly and Code Stream – note the link to download the kubeconfig file from the Kubernetes endpoint.

Kubernetes endpoint in vRealize Automation Cloud Assembly


Kubernetes endpoint in vRealize Automation Code Stream


From this point we can consume the Kubernetes cluster in Cloud Assembly or Code Stream and begin deploying applications to the clusters. We can also download the kubeconfig file from the endpoint, or from the git repository, and use that to manage Kubernetes.
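For example, with the kubeconfig downloaded locally (filename illustrative):

```bash
# Point kubectl at the downloaded kubeconfig and check the cluster is reachable
export KUBECONFIG=~/Downloads/tkg-workload-01.kubeconfig
kubectl get nodes -o wide

# Deploy a sample application to prove out the cluster
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc
```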

I hope you’ve enjoyed this post showing the art of the possible with vRealize Automation!