
vSphere with Kubernetes, vRealize Automation, and Tanzu…A Perfect Match!


In March of 2020, VMware introduced an amazing capability to vSphere by adding Kubernetes directly into the hypervisor. This allows you to run modern applications on the tried-and-true vSphere platform. It just makes sense that vRealize Automation should join in on this coolness by providing a platform that gives virtual administrators the ability to manage Kubernetes the same way they have traditionally managed virtual machine workloads. In this blog we are going to walk through how you can use vRealize Automation’s Code Stream service to provide an on-demand Kubernetes cluster on top of vSphere with Kubernetes. That, in itself, is cool, but we are also going to add some governance with an approval task so we can ensure that Kubernetes sprawl is controlled.


In detail, this walkthrough will set up a vRealize Automation Cloud Code Stream pipeline that creates a Supervisor Namespace with an associated storage policy, deploys a Tanzu Kubernetes cluster in the namespace, and automatically sets the cluster up in Code Stream as an endpoint for use by other pipelines. All of this will then be added to Service Broker as a self-service catalog item with an approval policy attached. Whew… that’s a mouthful, but it’s really not that hard at all!


Demo Video:


Let’s get started!!


Things you will need in order to complete this walkthrough:

  • vSphere 7 environment (preferably VMware Cloud Foundation based) with the Kubernetes service enabled.
  • vRealize Automation Cloud subscription with vSphere 7 setup as a Cloud Account (this can also be done with vRealize Automation 8 on-premises with slight modifications).
  • Knowledge of the vRealize Automation Code Stream service.
  • A Docker host (for use with Code Stream Workspace) and access to a Docker repository (I am using DockerHub for this walkthrough).
  • The ability to copy and paste…because I am giving you the code to make this all work!!


Setting up Code Stream Variables:

The first thing we are going to do is set up a few variables that will be needed throughout the pipeline.


  • api_token = The API token you use to access vRealize Automation Cloud through the API.
  • my-vcf-password = The password for your vSphere access. This will be used when logging in to the Supervisor Namespace using the vSphere with Kubernetes CLI.
  • my-vcf-username = The username with rights in vSphere to create Supervisor Namespaces and to log in and interact with them.
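Once saved, these variables can be referenced from any pipeline task using Code Stream’s `${var.<name>}` binding syntax. The fragment below is a rough sketch of a CI task’s `steps` section (not a complete pipeline; compare against your own exported pipeline YAML for the exact task schema):

```yaml
# Sketch: CI task steps referencing the Code Stream variables above.
# ${var.<name>} is Code Stream's variable binding syntax.
steps:
  - export VRA_TOKEN=${var.api_token}
  - ./login.expect "${var.my-vcf-username}" "${var.my-vcf-password}"
```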


Setting up a Code Stream Custom Integration Script:

Currently, vRealize Automation Cloud does not provide the ability to add a Storage Policy to a Supervisor Namespace as part of the blueprint. This is planned as a configurable property in an upcoming release. To add the Storage Policy to the Supervisor Namespace, we are going to create a custom integration script that adds the policy once the namespace has been created. The script is written in Python, and the code is linked below. You can simply copy and paste it into a new Custom Integration.


The Code:

Click this link to get to the Custom Integration code snippet.
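To give you a feel for what the linked script does, here is a minimal sketch of the REST call it makes. This is not the script itself: the helper function, the example policy ID, and the storage limit are my own illustrations, and the endpoint path follows the vCenter 7 Namespaces Management API (verify against your vCenter version’s API docs). The actual PATCH would be sent to your vCenter with a valid session token.

```python
import json

# Placeholder: your vCenter FQDN (an assumption for this sketch)
VCENTER = "vcenter.example.com"

def build_storage_patch(namespace: str, policy_id: str, limit_mb: int = 10000):
    """Build the URL path and JSON body for adding a storage policy to an
    existing Supervisor Namespace via the vCenter Namespaces REST API."""
    path = f"/api/vcenter/namespaces/instances/{namespace}"
    body = {
        "storage_specs": [
            {"policy": policy_id, "limit": limit_mb}
        ]
    }
    return path, body

if __name__ == "__main__":
    # Example: attach a (hypothetical) storage policy ID to namespace "demo-ns"
    path, body = build_storage_patch("demo-ns", "aa6d5a82-1c88-45da-85d3-3d74b91a5bad")
    print(f"PATCH https://{VCENTER}{path}")
    print(json.dumps(body, indent=2))
```

The custom integration wraps a call like this with authentication and the inputs described below.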


Once you paste the code into the newly created Custom Integration, you will notice that there are several inputs required for this script to function. You can hard-code these or make them selectable options at request time. If you watch the video in this blog, you will notice I only have the requestor input two items: the Supervisor Namespace name and the Kubernetes cluster name. This means I hard-coded most of the inputs to keep things easy for the requestor. How you implement it is your choice.


Now that you have the code in place, save the Custom Integration, then create a version and release it so it can be used in the pipeline.



Setup the Docker Container used by Code Stream Workspace:

As stated in the “Things you will need” section, you will be using the Code Stream Workspace feature in the pipeline (read more on the Workspace in this Doc). Workspace uses a combination of a Docker host and a container with the tools needed during the pipeline baked in. This assumes you have the Docker host set up as an endpoint for use with Code Stream CI tasks (if you name the endpoint “Docker Host” when you create it, you won’t need to modify the pipeline on import). We also need a Docker container that has the necessary runtimes and CLIs we will use during the pipeline. The link below is the Dockerfile you can use to build the container for this pipeline. (Note: you will need to make some modifications to point to your vSphere environment’s Cluster Control Plane IP as well as the location of the TKG CLI.)


Click here to get the code to build the container.


Alternatively, you can use the publicly available container on DockerHub used for this example, linked here.
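If you would rather see the shape of such an image, here is a minimal Dockerfile sketch. This is not the linked Dockerfile: the base image and package choices are my assumptions, and `<CONTROL-PLANE-IP>` is a placeholder for your Supervisor Cluster control plane address (the `/wcp/plugin` path is where vSphere with Kubernetes serves its kubectl plugin bundle). The key point is that the pipeline’s CI tasks expect python3 and expect to be present in the image.

```dockerfile
# Sketch of a Code Stream Workspace image for this pipeline.
# Assumptions: Ubuntu base; <CONTROL-PLANE-IP> is your Supervisor
# Cluster control plane address.
FROM ubuntu:20.04

# python3 runs the custom integration; expect drives the login scripts
RUN export DEBIAN_FRONTEND=noninteractive \
    && apt-get update \
    && apt-get install -y curl unzip python3 python3-pip expect \
    && rm -rf /var/lib/apt/lists/*

# kubectl plus the vSphere plugin, served by the Supervisor Cluster itself
RUN curl -sk https://<CONTROL-PLANE-IP>/wcp/plugin/linux-amd64/vsphere-plugin.zip \
      -o /tmp/vsphere-plugin.zip \
    && unzip /tmp/vsphere-plugin.zip -d /tmp/plugin \
    && mv /tmp/plugin/bin/* /usr/local/bin/ \
    && chmod +x /usr/local/bin/kubectl /usr/local/bin/kubectl-vsphere

# Add the TKG CLI here if your tasks need it (location varies by environment)
```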


Creating the Supervisor Namespace Blueprint in Cloud Assembly:

In vRealize Automation Cloud’s July release, a feature was introduced that allows you to create Supervisor Namespaces in vSphere 7 using Infrastructure-as-Code blueprints. We will be using this blueprint as part of the pipeline in Code Stream. Follow this blog to create the Supervisor Namespace blueprint. Your blueprint will look similar to the one below; you might have a different constraint tag for your environment.
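As a rough guide, a Supervisor Namespace blueprint has this general shape. Treat this strictly as a sketch: the resource type string and property names below are assumptions, so copy the exact values from the blueprint you build by following the referenced blog (or from the resource type picker in Cloud Assembly).

```yaml
# Sketch of a Supervisor Namespace blueprint -- verify the resource
# type and properties against your Cloud Assembly environment.
formatVersion: 1
inputs:
  namespaceName:
    type: string
    title: Supervisor Namespace Name
resources:
  SupervisorNamespace:
    type: Cloud.Vsphere.K8S.SupervisorNamespace   # assumption: check exact type
    properties:
      name: ${input.namespaceName}
      constraints:
        - tag: 'k8s:wld'   # your environment's constraint tag will differ
```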



Importing the Pipeline in Code Stream:

To keep this blog reasonably short, I am providing a 95%-complete pipeline for you to import into Code Stream. To import the pipeline, copy the YAML code from the GitHub repository, select the import pipeline button on the pipeline screen in Code Stream, and paste the YAML code into the provided code area. Before selecting IMPORT, change line number two of the YAML and enter the exact name of the vRealize Automation project this pipeline will be associated with.



Let’s walk through each stage and task of the pipeline:

Stage 1:

In stage one we create the Supervisor Namespace using the Cloud Assembly Blueprint and run the Custom Integration Script that adds the storage policy to the namespace.


  1. Task 1: Uses the Supervisor Namespace blueprint from Cloud Assembly to create the Supervisor Namespace in vSphere.
  2. Task 2: Uses the Custom Integration script we created earlier to add the Storage Policy to the namespace. Be sure to enter the information for each input of the Custom Integration, or make the inputs something the user must enter at request time.


Stage 2:

In stage two we will create a login script to access the newly created supervisor namespace, log in using the script, create a yaml describing the cluster definition, execute the YAML to build the cluster, then start a loop to query the status of the cluster until all nodes are in the running state.


  1. Task 1: This task creates a login script that uses expect to populate the password for the user you log in to the Supervisor Namespace with (a user with rights to build Kubernetes clusters). It uses the Code Stream variables you created earlier to populate the username and password.
    • Example Command: kubectl vsphere login --server --insecure-skip-tls-verify --vsphere-username ${}
  2. Task 2: Uses the login script created in the previous task to log in to vSphere using the vSphere Kubernetes CLI, creates a YAML file that describes the cluster configuration, and then executes the kubectl command to apply the YAML.
    • Notice in the cluster YAML that I have hard-coded the number of control plane nodes to 1, the number of worker nodes to 3, and the storage policy. You could make any of these configurable inputs, but I chose to hard-code them for this blog and demo.
  3. Task 3: Gets the Tanzu Kubernetes cluster information and loops through the command every 20 seconds until all nodes in the cluster are in the “running” state.
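For reference, the cluster YAML that task 2 generates looks roughly like the following TanzuKubernetesCluster manifest (v1alpha1 API). The cluster and namespace names, VM class, storage class, and distribution version are placeholders for your environment; only the node counts mirror the hard-coded values described above.

```yaml
# Sketch of the generated cluster definition; names/classes/version
# are placeholders for your environment.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: demo-cluster          # ${input.k8s_cluster_name} in the pipeline
  namespace: demo-namespace   # ${input.super_ns} in the pipeline
spec:
  distribution:
    version: v1.16            # pick a version available in your content library
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: k8s-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: k8s-storage-policy
```

The task applies this with `kubectl apply -f`, and task 3 then polls `kubectl get tanzukubernetescluster` until all nodes report running.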


Stage 3:

In stage 3 we again create a login script, but this time we log in to vSphere using the context of the cluster we just created. Then we create a service account and bind that newly created service account to the cluster-admin role.


  1. Task 1: This task creates a login script that uses expect to populate the password for the user you log in to the Supervisor Namespace with (a user with rights to build Kubernetes clusters). It is much like the task in stage 2, except that we log in to the cluster we just created by adding some extra switches to the login command.
    • Example Command: kubectl vsphere login --server --insecure-skip-tls-verify --vsphere-username ${} --tanzu-kubernetes-cluster-name ${input.k8s_cluster_name} --tanzu-kubernetes-cluster-namespace ${input.super_ns}
  2. Task 2: Uses the script created in the previous task to log in to vSphere using the vSphere Kubernetes CLI, creates a YAML file that describes the service account to create, and then executes the kubectl command to apply the YAML.
    • Notice in the service account YAML that I have hard-coded the service account name as “dev-admin”, but this could have been an input option allowing the user to enter the service account name.
  3. Task 3: This task uses the script created in task 1 to log in to vSphere using the vSphere Kubernetes CLI, creates a YAML file that describes the role binding for the newly created service account, and then executes the kubectl command to apply the YAML.
    • Notice that I, again, hard-coded the role binding to cluster-admin, but this could be an optional input the user selects at request time if you chose to allow that.
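The two manifests applied in tasks 2 and 3 look roughly like the following. The service account name “dev-admin” and the cluster-admin role match the hard-coded values described above; the `default` namespace and the binding name are my assumptions for this sketch.

```yaml
# Service account (task 2) and its cluster-admin binding (task 3).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev-admin
  namespace: default   # assumption: adjust if you use another namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-admin-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dev-admin
    namespace: default
```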


Stage 4:

In this stage we collect the cluster information and use that information to create a Code Stream Endpoint that utilizes the new Kubernetes cluster.


  1. Task 1: This task collects the information needed to create the Kubernetes endpoint in Code Stream: specifically, the cluster API IP address, the access token of the service account, and the cluster certificate fingerprint.
  2. Task 2: This task makes the REST call to vRealize Automation Cloud to get the access token for subsequent REST calls.
  3. Task 3: This task makes the POST REST call to create the Code Stream Kubernetes endpoint associated with the project you specify.
    • Note: You will need to modify the REST call to specify the name of the project. Replace "project": <ENTER PROJECT NAME> with your project name. There is also a conditional statement on this task that allows you to disable adding the cluster to Code Stream as an endpoint.
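The two REST calls in tasks 2 and 3 can be sketched as follows. The login exchange (POST /iaas/api/login with your API token) is the standard vRA Cloud pattern; the endpoint-creation body, however, is an approximation I built for illustration, so verify its property names against your vRA version’s Code Stream API documentation before relying on them.

```python
import json

# vRA Cloud API base URL
VRA_API = "https://api.mgmt.cloud.vmware.com"

def build_login_body(api_token: str) -> dict:
    """Body for POST /iaas/api/login -- exchanges the API (refresh)
    token for a short-lived bearer token used by subsequent calls."""
    return {"refreshToken": api_token}

def build_endpoint_body(project: str, name: str, cluster_url: str,
                        sa_token: str, fingerprint: str) -> dict:
    """Approximate body for POST /codestream/api/endpoints. The keys
    under "properties" are assumptions for this sketch -- check the
    Code Stream API docs for your version."""
    return {
        "project": project,
        "kind": "ENDPOINT",
        "name": name,
        "type": "k8s",
        "properties": {
            "kubernetesURL": cluster_url,
            "authType": "token",
            "token": sa_token,
            "fingerprint": fingerprint,
        },
    }

if __name__ == "__main__":
    print(f"POST {VRA_API}/iaas/api/login")
    print(json.dumps(build_login_body("<api_token>")))
```

In the pipeline, the bearer token returned by the login call goes into the Authorization header of the endpoint-creation POST.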



Finally, let’s add a notification to the requestor that provides the information they need to access the cluster. This notification will come as an email and include the details they can use to populate their KubeConfig. Click the name of the pipeline at the top of the design canvas and then select Notifications to create a new notification.


Now we can create a simple HTML-formatted email body that provides the requestor with the cluster information they will need:

Email Body
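A body along these lines works. This is a sketch of my own, not the exact email from the demo: the `${input.*}` expressions are the pipeline inputs named earlier in this post, and the control plane address and username are placeholders the requestor fills in.

```html
<!-- Sketch of a notification email body; ${...} are Code Stream
     bindings to this pipeline's inputs. -->
<h3>Your Tanzu Kubernetes cluster is ready!</h3>
<p>Cluster: ${input.k8s_cluster_name}</p>
<p>Supervisor Namespace: ${input.super_ns}</p>
<p>Log in with:</p>
<pre>kubectl vsphere login --server &lt;control-plane-ip&gt; \
  --vsphere-username &lt;your-username&gt; \
  --tanzu-kubernetes-cluster-name ${input.k8s_cluster_name} \
  --tanzu-kubernetes-cluster-namespace ${input.super_ns}</pre>
```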


Creating a Self-Service Catalog Item and Attaching an Approval Policy:

Now that we have the pipeline in place, we are ready to present this in the Service Broker catalog for your customers to consume through self-service. Obviously we don’t want just anyone to be able to deploy Kubernetes clusters, and maybe you gave users additional options for cluster size or different role bindings. To control this, we are going to add an approval policy to the catalog item so we have some governance in the process… also, it makes for a good demo!!! 😉

The first thing to do is enable and release the pipeline in Code Stream.


Now, in Service Broker, create a content source for Code Stream pipelines (if you don’t already have one), or go into your existing one and validate the content source to trigger Service Broker to inventory newly released pipelines.


Create a Custom Form in Service Broker. To make this easy, I have the custom form located here, which you can import.



Lastly, create your Approval policy and assign it to the newly created catalog item.




As you can see, you can do just about anything with Code Stream, even offering self-service Kubernetes clusters using the new vSphere with Kubernetes capabilities. Plus, you can now bring to Kubernetes the same governance and operating model your business customers have enjoyed from IT for virtual machines. I hope you enjoyed this blog and can use it to help move your business forward.



Other Blogs to Check Out:


Deploying Tanzu Kubernetes Grid with vRealize Automation

vRealize Automation provides self-service Supervisor Namespaces

Yes! Code Stream Pipelines as Catalog Item








