One feature that has been consistently requested by customers using Code Stream is the ability to use Kubernetes instead of a stand-alone Docker host to execute CI tasks in Pipelines. Well, today I’m pleased to announce that we now support Kubernetes Workspaces with Code Stream!
CI tasks in Code Stream have historically used a Docker Endpoint to execute the task. When a Pipeline is executed, the container image specified in the Pipeline’s workspace is started and the CI Agent is installed to communicate with Code Stream. The CI tasks are then executed in the container and the results are returned to the Pipeline execution. You can read more about Pipeline configuration here.
With Kubernetes workspaces, not much changes except that the container image runs on a Kubernetes Endpoint rather than a Docker Endpoint. The Kubernetes Endpoint runs the container image as a Pod and can use a Persistent Volume Claim to store the workspace artefacts, rather than the shared folder used with a Docker host. There are a few additional configuration options that can be used with the Kubernetes workspace:
Namespace – you can specify a Kubernetes Namespace in which the Pod running the container image will be created. If the Namespace does not already exist, then Code Stream will automatically create it.
Node Port – Code Stream communicates with the CI Agent running in the container image Pod via a NodePort in the Namespace. You can leave this blank to use an ephemeral port number, or specify a port between 30000 and 32767 (for example, if you are using a managed Kubernetes cluster or are in an environment where you need to open a firewall port). Any Pipeline whose workspace is configured with the same Namespace and Node Port will re-use that Node Port, or you can specify a different combination to use a different port.
Persistent Volume Claim – you can specify the name of a Kubernetes Persistent Volume Claim to use for the Pipeline workspace. If you do not specify a Persistent Volume Claim, the Pod will use an emptyDir Volume on the Kubernetes node to store the artefacts created by the Pipeline workspace. By specifying a Persistent Volume Claim you can ensure that the Pipeline artefacts are persisted beyond the life of the Pod – you could, for example, ensure all the logs are stored on an NFS share.
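To make these options concrete, here is a rough sketch of the kind of NodePort Service a Kubernetes workspace implies. This is illustrative only – the resource names, labels, and the CI Agent port are assumptions; the actual resources are generated by Code Stream:

```yaml
# Hypothetical sketch only -- names, labels, and agent port are assumed,
# and the real resources are created by Code Stream itself.
apiVersion: v1
kind: Service
metadata:
  name: example-ci-svc              # assumed name
  namespace: codestream-workspace   # the Namespace configured in the workspace
spec:
  type: NodePort
  selector:
    app: example-ci-pod             # assumed label on the workspace Pod
  ports:
    - port: 8080                    # assumed CI Agent port
      nodePort: 30001               # must fall in the allowed 30000-32767 range
```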
Configuring a Kubernetes Workspace
Create a Kubernetes Endpoint
To use a Kubernetes Endpoint to execute the Pipeline tasks, you’ll first need to have a Kubernetes cluster available, and configured as a Kubernetes Endpoint. For this example, I’ve got a Tanzu Kubernetes Grid cluster set up and configured as an Endpoint:
(Optional) Create the Namespace and Persistent Volume Claim
Persistent Volume Claims exist within a Namespace, so if you wish to use one you need to create both the Namespace and the Persistent Volume Claim.
kubectl create ns codestream-workspace
kubectl config set-context --current --namespace=codestream-workspace
Now in my new Namespace, I can find the appropriate Storage Class:
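Storage Classes are cluster-scoped rather than namespaced, so a simple list is enough to pick one (this assumes kubectl is pointed at your cluster):

```shell
# List the available Storage Classes on the cluster
# (Storage Classes are cluster-scoped, so no namespace flag is needed)
kubectl get storageclass
```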
I then create a new YAML file describing the Persistent Volume Claim, using that Storage Class and specifying the size I wish to claim, and apply it using kubectl apply -f codestream-workspace-pvc.yaml
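As a sketch, codestream-workspace-pvc.yaml might look something like the following – the storageClassName and requested size are assumptions you should replace with values from your own cluster:

```yaml
# codestream-workspace-pvc.yaml -- illustrative values only;
# replace storageClassName and the requested size with your own.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: codestream-workspace-pvc
  namespace: codestream-workspace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsan-default-storage-policy   # assumed; use a class from your cluster
  resources:
    requests:
      storage: 5Gi                                # assumed size
```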
I now have a new Namespace and a Persistent Volume Claim, not yet in use by any Pod, available on my Kubernetes cluster – I can verify this using kubectl get pvc -n codestream-workspace
Configuring a Pipeline to use a Kubernetes Workspace
When you configure a Pipeline Workspace the default option will be to use Docker, so existing Pipelines with no configured workspace type will default to Docker. To use the Kubernetes Workspace you simply select the Type and then a Kubernetes Endpoint – you can see I’ve selected my autotmm-services endpoint.
The Builder image URL and Image registry settings have not changed from the Docker workspace – you need to specify a container image that’s publicly available or, if it requires credentials to download, you can specify an Image registry Endpoint that contains the credentials. In my example below I’m using a Harbor instance that provides a proxy and cache for Docker Hub. What’s important is that your Kubernetes Endpoint must be able to pull the container image – otherwise the execution will fail!
I’ve specified my newly created Namespace, a Node Port and the Persistent Volume Claim I created earlier. As I mentioned before, the Node Port and Persistent Volume Claim settings are optional – Code Stream will use an ephemeral port and an emptyDir Volume if they are not specified.
Now that the workspace is configured, I can create a simple “hello world” CI task to test the execution – this is simply a shell script to echo “hello world”, and nothing changes in comparison to using a Docker workspace.
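The CI task body itself is nothing more than a shell script, for example:

```shell
# Minimal CI task body -- runs inside the workspace container
echo "hello world"
```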
Testing the Kubernetes Workspace
Now that we have the Kubernetes workspace configured, we can execute the Pipeline and look at the Kubernetes resources created.
While the Pipeline is in the PREPARING_WORKSPACE stage, it’s deploying the Pod and Service using the workspace settings we configured.
To verify the configuration of the workspace, we could dig into the details using kubectl describe pod/ingress-pod-30001, kubectl describe service/ingress-svc-3001 and kubectl describe persistentvolumeclaim/codestream-workspace-pvc.
And finally…one other little enhancement that has come with this update – you can resolve Code Stream Variables within the Workspace form, so you can update a whole range of Pipelines simply by updating a variable!
Next Steps
I hope you agree that this is a fantastic and much sought-after enhancement for vRealize Automation Code Stream, one that will allow you to use existing Kubernetes clusters rather than maintaining a stand-alone Docker host. I know that I’ll be switching over as quickly as I can! If you want to find out more about vRealize Automation, please visit our website; to learn more about our features, explore vRealize Automation Code Stream and vRealize Automation Cloud, and get started with a free 45-day trial!