What is a Blue/Green Deployment?
Wikipedia describes the Blue/Green deployment methodology as
…a method of installing changes to a web, app, or database server by swapping alternating production and staging servers.
It provides a way to deploy software upgrades with minimal downtime, and an easy and rapid rollback if anything goes wrong. The Blue deployment is the existing (live) version of the application; traffic is routed to this deployment via a load balancer, DNS name or another traffic-steering method (e.g. Ingress or HTTPProxy in Kubernetes). In the diagram below the URL demo-app.cmbu.local is directed to the Blue deployment of three tiers – these could be VMs, containers (Pods) or services running on a server.
When a new version of the code is released, an identical version of the application is deployed using the new code – this is the Green deployment. At this point an alternative URL is published to allow for the final testing of the new version before it’s made live. The Green and Blue deployments are running in parallel on identically configured servers/VMs/containers. The diagram below shows the Green application code is deployed and available on the test URL demo-app-test.cmbu.local, but the Blue deployment is still live.
Once testing is complete and confidence is high in the Green deployment, the live URL is re-routed to the Green instance, and users are live on the new code. At this point the Blue deployment is idle, but users can be rolled back if an issue arises. After a settling in period, assuming all is well, the Blue deployment can be decommissioned, leaving the Green deployment live.
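In Kubernetes, for example, the cut-over can be as simple as re-pointing a Service's label selector from the Blue pods to the Green pods. The sketch below is illustrative only – the names and `version` labels are assumptions, not taken from a real deployment:

```yaml
# Hypothetical live Service for demo-app – changing the "version"
# selector from "blue" to "green" re-routes live traffic to the
# Green pods without changing the external IP or DNS name.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: demo-app
    version: green   # was "blue" before the cut-over
```

Because only the selector changes, the rollback is the same edit in reverse.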
The next time the code is updated, the process is repeated, with the new code being deployed in parallel.
There are a few things to consider with this deployment methodology – for example, how do you manage state and sessions within your application? It might be that the cut-over to the new version is adjusted to something more like a Canary deployment, where a small amount of traffic is directed to a “canary” deployment to test the new code before a decision is made to move all traffic to the new deployment. With a Canary deployment you might move a small percentage of users to the canary, or maybe a geographic group, or some other subset of users (depending on your traffic-steering method and its capabilities). And of course the Blue/Green methodology could be used to upgrade individual services, rather than the whole application, so that each tier is routed as in the diagram below.
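As an illustration, a weighted canary split can be expressed with Contour's HTTPProxy (one of the traffic-steering options mentioned above). The service names and weights below are hypothetical:

```yaml
# Sketch of a weighted canary split using Contour's HTTPProxy.
# Service names and weights are illustrative assumptions.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-app
spec:
  virtualhost:
    fqdn: demo-app.cmbu.local
  routes:
  - services:
    - name: demo-app-blue     # existing deployment keeps 90% of traffic
      port: 443
      weight: 90
    - name: demo-app-canary   # canary receives 10% of traffic
      port: 443
      weight: 10
```

Shifting the weights gradually (90/10, 50/50, 0/100) turns the hard Blue/Green switch into a progressive rollout.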
Configuring Code Stream to automate Blue/Green Deployments in Kubernetes
To demonstrate using vRealize Automation Code Stream to automate the Blue/Green deployment methodology into a Kubernetes cluster, I’ve created a deliberately simple Kubernetes application – it’s simply an NGINX webserver with a LoadBalancer service.
When new YAML configuration code is committed to a Git repository (in this case GitHub, but other flavours are available), a webhook triggers the Blue/Green deployment pipeline in Code Stream.
The Blue/Green pipeline is executed and deploys the newly committed YAML to a Kubernetes cluster, using the commit ID to create unique objects. The Green deployment is then published on port 8443 and Code Stream pauses for an approval task. The Green deployment can be checked, then assuming all is well the approval task will continue and configure the Blue “live” load-balancer to direct traffic to the Green deployment. Again, the pipeline will pause for testing, and on approval will continue to delete the Blue deployment.
All the code used in this example is available in our VMwareCMBUTMM/vra-code-examples repository, under the Blue-Green Deployment folder. You can use the example YAML files to recreate this demonstration, but you will need to create your own Git repository to do so.
Add a Git Endpoint to Code Stream
In order to configure a Git webhook, I need a Git endpoint – this maps to a specific repository on my Git server – in this case GitHub. I’ve created a Personal Access Token in my GitHub account to authenticate with the GitHub API without using my password.
Note that the Repo URL uses “api.github.com” not just “github.com” and I’ve created a Code Stream Variable to hold my GitHub Personal Access Token.
Add a Kubernetes endpoint to Code Stream
In order to perform tasks on a Kubernetes cluster, we first need to add the cluster as an endpoint – this is reasonably straightforward as the values will match the contents of your cluster’s kubeconfig file. Below I’m using Certificate authentication, but you can use a token, or a username and password, too. Once you’ve completed the form, click Accept Certificate to view and accept the certificate details for the cluster, then Validate the configuration.
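For reference, the endpoint values map directly onto a standard kubeconfig file. A minimal sketch is shown below – the cluster name, server URL and base64 placeholders are assumptions:

```yaml
# Minimal kubeconfig sketch – the endpoint form fields correspond to
# the server URL, CA data and client certificate/key data shown here.
apiVersion: v1
kind: Config
clusters:
- name: demo-cluster
  cluster:
    server: https://demo-cluster.cmbu.local:6443
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: demo-admin
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: demo-cluster
  context:
    cluster: demo-cluster
    user: demo-admin
```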
Import the Code Stream Pipeline
My example pipeline is included in the repository and can be imported into your Code Stream instance – but it will require configuring for your environment.
Configure the Pipeline Workspace
Under the Workspace tab, the pipeline must be configured to use a Docker host (see Creating a Docker host for vRealize Automation Code Stream) to execute the CI tasks. The Builder image should be one that is configured for Code Stream; you can use my sammcgeown/codestream-ci:latest image, which is publicly available on Docker Hub, or your own image. Finally, you must check the “Git clone” checkbox to ensure the repository is cloned when the pipeline is triggered by the Git webhook.
Configure the Pipeline Inputs
Under the Input tab, the pipeline must be configured with the Git auto-inject parameters to ensure that the Git repository is cloned into the CI tasks. You should see the GIT_* parameters listed in your pipeline.
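As a sketch, the auto-injected parameters appear in the pipeline YAML roughly as below – the exact list can vary between Code Stream versions, so treat this as illustrative rather than exhaustive:

```yaml
# Illustrative input section of the pipeline YAML – Code Stream's Git
# trigger populates these values on each webhook-triggered execution.
input:
  GIT_BRANCH_NAME: ''
  GIT_CHANGE_SUBJECT: ''
  GIT_COMMIT_ID: ''
  GIT_EVENT_DESCRIPTION: ''
  GIT_EVENT_OWNER_NAME: ''
  GIT_EVENT_TIMESTAMP: ''
  GIT_REPO_NAME: ''
  GIT_SERVER_URL: ''
```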
Configure the Kubernetes tasks
Unless your Kubernetes endpoint is named identically to mine, you’ll need to update the Kubernetes tasks under the Model tab by selecting each task and updating the Kubernetes Cluster value. You could also update the name by manually editing the pipeline YAML file before importing it (search/replace is going to be quicker than updating each Kubernetes task!)
Configuring the Git webhook
With the components in place, it’s now possible to configure the Git webhook to trigger the pipeline when a Push is made to the repository.
Things to note when creating a webhook:
- Make sure you use the right branch name (GitHub now defaults to “main” rather than “master”).
- Generate a secret token to use in the GitHub configuration.
- I’ve not configured any file inclusions/exclusions, but it’s worth doing to avoid accidental triggers.
- The API Token under the Trigger section is the API token your Git webhook will use to access vRealize Automation – it is *not* the API token used to access your Git server!
When the webhook is saved, log into your Git repository and look at the Webhook settings – it should be created for you:
Now when new code is pushed to the Git repository a new execution should be triggered under the Git > Activity page:
Understanding the Pipeline Stages and Tasks
The pipeline itself is made up of Stages and Tasks.
The Kubernetes tasks use specific files from the Git repository, with the GIT_COMMIT_ID input passed as a parameter “COMMITID” to the template. When Code Stream executes the task, the parameter replaces the variable in the template – allowing us to create a unique name for objects created from a commit:
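As a sketch, a deployment template using such a placeholder might look like the following – the manifest content and the placeholder syntax shown are illustrative, not the repository's actual files:

```yaml
# Illustrative template excerpt – the COMMITID token is a placeholder
# that Code Stream replaces with the GIT_COMMIT_ID input at execution
# time, giving every commit uniquely named, uniquely labelled objects.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-$$COMMITID
  labels:
    app: demo-app
    commit: "$$COMMITID"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
      commit: "$$COMMITID"
  template:
    metadata:
      labels:
        app: demo-app
        commit: "$$COMMITID"
    spec:
      containers:
      - name: nginx
        image: nginx:stable
```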
The commit ID of the previous commit is exported from the first Task in the first Stage (Export Commit ID) so that we are able to identify components from the existing (Blue) deployment. This means that they can be deleted later in the Pipeline, after a successful deployment of the Green code.
Walking through the stages:
The Common stage performs generic setup and ensures the namespace and SSL secret exist.
- Export Commit ID – this CI task is executed on the Docker host in a container, and is used to extract the previous Commit ID from the Git repository.
- create-namespace – this Kubernetes task creates the namespace (if it doesn’t exist already)
- create-ssl-secret – this Kubernetes task creates a secret containing the SSL certificate used by the application (if it doesn’t exist already)
The Deploy Green Application stage deploys the Green instance of the application based on the Commit ID
- create-configmap-nginx/html – these two parallel Kubernetes tasks create the configMaps that contain the NGINX configuration and an HTML file
- create-deployment – this Kubernetes task creates a new deployment using the configMaps and secrets created earlier
- create-test-service – this Kubernetes task creates a LoadBalancer service on port 8443, mapping to the Green deployment, which allows us to test the Green deployment before making it live
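A hedged sketch of such a test service is shown below – the name, labels and ports are assumptions rather than the repository's actual manifest:

```yaml
# Illustrative test Service – the commit label selects only the Green
# pods, so the test URL reaches the new code while the live service
# continues to serve the Blue deployment.
apiVersion: v1
kind: Service
metadata:
  name: demo-app-test       # real objects include the commit ID in the name
spec:
  type: LoadBalancer
  ports:
  - port: 8443              # test port, separate from the live service
    targetPort: 443
  selector:
    app: demo-app
    commit: "<green commit ID>"
```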
The Test and Migrate stage prompts the user to test, then publish the Green application
- Test Green Application – this User Operation task prompts the user to test the test service URL to ensure the Green deployment is working, and continues on approval
- update-public-service – this Kubernetes task updates the public LoadBalancer to point to the Green deployment
- Test Green Application – this User Operation task prompts the user to test the public service URL to ensure the Green deployment is live, and continues on approval
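Conceptually, the update-public-service task amounts to re-applying the public Service with its selector pointed at the Green commit. As a sketch (the labels are assumptions, not the repository's actual manifests), only the selector changes:

```yaml
# Illustrative patch for the live Service – nothing else in the
# Service changes, so the external IP and DNS name are preserved
# while traffic moves from the Blue pods to the Green pods.
spec:
  selector:
    app: demo-app
    commit: "<green commit ID>"
```

Rolling back at this point would mean re-applying the selector with the previous (Blue) commit ID.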
The Delete Blue Application stage removes the old (Blue) deployment
- delete-test-service – this Kubernetes task removes the test LoadBalancer service, which is no longer needed
- delete-deployment – this Kubernetes task deletes the old (Blue) deployment using the previous commit ID to identify it
- delete-configmap-nginx/html – these two parallel Kubernetes tasks delete the old (Blue) configMaps using the previous commit ID to identify them
If you want to find out more about vRealize Automation please visit our website, or to learn more about our features, vRealize Automation Code Stream and explore vRealize Automation Cloud get started with a free 45-day trial!