By James Wirth, Consulting Architect

Feature developers are concerned with reliably bringing application features from idea to production as fast as possible. If this process occurs too slowly or inconsistently, it can have many significant negative impacts on a business, including but not limited to:

  1. Inability to react to and capitalize on new market opportunities
  2. Damage to brand reputation due to dated application look and feel
  3. Extended outages due to failed upgrades

The purpose of this post is to provide a conceptual design overview for a base-level application build and deploy pipeline based on VMware and Pivotal tools. The goal of implementing such a solution is to improve both the speed and reliability of builds and deployments through automation. This type of solution is often referred to as a continuous integration/continuous deployment (CI/CD) pipeline. If you are interested in checking out some demonstrative code, there are many examples of Concourse pipelines on GitHub; this repo is the one I used while testing and verifying this concept.

The purpose of this post is not to recommend specific tools or software products. Product and technology names are included only to aid in communicating the concept. The intent is to describe an overall solution that can provide measurable benefit to feature developers. Replacement tools or components could be substituted based on the specific requirements of the users of the platform.

Design Decisions

There are many possible ways to configure such automation, so to guide the design of this demonstrative pipeline several design decisions were made.

  1. Design must be able to operate entirely within a private cloud, i.e. internet connectivity must not be a requirement.
  2. Where possible, proprietary capabilities should be avoided so that alternate tools or deployment methodologies can be selected as necessary.
  3. Solution should form a base-level design that can be built upon, but the initial design should be as simple as possible to assist with ease of adoption.
  4. Solution should be guided by VMware DevOps guiding principles and models for DevOps solutions as described in the VMware white paper DevOps and Agile Development.

Conceptual Design – Build, Test and Archive

The conceptual diagram below represents a workflow that could be used to build a container image and store it in a registry.

Container Build

This pipeline shows components grouped into three main areas labelled GitLab, Concourse and Harbor. A commit to GitLab triggers the Concourse image build process, which finishes with the newly built image being pushed to the Harbor registry. If we were to match these components, and other supporting tools, to what has become affectionately known as The 10 Stacks of DevOps as described in the VMware white paper DevOps and Agile Development, it would look something like the following:

Stack                           Component
Plan Stack                      Trello
Coding Stack                    Sublime Text
Commit Stack                    GitLab
Continuous Integration Stack    Concourse
Test Stack                      Concourse*
Artifact Stack                  Harbor
Continuous Deployment Stack     Concourse
Configuration Stack             PKS
Control Stack                   Concourse
Feedback Stack                  Git Issues


*While the underlying pipeline code remains the same, applying different configuration at run time to a particular pipeline instance allows pipelines to address and deploy to dev/test/prod environments. Developers and SREs can use the same code base to stand up and operate the pipelines, while SREs apply configuration that grants access to penetration testing, load testing and production environments.
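As a minimal sketch of that idea, the same pipeline definition could be instantiated with a different variables file per environment. The file names, keys and endpoints below are all hypothetical:

```yaml
# params-dev.yml (hypothetical): values for the dev pipeline instance
harbor-repository: harbor.example.com/dev/app
pks-cluster-api: https://pks-dev.example.com:8443

# params-prod.yml (hypothetical): same pipeline code, production targets,
# typically set by an SRE with access to the production credentials
harbor-repository: harbor.example.com/prod/app
pks-cluster-api: https://pks-prod.example.com:8443
```

Each instance would then be created from the same pipeline file, e.g. with Concourse's fly CLI: `fly set-pipeline -c pipeline.yml -l params-dev.yml`.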

Note: Since this is a base-level container build pipeline, not every component is addressed by the diagram, but the list above provides some ideas as to what a typical set of integrated tools could be.
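To make the container build flow concrete (Git commit → Concourse build → push to Harbor), here is a minimal Concourse pipeline sketch. All resource names, URIs and credential variables are hypothetical, not a specific recommendation:

```yaml
resources:
- name: app-source
  type: git
  source:
    uri: https://gitlab.example.com/team/app.git   # hypothetical GitLab repo
    branch: master

- name: app-image
  type: docker-image
  source:
    repository: harbor.example.com/library/app     # hypothetical Harbor project
    username: ((harbor-username))
    password: ((harbor-password))

jobs:
- name: build-and-push
  plan:
  - get: app-source
    trigger: true          # a commit to GitLab triggers the build
  - put: app-image
    params:
      build: app-source    # build the Dockerfile in the checked-out repo
```

Because the pipeline only talks to the internal GitLab and Harbor endpoints, it satisfies the design decision that internet connectivity must not be a requirement.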

Conceptual Design – Subsequent Deploy

The following conceptual diagram shows the next stage of a pipeline which automates the deployment of a container from the container registry into a subsequent environment, which is VMware PKS in this case. There could be many such subsequent deployments depending on the number of environments a particular solution employs.

Subsequent Deployment(s)

The important differences to note in this diagram are as follows:

No Feature Code

The feature code component is not listed in the diagram because it was already bundled into a container image and pushed to the Harbor registry. The infrastructure code is still required, though, so the pipeline pulls it in as a resource. An example of infrastructure code in this case would be the Kubernetes manifest.
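A sketch of what that infrastructure code might look like, assuming a hypothetical application name and Harbor image path:

```yaml
# Hypothetical Kubernetes manifest: the "infrastructure code" the deploy
# pipeline pulls in as a resource and applies to the PKS cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: harbor.example.com/library/app:1.0.0   # image built earlier
        ports:
        - containerPort: 8080
```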

External Trigger

This pipeline is triggered by something external rather than a code commit. It could be a manual trigger or perhaps an automated trigger after a successful automated test process completes.
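In Concourse terms, one way to express such a trigger is to watch the image resource itself rather than a source repository. The job and resource names below are hypothetical:

```yaml
# Hypothetical sketch: deploy when a new image version appears in Harbor
# (and has passed the build job), rather than on a source-code commit.
jobs:
- name: deploy-to-test
  plan:
  - get: app-image
    trigger: true
    passed: [build-and-push]   # only versions that passed the build job
  - get: infra-source          # infrastructure code, e.g. Kubernetes manifests
  # ...a task here would apply the manifest to the target PKS cluster...
```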

Multiple Pipelines

There could be any number of these pipelines in a given solution, each process being essentially the same. The container passes through various testing and analysis stages in the various environments as it progressively makes its way to deployment in production.

Summary

This post has described a conceptual design for a container build pipeline. To reiterate, it is meant as an example rather than a specific recommendation of the tools that should be used. An alternate or additional solution could use the following:


Stack                           Component
Plan Stack                      Trello
Coding Stack                    Sublime Text
Commit Stack                    GitHub
Continuous Integration Stack    Jenkins
Test Stack                      Selenium
Artifact Stack                  Artifactory
Continuous Deployment Stack     vRealize Orchestrator
Configuration Stack             Ansible
Control Stack                   vRealize Automation
Feedback Stack                  Git Issues


Remember to check out the code repository linked at the top of the post for an example of how a solution following this conceptual model could be implemented.


About the Author
James Wirth works in the Professional Service Engineering Team designing services solutions for VMware customers. He is a proven cloud computing and virtualization industry veteran with over 10 years’ experience leading customers in Asia-Pacific and North America through their cloud computing journey. @jameswwirth