In my first blog post and webinar for this DevOps for Infrastructure series, we talked about the overall DevOps cycle and how vRealize Automation can facilitate DevOps for Infrastructure. I’ve also blogged previously about Infrastructure as Code and vRealize Automation, and about how you can get started with the vRealize Automation Terraform Provider. In this second part (and the related webinar) we focus on the Infrastructure as Code aspect of DevOps: how vRealize Automation implements Infrastructure as Code, and how you can use it to improve reliability, increase the speed of development, reduce failures, and fix problems more quickly.
The Mechanics of Infrastructure As Code
Let’s start with what I’ve termed the mechanics of Infrastructure as Code – how Infrastructure as Code actually works: what the components are, and how they work together to achieve the desired outcome.
In the diagram below there are two blocks in the “Infrastructure as Code” section – this is typical of any IaC implementation – code that defines variables, and a definition of the desired infrastructure state.
- Variables – values that change on a per-deployment basis, like the name of a VM, or the network port to load balance on. These can be provided at execution time as inputs, environment variables, or in an answer file or API call.
- Definition – this is a code description of a generic desired end state, and uses the variables provided to customize each deployment of the end state. It doesn’t typically describe how to get to the desired state; that’s determined by what I’ve termed the Fulfillment Engine.
Together, the Variables and Definition describe the desired Infrastructure State in a well-defined format – whether that’s a Domain Specific Language (e.g. HCL for Terraform) or a more generic language such as JSON or YAML (e.g. AWS CloudFormation).
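As a concrete sketch, here is roughly what the two blocks might look like in a YAML dialect similar to the one vRealize Automation uses (shown later in this post) – the input and resource names are illustrative, not taken from a real deployment:

```yaml
# Illustrative sketch only – input names, resource names, and property
# values are hypothetical examples.
formatVersion: 1
inputs:                        # Variables: values that change per deployment
  vmName:
    type: string
  lbPort:
    type: integer
    default: 443
resources:                     # Definition: the generic desired end state
  webServer:
    type: Cloud.Machine
    properties:
      name: '${input.vmName}'  # the variable customizes each deployment
      image: ubuntu
      flavor: small
```

The inputs section carries the per-deployment values, while the resources section is the generic description that the Fulfillment Engine turns into real infrastructure.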
Defining the infrastructure as code enables these definitions to be managed using techniques that traditional developers use to manage their code. They can be edited with any text editor, and managed within a Source Control system such as Git, with all of the advantages of tightly controlled versioning – including, but not limited to:
- The ability to roll back to a previous configuration with confidence
- Maintaining environments at different versions
- Self-documenting – commit messages document what changed, who changed it, and why (ideally!)
The code is processed by a Fulfillment Engine, which is responsible for taking the code definition and implementing it. The Fulfillment Engine knows how to communicate with the various infrastructure endpoints – typically IaaS, CaaS or PaaS. The Fulfillment Engine translates the generic definition provided by the code into a specific infrastructure deployment and state.
Changes to the Variables or Definition are processed by the Fulfillment Engine to update the Infrastructure State. Depending on the mutability of the infrastructure and the type of change, that might mean re-deploying or modifying the deployment: for example, changing the number of web front-end servers in a cluster would likely just add or remove a front-end web server, but changing the underlying OS would require the servers to be re-deployed.
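For instance, the scaling change described above is typically just a value change in the definition. A hedged sketch in vRealize Automation-style YAML (the resource name and property values are illustrative):

```yaml
resources:
  frontend:
    type: Cloud.Machine
    properties:
      count: 3        # changing 3 to 4 would likely just add one server
      image: ubuntu   # changing the image/OS is destructive – servers re-deploy
      flavor: small
```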
At this point, it’s worth re-iterating that Application State is not necessarily the same as Infrastructure State, so it may be necessary to employ methods such as Blue/Green deployments, migrations, or retrieving remote Application State from within the deployment to ensure that changes do not affect availability.
Infrastructure as Code in vRealize Automation
The “Low Code” Approach
vRealize Automation offers a “low code” approach to getting started with Infrastructure as Code: the Blueprint Designer within the platform provides a palette of components that can be dragged onto a design canvas. The visual representation shown on the canvas is actually rendered from code – the blueprint itself is YAML – displayed in the code pane on the right-hand side of the canvas. The “low code” approach lets blueprint designers quickly build out the required YAML and learn the structure of the code until they’re more familiar with it. Everything on the canvas can be modified by editing the code, and while the graphical interface can configure most of the options, some more advanced blueprint features need to be created in the code editor.
As more components are added to the blueprint, the YAML code is built out. When you edit the YAML directly, there’s IntelliSense-like code completion to help complete the properties of an object – the screenshot below shows the properties of a load balancer being built out, with suggestions in a dropdown.
Through a combination of dragging-and-dropping components onto the canvas, editing the code directly, and using the properties and inputs editors, you can build out more complex multi-tiered applications, such as the one shown below.
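A multi-tiered blueprint of that kind might look roughly like the following. This is an indicative sketch only – the resource and property names here should be confirmed with the code completion described above, as they may differ between versions:

```yaml
# Indicative multi-tier sketch – resource names, property names, and
# values are illustrative examples, not a tested blueprint.
formatVersion: 1
inputs:
  webCount:
    type: integer
    default: 2
resources:
  loadBalancer:
    type: Cloud.LoadBalancer
    properties:
      routes:
        - protocol: HTTP
          port: 80
          instancePort: 8080
      instances:
        - '${resource.web.id}'   # point the load balancer at the web tier
  web:
    type: Cloud.Machine
    properties:
      count: '${input.webCount}' # web tier scales via an input variable
      image: ubuntu
      flavor: small
  db:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: medium
```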
For now, I will grab a copy of the YAML code and delete this manually created blueprint. The YAML will be used to add the blueprint to the Git repository later.
Git Integration with vRealize Automation Blueprints
vRealize Automation Projects can be configured to provide one-way synchronization from a Git repository, allowing the release of new blueprint code directly from source control. Projects can be mapped to a specific branch within a repository, allowing for the promotion of code through environments using GitOps methodology.
Once a Git integration endpoint has been added, Projects can be configured to synchronize with the intended branch – for example, in the image below the Development project is synchronizing with the development branch of my vra-cloud-blueprints repository.
Different versions of the same blueprint are mapped to their respective projects from their associated git branches, as the Blueprints view below shows:
Adding a blueprint using Git
With the Git integration configured, you can now manage your blueprints as code using your editor of choice and everyday git commands. Below I’m using Visual Studio Code with its built-in Git integration to add the blueprint YAML we created earlier in the blueprint editor. I’ve created a new folder in my local copy of the git repository, and a new file called “blueprint.yaml” containing the code.
I can manage the git repository using the Visual Studio Code plugins, or by using git commands directly. For example, I used “git status” to see that there were untracked files in my repository – the IaC Webinar folder and the blueprint.yaml file. I then used “git add .” to stage all of the untracked files (I could also swap the “.” for the path to specific files). Next, I used “git commit -m "IaC Webinar"” to commit those changes to my local repository with a commit message of “IaC Webinar”. Finally, I used “git push” to synchronize those changes with my remote (GitHub) repository. All of this was done on the staging branch of my repository, which maps to the Staging project in Cloud Assembly.
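The sequence above can be sketched end-to-end. This is a self-contained dry run in a throwaway local repository – the folder, file, branch, and commit message mirror the example, and the push to the remote is shown commented out since no remote exists here:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
# create a local repo on a staging branch (mirroring the example)
git init -q
git checkout -q -b staging
git config user.email "demo@example.com"
git config user.name "Demo"
# the blueprint YAML captured from the blueprint editor goes into a new folder
mkdir -p "IaC Webinar"
printf 'formatVersion: 1\n' > "IaC Webinar/blueprint.yaml"
git status --short               # shows the new folder as untracked (??)
git add .                        # stage all untracked files
git commit -q -m "IaC Webinar"   # commit with the message from the post
git log --oneline -1             # confirm the commit landed
# git push origin staging        # would sync to the remote (GitHub) repository
```

With a real remote configured, the final push is what triggers the one-way synchronization into the mapped project.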
If you’re unfamiliar with the git command line, you can also achieve the same thing using the Source Control tab of Visual Studio Code.
Once the repository has synchronized with Cloud Assembly, the new blueprint is available.
Thanks for reading – I hope this gives you a good introduction to some of the basics of Infrastructure as Code, and to how you can use vRealize Automation to start managing your infrastructure using those principles.
You can go back and watch the Part 2: Demystifying Infrastructure-as-Code webinar that this post is based on here: https://bit.ly/vRAIaC