There are lots of opinions out there about what DevOps is, and probably even more about what it isn’t, but one thing that tends to be agreed on is that it involves the coming together of development and operations functions within a business, and having them collaborate in order to ship software more frequently and efficiently. It is a methodology that often demands changes to company culture, increases in communication, and more often than not, a whole load more automation. The number of companies operating in a DevOps fashion, or at least looking at how they could do so in the near future, is rising steadily.

It isn’t surprising, then, that one of VMware’s key initiatives for 2016 is to enable “DevOps-Ready IT”, which essentially means making sure that companies running VMware virtualization software are able to provide the services that DevOps teams need. To satisfy such teams, IT needs to be able to automate the delivery of infrastructure, middleware and applications quickly and securely, and developers need to be able to request and control these services using APIs. For the IT teams providing these services, keeping up with the requirements of a fast-paced DevOps team can be challenging. In this post I’ll focus on how IT teams that are running vRealize Automation can achieve the following:
- Remove the time spent replicating Blueprints and their dependencies manually in each vRealize Automation environment.
- Reduce the time spent manually testing each update to a Blueprint by automating common test cases.
- Prevent downtime caused by manual errors and changing Blueprints directly in production.
- Synchronize Blueprints between multiple tenants with ease.
- Store previous versions in a repository and roll back quickly if a change causes problems.
- Provide a catalog of services that is always relevant, reducing the need for developers to go directly to external services to get what they need.
The Importance of IT Staying Relevant
Once the decision is taken to provide developers with a self-service catalog of shared infrastructure services, it is common for IT teams to spend a significant amount of time designing and testing that catalog of services. The services offered are perfectly relevant when the system first goes live, yet how the catalog will be maintained, updated and kept useful for the development teams using it once the service is in production often gets overlooked. A key question to ask is “How quickly after a new OS or middleware version gets released can we have it show up as a consumable service in the catalog?”. If it takes too long and the self-service catalog doesn’t give developers access to what they need when they need it, they will likely either build it themselves or look elsewhere to get the functionality they need. In a day and age where internal IT teams are acting as service providers and often competing with public cloud providers, this leads to user frustration and, in some cases, can even spell the demise of an internal catalog service. It is, therefore, imperative that IT teams have a streamlined process (aka a Release Pipeline) for getting new and updated services from development to production.
DevOps is Not Just for Applications
DevOps teams tend to be able to push changes to production much more frequently than those using traditional practices, often shipping changes and bug fixes multiple times a day. This frequency of change is usually achieved by automating the Release Pipeline: for a software application, this means that when a developer commits a code change, an automated build and deployment to a test system is triggered, a suite of automated tests is run against the application, and only if the tests pass is the change deployed to production. When this Release Pipeline is automatically triggered each time there is a change, it is known as Continuous Delivery (the CD in CI/CD). Unfortunately, many IT teams invest very little time in automation and as a result spend a long time creating services in their development environments (if they have one), then re-creating them in their test environments, and then spend just as long re-creating them again by hand in production! This causes all sorts of problems due to manual errors and a lack of version control, not to mention the manpower required. A good way to solve this issue is to embrace DevOps and develop, test and release these services to the catalog just as a DevOps team would handle a new feature or an update to an application: automate it!
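To make that flow concrete, here is a minimal sketch of an automated Release Pipeline in Python. It is purely illustrative and assumes nothing about any particular product: the deploy and run_tests functions are hypothetical stand-ins that simply print what a real pipeline stage would do.

```python
# Minimal, illustrative sketch of a Continuous Delivery pipeline for catalog
# content. The deploy and test steps are stubs that stand in for whatever
# tooling a team actually uses; only the overall flow is the point.

def deploy(environment: str, blueprint: str) -> None:
    """Stand-in for importing the Blueprint into an environment."""
    print(f"Deploying '{blueprint}' to {environment}")


def run_tests(environment: str, blueprint: str) -> bool:
    """Stand-in for running the automated test suite against the service."""
    print(f"Testing '{blueprint}' in {environment}")
    return True


def release_pipeline(blueprint: str) -> bool:
    """Triggered on every change: deploy to test, verify, then promote."""
    deploy("test", blueprint)
    if not run_tests("test", blueprint):
        print("Tests failed - the change never reaches production")
        return False
    deploy("production", blueprint)
    return True


if __name__ == "__main__":
    release_pipeline("CentOS7-with-Tomcat")
```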
Pre-Built Release Pipelines for vRealize Automation Blueprints
The vRealize Code Stream Management Pack for IT DevOps allows IT teams to set up Continuous Delivery of vRealize Automation Blueprints, as well as a host of other content types such as vRealize Operations dashboards and vSphere templates. It allows you to design your Blueprint in a development environment, automate the process of lifting and shifting that Blueprint to a test environment, run tests against it there, and move it to production only if the tests pass. The management pack can also identify all of the Blueprint’s dependencies, such as other Blueprints, XaaS Blueprints, Software components, Property Groups, Property Definitions, vRealize Orchestrator Workflows and Actions, and the catalog item icon, and deploy these along with the Blueprint to the next environment. This process can be triggered automatically each time a Blueprint is saved, or initiated by the Blueprint author once they are ready. The management pack can also be used to move Blueprints between different tenants, as a tenant is treated just like another instance of vRealize Automation.
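The sketch below illustrates the “lift and shift with dependencies” idea under the same caveat: the bundle structure and promote function are hypothetical examples of the concept, not the management pack’s actual API or export format.

```python
# Illustrative only: a Blueprint travels between environments together with
# everything it references. Names and structure are made up for the example.

BLUEPRINT_BUNDLE = {
    "blueprint": "Web-App-3-Tier",
    "dependencies": {
        "software_components": ["Apache-2.4", "Tomcat-8"],
        "property_groups": ["Common-Linux-Properties"],
        "vro_workflows": ["Register DNS", "Add To Load Balancer"],
        "catalog_icon": ["web-app.png"],
    },
}


def promote(bundle: dict, source: str, target: str) -> None:
    """Move a Blueprint and its full dependency set to the next environment."""
    print(f"Exporting '{bundle['blueprint']}' and its dependencies from {source}")
    for kind, items in bundle["dependencies"].items():
        for item in items:
            print(f"  importing {kind}: {item} into {target}")
    print(f"Importing Blueprint '{bundle['blueprint']}' into {target}")


promote(BLUEPRINT_BUNDLE, source="dev-vra.example.com", target="test-vra.example.com")
```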
Even though it is called a vRealize Code Stream Management Pack, it requires no prior knowledge of vRealize Code Stream. The management pack comes with a pre-built, extensible, Code Stream release pipeline that will handle the flow of content from development to production. It also allows you to easily define a set of tests to be run against a Blueprint in the test environment.
Maintaining Control
The test framework is written in vRealize Orchestrator so that service authors can write tests on a familiar platform and can leverage a large number of plugins, including the vRealize Automation plugin with all of its pre-built workflows and script actions. A common test workflow would request the Blueprint that has just been imported into the test vRealize Automation environment, check that the resulting provisioned service is operating as desired, and then tear down the service.
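As a rough illustration of that request / verify / tear down pattern, here is a Python sketch. The real tests are vRealize Orchestrator workflows with access to the vRealize Automation plugin; the helpers below are hypothetical stand-ins, and the HTTP check is just one example of “operating as desired”.

```python
# Hypothetical request / verify / tear down test. Real tests would be
# vRealize Orchestrator workflows; these helpers are stand-ins for them.
import urllib.request


def request_catalog_item(name: str) -> dict:
    """Stand-in for requesting the catalog item built from the new Blueprint."""
    print(f"Requesting catalog item '{name}' in the test environment")
    return {"address": "test-service.example.com"}      # assumed deployment details


def destroy_deployment(deployment: dict) -> None:
    """Stand-in for tearing the provisioned service back down."""
    print(f"Destroying deployment at {deployment['address']}")


def test_blueprint(catalog_item: str) -> bool:
    deployment = request_catalog_item(catalog_item)
    try:
        # Check that the provisioned service actually works, e.g. answers HTTP.
        with urllib.request.urlopen(f"http://{deployment['address']}/", timeout=30) as response:
            return response.status == 200
    finally:
        destroy_deployment(deployment)                   # clean up even if the check fails
```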
If any test workflows fail in the test environment, the pipeline stops and the Blueprint is not pushed to production, meaning that production remains stable as you are developing content. Additionally, the management pack supports approvals via Code Stream gating rules, so that admins can approve or reject Blueprint releases to particular environments if they are flagged for approval.
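For illustration, a gating step can be thought of as a simple approve-or-reject check before promotion. The sketch below uses a console prompt as a stand-in for Code Stream’s gating rules; it is not how the product implements them.

```python
# Illustrative approval gate: a release flagged for approval pauses until it
# is approved or rejected. The input() prompt is a stand-in for the real
# Code Stream gating rules.

def approval_gate(release: str, environment: str, requires_approval: bool) -> bool:
    """Return True if the release may proceed to the given environment."""
    if not requires_approval:
        return True
    answer = input(f"Approve release '{release}' to {environment}? [y/N] ")
    return answer.strip().lower() == "y"


def promote_if_approved(release: str, environment: str) -> None:
    if approval_gate(release, environment, requires_approval=(environment == "production")):
        print(f"Promoting '{release}' to {environment}")
    else:
        print(f"Release '{release}' was rejected - {environment} is left unchanged")
```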
Grouping Related Content into Releases
A big issue for service authors is often how to package up a group of Blueprints, resource actions and their underlying workflows into a consumable package, and then test and release them as a unit. The management pack allows you to create groups of different types of content that can be pushed through the release pipeline together; for example, 2 vRealize Automation Blueprints, 5 resource actions, 10 vRealize Orchestrator workflows and their script actions might make up a particular release version (see the sketch below). Alternatively, you can select every Blueprint in your development environment and, in one request, move them all to test and through to production!
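An illustrative release manifest for that example might look like the following; the structure is made up for the sake of the example and is not the management pack’s actual format.

```python
# Example of grouping related content into a single versioned release that
# moves through the pipeline as one unit. The structure is illustrative only.

RELEASE = {
    "version": "1.2",
    "vra_blueprints": ["Web-Tier", "DB-Tier"],
    "resource_actions": ["Snapshot", "Resize", "Add Disk", "Register DNS", "Renew Certificate"],
    "vro_workflows": [f"Supporting Workflow {i}" for i in range(1, 11)],
}


def item_count(release: dict) -> int:
    """Count everything that will travel through the pipeline together."""
    return sum(len(value) for value in release.values() if isinstance(value, list))


print(f"Release {RELEASE['version']} contains {item_count(RELEASE)} content items")
```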
Ditch the GUI (If You Want To…)
All of the above functionality can be driven via the GUI. Alternatively, the management pack ships with a vRealize CloudClient CLI plugin that can be used to script, or interactively run, the content transfer and testing commands. The management pack itself can also be driven from Jenkins by using the vRealize Automation plugin.
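As a sketch of what scripting this might look like, the snippet below simply wraps a CLI invocation. The command strings are deliberate placeholders, and even the cloudclient.sh launcher name is an assumption here; the real command names and arguments come from the CloudClient plugin documentation.

```python
# Sketch of driving content transfer and testing from a script instead of the
# GUI. The command strings are placeholders, not real CloudClient commands.
import subprocess


def run_cli(command: str, dry_run: bool = True) -> None:
    """Run a (placeholder) CLI command, or just print it in dry-run mode."""
    if dry_run:
        print(f"Would run: cloudclient.sh {command}")   # assumed launcher name
    else:
        subprocess.run(["cloudclient.sh", *command.split()], check=True)


run_cli("<content-transfer-command> --source dev --target test")   # placeholder command
run_cli("<run-tests-command> --environment test")                  # placeholder command
```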
How to Get Started
The management pack is free for vRealize Automation customers but requires a vRealize Code Stream license. Download an evaluation of vRealize Code Stream today and take the first step towards speeding up your time to production for vRealize Automation Blueprints!
Up Next
Look out for more blog posts from me and other members of the VMware Customer Success Engineering team, where we will be digging into some technical details of the IT DevOps management pack, and how to configure it for common use cases.