Best practices for code management

We introduced the concepts behind managing the SDDC as code in the first blog of this series, and now we want to take this concept down a few layers. In our first blog, we introduced the following basics:

  • Model your infrastructure as code.
  • Centrally manage configuration data.
  • Develop, test, then deploy.

This is a simple description of an SDDC software model, but the details of each of these elements can be overwhelming as you begin your adventure. In this blog, we break these concepts down further; in subsequent blogs, we will demonstrate tactically how VMware tools are making this movement to SDDC as code possible.

Tips for modeling your infrastructure as code

One key to getting started is to think of your infrastructure model as code, described by discrete parameters that uniquely define each data center infrastructure element. How many ports are needed? What range of computing power is needed? How much memory is needed to run the application? How much storage? How much bandwidth will the application use? Does it need encryption?

To manage the SDDC as software, you need to:

  • Parameterize the variables for each element.
  • Create reusable storage, compute and networking elements.
  • Validate each configuration as a unit (i.e. unit test).
  • Combine your compute, networking and storage configurations into the broader application service models.
  • Present the specific parameters to the end user as they create a service request.
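The steps above can be sketched in a few lines of code. This is a minimal illustration, not any particular VMware API: the class names, parameters, and validation rules are all hypothetical, chosen to show how parameterized, reusable elements combine into a service model and get validated as units.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComputeElement:
    """A reusable compute building block, defined entirely by parameters."""
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int
    encrypted: bool = False

    def validate(self) -> None:
        """Unit-test-style checks that this configuration is deployable."""
        if self.vcpus < 1:
            raise ValueError(f"{self.name}: vcpus must be >= 1")
        if self.memory_gb < 1:
            raise ValueError(f"{self.name}: memory_gb must be >= 1")
        if self.storage_gb < 0:
            raise ValueError(f"{self.name}: storage_gb cannot be negative")

@dataclass(frozen=True)
class ServiceModel:
    """Combines validated elements into a broader application service model."""
    name: str
    elements: tuple

    def validate(self) -> None:
        for element in self.elements:
            element.validate()

# Parameters like these are what you would surface to the end user
# on a service request form.
web = ComputeElement(name="web-tier", vcpus=2, memory_gb=8,
                     storage_gb=100, encrypted=True)
db = ComputeElement(name="db-tier", vcpus=4, memory_gb=32, storage_gb=500)
service = ServiceModel(name="three-tier-app", elements=(web, db))
service.validate()  # raises ValueError if any element is misconfigured
```

Because each element is just data, the same definitions can be versioned, diffed, and reused across service models.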

Remember, when infrastructure is defined by software, it becomes possible to centrally manage all of the corresponding data center configuration information. Although you might end up with tens or hundreds of variables, the beauty is that with all of the data described parametrically, the SDDC as a whole can now function like any software application. You can snapshot and version each iteration of a change, and trace every change back to a specific user, date, and time. However, this requires that you employ state-of-the-art code management practices, just like any software development organization.
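To make the snapshot-and-version idea concrete, here is a minimal sketch of a change record. It is illustrative only, not a real configuration management system: `commit` and `history` are hypothetical names, showing how each configuration change can be captured with its author and timestamp so it can be traced or rolled back later.

```python
import datetime

# Hypothetical in-memory change log: every configuration change is
# snapshotted with who made it and when.
history = []

def commit(config: dict, user: str) -> None:
    history.append({
        "version": len(history) + 1,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "config": dict(config),  # store a copy, not a live reference
    })

commit({"memory_gb": 8}, user="alice")
commit({"memory_gb": 16}, user="bob")
# Every earlier version remains available for audit or rollback.
```

A real system would persist this in a version control repository or a configuration management database, but the principle is the same: the full history of the infrastructure is queryable data.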

Here are three of the most basic code management processes to employ:

  • Configuration Management
  • Development and Testing
  • Continuous Integration and Deployment

Managing Configuration Data

A core requirement for any configuration automation solution to function properly is that you maintain a central record of the configuration of everything that gets deployed. With a functioning configuration management solution, you can reduce the proliferation of snowflake systems during the validation and deployment process. This is achieved by comparing a proposed system configuration design to an approved (i.e. tested) reference architecture. Differences, or drift, between the proposed design and the reference design can be captured and subsequently approved or disallowed, and automation can manage the entire process. Once you have a record of a system design, changes over time can be tracked and reconciled against the master copy, ensuring that you always have an authoritative view of the infrastructure.
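Drift detection itself reduces to comparing two parameter sets. The sketch below is a simplified stand-in for what a real configuration management tool does, assuming configurations are flat key-value dictionaries; `detect_drift` is a hypothetical helper, not a product API.

```python
def detect_drift(reference: dict, proposed: dict) -> dict:
    """Compare a proposed configuration against the approved reference
    architecture and report every parameter that differs (drift)."""
    drift = {}
    for key in set(reference) | set(proposed):
        ref_val = reference.get(key, "<missing>")
        prop_val = proposed.get(key, "<missing>")
        if ref_val != prop_val:
            drift[key] = {"reference": ref_val, "proposed": prop_val}
    return drift

reference = {"vcpus": 2, "memory_gb": 8, "encrypted": True}
proposed  = {"vcpus": 2, "memory_gb": 16, "encrypted": True}

drift = detect_drift(reference, proposed)
# drift == {"memory_gb": {"reference": 8, "proposed": 16}}
```

Each reported difference can then be routed for approval or automatically disallowed, which is how automation keeps snowflake systems out of the environment.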

This data, together with virtual infrastructure described as code, makes it possible to test future changes against every configuration in your infrastructure. From this you can make informed choices about the impact of deploying any pending change, and catch problems before they ever reach production.

Development and Testing of your Code

In large software development teams, different individuals work independently on different elements of the code and then bring those elements together to build the larger application. Team members “check out” code from a central library or repository while they are working on changes, which ensures that no one else makes a competing change to that code in the meantime. The same should be true of your organization as you adopt software development best practices. Having every element versioned and tracked in a code repository will reduce errors later in the integration and testing phase. This is one more advantage of moving to an SDDC.

Continuous Integration, what does it really mean?

Admittedly, continuous integration (CI) and continuous deployment (CD) are not basic processes. Their promise is that elements of a service model can be updated as needed and automatically rolled out to every implementation that requires the change. Automation takes care of all of the testing, rebuilding, and redeployment of the changes. If anything fails along the way, the errors are logged and can be traced to help resolve the issue.

Test automation ensures that everything impacted by a change is revalidated before it is pushed to production. Using the configuration information, every impacted infrastructure model can be tested safely in a test environment without affecting end users or production systems. When implemented thoroughly, you can be assured that anything that might fail will be caught before going into production. This is also where the additional effort pays off: test scripts and use cases have to be set up (and themselves tested) before you can flip the switch on any automated test process, but the time invested is repaid in knowing that the tests will always run and catch any errors in the future.
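The validate-then-promote gate at the heart of such a pipeline can be sketched in a few lines. This is a deliberately minimal illustration, not a real CI system: `run_pipeline` and `validate` are hypothetical names, and a real pipeline would run in a dedicated test environment with far richer checks.

```python
def run_pipeline(models, validate) -> bool:
    """Minimal CI gate: validate every impacted model and promote the
    change only if all of them pass. Failures are logged with enough
    context to trace the issue."""
    failures = []
    for model in models:
        try:
            validate(model)
        except Exception as exc:
            failures.append((model["name"], str(exc)))  # record for tracing
    if failures:
        for name, reason in failures:
            print(f"FAILED {name}: {reason}")
        return False  # block deployment
    return True  # safe to promote to production

def validate(model):
    """Example check; real pipelines would revalidate every impacted parameter."""
    if model["memory_gb"] < 1:
        raise ValueError("memory_gb must be >= 1")

models = [{"name": "web-tier", "memory_gb": 8},
          {"name": "db-tier", "memory_gb": 0}]
run_pipeline(models, validate)  # db-tier fails, so the change is blocked
```

Because the gate runs on every change, a single bad parameter blocks the rollout everywhere instead of quietly reaching one production system.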

The benefits

At scale, automated integration and deployment provide many benefits:

  • Save hundreds of hours of time
  • Ensure higher quality
  • Reduce troubleshooting time
  • Decrease deployment times

The best news is that when the data center is managed as code, it becomes possible to borrow the same best practices that application development teams have evolved over the last two decades.

Other Blogs In This Series

  1. Managing the SDDC as Code (Intro)
  2. Infrastructure as Code – The Developer’s Point of View

