We are very excited to introduce the initial release of the Terraform Provider for VMware Cloud on AWS. This provider is the result of a lot of feedback from developers and automation specialists plus some amazing collaboration with our friends at HashiCorp! We have been so excited about the potential of this provider that William Lam and I couldn’t hold it in and gave a technical preview of it during VMworld in our Advanced Automation Techniques session.
Just to level set, Terraform is an infrastructure provisioning tool, from HashiCorp, which has become synonymous with “Infrastructure as Code.” This tool allows us to define the desired state of our infrastructure by way of text-based configuration files. From that point, we can manage the entire lifecycle of our infrastructure by modifying those files and running a couple commands.
Terraform Provider for VMware Cloud on AWS
There are a couple items we should cover before we dive into provisioning our SDDC. This is an initial release of the Terraform Provider for VMware Cloud on AWS. That means, while we’re going through the process of adding it to the Terraform Registry, we have made the provider available in a repository within VMware’s GitHub organization. The provider gives us the ability to perform the main tasks of managing an SDDC’s lifecycle. These are the standard CRUD actions: create, read, update, and delete.
Let’s check out the Terraform Provider for VMware Cloud on AWS in action!
Setting Up Our Environment
HashiCorp Terraform can be run in two ways, either locally or through their hosted offering, Terraform Cloud. For the following examples, I’ll be using a macOS based system with the local offering of Terraform. However, I should note that only the first couple steps will be macOS focused. Once we get to the point of actually using Terraform, the process should be identical regardless of which operating system (OS) you’re using.
In order to get this provider up and running, there are a couple things we need to have installed and available through our local OS. We need to have Go, specifically version 1.13, and Terraform version 0.12. It also helps to have Git installed locally.
On macOS, we can use the Homebrew package manager to install these prerequisites in just two commands. For other OSes, you can use whichever installation method you prefer. This code would look like:
```shell
brew install go
brew install terraform
```
Once those are installed, we’re ready to clone the provider’s repository locally. This provider happens to use the beta release of the vSphere Automation SDK for Go. To simplify resolving that SDK’s dependencies, we’re going to clone the provider’s repo inside the location specified by our GOPATH variable.
We can clone the repo locally and build out our provider with the following code:
```shell
mkdir -p $GOPATH/src/github.com/provider/
cd $GOPATH/src/github.com/provider/
git clone https://github.com/vmware/terraform-provider-vmc.git
cd $GOPATH/src/github.com/provider/terraform-provider-vmc
go get
go build -o terraform-provider-vmc
```
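The steps above assume the GOPATH variable is already exported in your shell. If it isn’t, Go falls back to a default of $HOME/go, which a small sketch like this can account for before cloning (the `provider` directory name simply follows the layout used above):

```shell
# Resolve GOPATH, falling back to Go's default of $HOME/go when it is unset.
GOPATH="${GOPATH:-$HOME/go}"

# Build the clone destination used in the steps above.
dest="$GOPATH/src/github.com/provider"
echo "Cloning into: $dest"
```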
We have one last prerequisite step to perform. In order to have Terraform recognize the provider we just built, we have to move it to the appropriate directory. On our macOS system, this is the following location: $HOME/.terraform.d/plugins/darwin_amd64
Of note, this directory tree didn’t already exist on my system so I had to create it before moving it over. We can create the directory tree and move the provider over with the following code:
```shell
mkdir -p $HOME/.terraform.d/plugins/darwin_amd64
mv terraform-provider-vmc $HOME/.terraform.d/plugins/darwin_amd64
```
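The darwin_amd64 folder name is the OS/architecture pair Terraform uses for locally installed plugins, so on a Linux system it would be linux_amd64 instead. A quick sketch for deriving the right directory (assuming a 64-bit x86 host):

```shell
# Terraform names the local plugin folder after the host OS and architecture.
os="$(uname -s | tr '[:upper:]' '[:lower:]')"   # e.g. darwin or linux
arch="amd64"                                    # assumption: 64-bit x86 host
plugin_dir="$HOME/.terraform.d/plugins/${os}_${arch}"
echo "$plugin_dir"
```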
Terraform File Walkthrough
A nice bonus of having the GitHub repo available locally is that it includes an “examples” folder, which gives us a preconfigured set of Terraform configuration files to start with. There are two main files we’ll be using with Terraform.
The first file we’ll be using is main.tf. This is the Terraform file that will be used to configure our SDDC. If you open it in a text editor of your choice, you’ll see several blocks of text. The top block of three lines establishes our provider configuration to use the VMC provider. This is the provider we moved in the last step. The next couple of blocks are known as data sources. These blocks obtain information from other areas of the system or through some other programmatic means. Take the second data block as an example: it takes our Organization ID and our AWS account number as input and allows us to reference the matching account later in our configuration file. Lastly, we have the resource block. This is where we declare what our SDDC should look like. We can see some common SDDC parameters like name, number of hosts, and so forth.
Here’s an example of what the main.tf file looks like:
```hcl
provider "vmc" {
  refresh_token = var.api_token
}

data "vmc_org" "my_org" {
  id = var.org_id
}

data "vmc_connected_accounts" "my_accounts" {
  org_id         = data.vmc_org.my_org.id
  account_number = var.aws_account_number
}

data "vmc_customer_subnets" "my_subnets" {
  org_id               = data.vmc_org.my_org.id
  connected_account_id = data.vmc_connected_accounts.my_accounts.ids[0]
  region               = var.sddc_region
}

resource "vmc_sddc" "sddc_1" {
  org_id = data.vmc_org.my_org.id

  sddc_name           = var.sddc_name
  vpc_cidr            = var.vpc_cidr
  num_host            = 1
  provider_type       = "AWS"
  region              = data.vmc_customer_subnets.my_subnets.region
  vxlan_subnet        = var.vxlan_subnet
  delay_account_link  = false
  skip_creating_vxlan = false
  deployment_type     = "SingleAZ"

  account_link_sddc_config {
    customer_subnet_ids  = [data.vmc_customer_subnets.my_subnets.ids[0]]
    connected_account_id = data.vmc_connected_accounts.my_accounts.ids[0]
  }
}
```
The second file we’ll be using is variables.tf. This is a standard variables file, where we can define all the information we’ll need in order to create our SDDC. Speaking of which, there are a number of items we will need in order to create our SDDC. These include:
- Refresh Token
- Organization ID
- AWS Account Number
- Desired SDDC Name
- Desired SDDC Region
- Desired AWS based VPC CIDR to use
- Desired VXLAN subnet CIDR to be used by the compute gateway
A majority of these items can be found through the API Explorer, or by way of a language or automation tool of your choice.
Here’s an example of what my variables.tf file looks like:
```hcl
variable "api_token" {
  description = "API token used to authenticate when calling the VMware Cloud Services API."
  default     = "insertYourRefreshTokenHere"
}

variable "org_id" {
  description = "Organization Identifier."
  default     = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
}

variable "aws_account_number" {
  description = "The AWS account number."
  default     = "xxxxxxxxxxxx"
}

variable "sddc_name" {
  description = "Name of SDDC."
  default     = "Terraform-SDDC"
}

variable "sddc_region" {
  description = "The AWS region."
  default     = "US_WEST_2"
}

variable "vpc_cidr" {
  description = "AWS VPC IP range. Only prefix of 16 or 20 is currently supported."
  default     = "172.31.48.0/20"
}

variable "vxlan_subnet" {
  description = "VXLAN IP subnet in CIDR for compute gateway."
  default     = "192.168.1.0/24"
}
```
After populating those items in the variables.tf file, we’re ready to provision an SDDC!
Provisioning an SDDC
At this point we have our provider built and located in the proper directory, we have our Terraform files updated, and we are ready to start letting Terraform do all the hard work for us!
We will start by changing our terminal session over to the examples folder, which contains the files we updated. Then we’ll want to initialize Terraform within this folder. This process allows Terraform to perform some pre-checks against our files and download any dependencies. We can do this with the following command:
```shell
terraform init
```
The next step is to have Terraform create the execution plan for our configuration files. This plan describes the actions needed to bring our infrastructure to the desired state. It’s recommended that you save the plan to a file, which guarantees the exact planned actions are the ones applied and helps as your configuration files get larger and more complex. However, this isn’t a requirement. We can create and save this plan with the following command:
```shell
terraform plan -out TFSDDC.tfplan
```
The output from this command tells us what Terraform will be required to do in order to bring the infrastructure to the desired state. In our example, we can see that there’s a “create” action which will take place. This will create our “sddc_1” resource and populate the properties listed as “known after apply” for our SDDC.
Our final step, assuming everything listed in the plan output looks correct, will be to instruct Terraform to create our SDDC. We can do that with the following command:
```shell
terraform apply TFSDDC.tfplan
```
Once the process has completed, you should be greeted with a brand new SDDC in the Cloud Console!
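If you’d like Terraform to print details about the new SDDC once the apply finishes, an output block can be added alongside the configuration. A minimal sketch, with the caveat that the exported attribute names are assumptions about the provider’s resource schema and may differ in the initial release:

```hcl
# outputs.tf -- attribute names below are illustrative assumptions
output "sddc_id" {
  value = vmc_sddc.sddc_1.id
}

output "sddc_name" {
  value = vmc_sddc.sddc_1.sddc_name
}
```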
Scaling Up an SDDC
We created an SDDC with a single host in the last section. This is generally where I start off all of my SDDCs. However, there are some occasions where my needs for an SDDC grow and I need to acquire some new hosts. VMware Cloud on AWS makes this process extremely easy. We can even continue using Terraform to perform the task of scaling up our SDDC.
If we open up the main.tf file we used in the prior section, change the num_host property to 3, and save the file, we’re all set to return to our terminal session and have Terraform perform the required tasks for us.
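The edit itself is a one-line change inside the existing resource block; everything else stays as-is:

```hcl
resource "vmc_sddc" "sddc_1" {
  # ...other settings unchanged...
  num_host = 3   # scaled up from 1
}
```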
Just as in the last section, we run the same commands to create our updated plan and then apply that plan to provision the additional hosts. This can be performed with the following code:
```shell
terraform plan -out TFSDDC.tfplan
terraform apply TFSDDC.tfplan
```
In the plan output, we can see that our sddc_1 resource shows an action of “update in-place,” with the num_host property being updated from 1 to 3. The apply command then performs the required tasks to add those additional hosts.
Once the process has completed, we should see our SDDC now has 3 hosts!
Removing an SDDC
We are now to the point where we no longer need our SDDC. Terraform makes this task incredibly easy. With our terminal session back in the directory containing our configuration files, we only have to run one command:
```shell
terraform destroy
```
Once the process has completed, we can check our Cloud Console and see that our SDDC has been removed.
Summary
Today, we introduced the Terraform Provider for VMware Cloud on AWS. VMware Cloud on AWS is a fantastic service which allows us to create software-defined datacenters (SDDCs) within select AWS regions. This new Terraform provider allows us to manage SDDCs in a more modern, Infrastructure-as-Code manner, making our deployments faster and more reliable while also documenting our provisioned infrastructure. The provider’s initial release is currently available within VMware’s GitHub organization; look for it in the Terraform Registry at some point in the future.
Let us know in the comments how you’re using the Terraform Provider for VMware Cloud on AWS in your environment!