DevSecOps: How to Automate Continuous Cloud Security with Jenkins and Terraform

One of the most effective ways an organization can help prevent security violations from occurring is by integrating security checks and best practices directly into their continuous integration (CI) and continuous delivery (CD) pipelines, a practice often associated with “DevSecOps” or “shifting left.” There are plenty of tools out there that can help with this process, but it can be confusing to navigate which to use, in what context, and how they work with your existing systems.

In this article, I’ll show you how you can use Jenkins and Terraform to automate cloud security best practices into your delivery pipelines. We’ll start by deploying AWS infrastructure with Terraform, and then we’ll check whether any of those resources violate policy constraints. If violations are found, we’ll either redeploy the infrastructure to fix them or terminate the infrastructure entirely. We’ll build four Pipelines:

  • Pipeline 1: How to deploy using Terraform (with S3 as Terraform backend)
  • Pipeline 2: Check for violations (validate against policy)
  • Pipeline 3: Redeploy using Terraform (trigger if a policy was violated)
  • Pipeline 4: Terminate/destroy infrastructure using Terraform

Brief introduction to Jenkins and Terraform

Jenkins and Terraform are both tools that help accelerate and simplify the deployment process. More specifically, Jenkins is an open-source automation server that helps automate the parts of the software development process related to building, testing, and deploying. With Jenkins, you can create Pipelines for continuous integration and continuous delivery (CI/CD): each Pipeline is described in the form of code and defines the steps/stages and instructions needed for each part of the build process.
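
For context, here’s a minimal sketch of a declarative Jenkinsfile (not part of the demo repository), just to show the shape of a Pipeline:

pipeline {
    agent any
    stages {
        // A single stage with a single shell step
        stage('Build') {
            steps {
                sh 'echo "Building..."'
            }
        }
    }
}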

Closely related, Terraform is an open-source infrastructure-as-code tool that enables developers to programmatically provision the resources a workload needs to run. If you’re familiar with AWS, it’s similar to CloudFormation, which also automates infrastructure provisioning, but CloudFormation can only be used with AWS. Terraform works with other cloud platforms as well, such as Azure and GCP.

Prerequisites

Before we can get started with our first Pipeline, we need to make sure we have a few plugins installed. We won’t be using the Terraform Jenkins Plugin because it isn’t kept up to date and doesn’t offer configuration parameters for using Terraform with backends like S3. We’ll use a Jenkinsfile instead.

Here is a list of Jenkins plugins that need to be installed before starting the configuration. At a minimum, the pipeline scripts below invoke these directly:

  • Pipeline
  • Git
  • Credentials Binding
  • Copy Artifact
  • Slack Notification

Once these are installed, we’re ready to get started configuring our Pipelines! 

Pipeline 1: Deploy Using Terraform

There are four primary steps in our first Pipeline: preparation, installing dependencies, deployment, and post actions.

[Image: Jenkins pipeline to deploy using Terraform]

Stage 1: Preparation

The first stage involves sending a message (via Slack) that notifies the user that the Pipeline has been triggered. In the same stage, we also clone the repository that contains the Terraform files.

stage('Preparation') {
    steps {
        slackSend color: "good", message: "Status: DEPLOYING CLOUD INFRA | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
        git 'https://github.com/ishrivatsa/demo-secure-state.git'
    }
}

Stage 2: Install dependencies

The second stage installs Terraform itself, along with the utilities (wget, zip, python-pip) that later stages depend on.

stage('Install TF Dependencies') {
    steps {
        sh "sudo apt install wget zip python-pip -y"
        sh "curl -o terraform.zip https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip"
        sh "unzip terraform.zip"
        sh "sudo mv terraform /usr/bin"
        sh "rm -rf terraform.zip"
    }
}

Stage 3: Deployment

The third stage is when the Terraform code is deployed (the terraform apply command).

stage('Apply') {
    environment {
        TF_VAR_option_5_aws_ssh_key_name = "adminKey"
        TF_VAR_option_6_aws_ssh_key_name = "adminKey"
        TF_VAR_option_1_aws_access_key = credentials('ACCESS_KEY_ID')
        TF_VAR_option_2_aws_secret_key = credentials('SECRET_KEY')
        AWS_ACCESS_KEY_ID = credentials('ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('SECRET_KEY')
        AWS_DEFAULT_REGION = "us-west-1"
    }
    steps {
        sh "cd fitcycle_terraform/ && terraform init --backend-config=\"bucket=BUCKET_NAME\" --backend-config=\"key=terraform.tfstate\" --backend-config=\"region=us-east-1\" -lock=false && terraform apply --input=false --var-file=example_vars_files/us_west_1_mysql.tfvars --auto-approve"
        sh "cd fitcycle_terraform && terraform output --json > Terraform_Output.json"
    }
}

The variables that need to be passed to the Terraform files can be statically defined in a “.tfvars” file, set as environment variables, or passed with the -var switch on the terraform apply command.
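
For illustration, here’s a hedged sketch of all three options in one stage; the variable names mirror the demo above, and the values are placeholders:

stage('Apply') {
    environment {
        // Option 1: TF_VAR_-prefixed environment variables are read by Terraform automatically
        TF_VAR_option_5_aws_ssh_key_name = "adminKey"
    }
    steps {
        // Option 2: a statically defined .tfvars file passed with --var-file
        sh "terraform apply --var-file=example_vars_files/us_west_1_mysql.tfvars --auto-approve"
        // Option 3: individual -var switches on the apply command (commented out to avoid a double apply)
        // sh "terraform apply -var='option_6_aws_ssh_key_name=adminKey' --auto-approve"
    }
}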

Use the Jenkins credentials plugin to set the Access and Secret Key.

[Image: Setting AWS credentials with the Jenkins credentials plugin]

Notice that in the terraform init command, I’m passing a backend configuration. The backend configuration instructs Terraform to store the “.tfstate” file at another location (e.g., an S3 bucket, Consul, etc.), which acts as the source of truth. It also enables teams to collaborate on the same infrastructure.

Another advantage of this method is that it allows us to pass/copy “.tfstate” to another Pipeline that can use it to either modify existing infrastructure or terminate it entirely. 
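
As a sketch, a downstream Pipeline can reattach to the same state by initializing against the same bucket and key and then pulling the state; this mirrors what Pipeline 4 does below:

steps {
    // Point terraform init at the same S3 bucket/key the deploy Pipeline used...
    sh "terraform init --backend-config=\"bucket=BUCKET_NAME\" --backend-config=\"key=terraform.tfstate\" --backend-config=\"region=us-east-1\" -lock=false"
    // ...then pull the shared .tfstate before modifying or terminating the infrastructure
    sh "terraform state pull"
}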

In this stage, we also store the details about the resources that were deployed by Terraform, such as the Object ID, Name, etc., in a .json file: Terraform_Output.json. Use the artifact plugin to store this file as an artifact, as shown below.
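
In this pipeline, that is a single line in the post section (you’ll see it in the full script further down):

// Fingerprinting lets downstream Pipelines trace which build produced the file
archiveArtifacts artifacts: 'fitcycle_terraform/Terraform_Output.json', fingerprint: true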

Post actions

The final stage of our first pipeline is to send a status update via a Slack notification. The final pipeline script would look like this: 

pipeline {
    agent any
    stages {
        stage('Preparation') {
            steps {
                slackSend color: "good", message: "Status: DEPLOYING CLOUD INFRA | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
                git 'https://github.com/ishrivatsa/demo-secure-state.git'
            }
        }
        stage('Install TF Dependencies') {
            steps {
                sh "sudo apt install wget zip python-pip -y"
                sh "curl -o terraform.zip https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip"
                sh "unzip terraform.zip"
                sh "sudo mv terraform /usr/bin"
                sh "rm -rf terraform.zip"
            }
        }
        stage('Apply') {
            environment {
                TF_VAR_option_5_aws_ssh_key_name = "adminKey"
                TF_VAR_option_6_aws_ssh_key_name = "adminKey"
                AWS_ACCESS_KEY_ID = credentials('ACCESS_KEY_ID')
                AWS_SECRET_ACCESS_KEY = credentials('SECRET_KEY')
            }
            steps {
                sh "cd fitcycle_terraform/ && terraform init --backend-config=\"bucket=BUCKET_NAME\" --backend-config=\"key=terraform.tfstate\" --backend-config=\"region=us-east-1\" -lock=false && terraform apply --input=false --var-file=example_vars_files/us_west_1_mysql.tfvars --auto-approve"
                sh "cd fitcycle_terraform && terraform output --json > Terraform_Output.json"
            }
        }
    }
    post {
        success {
            slackSend color: "good", message: "Status: PIPELINE ${currentBuild.result} | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
            archiveArtifacts artifacts: 'fitcycle_terraform/Terraform_Output.json', fingerprint: true
            archiveArtifacts artifacts: 'violations_using_api.py', fingerprint: true
        }
        failure {
            slackSend color: "danger", message: "Status: PIPELINE ${currentBuild.result} | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
        }
        aborted {
            slackSend color: "warning", message: "Status: PIPELINE ${currentBuild.result} | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
        }
    }
}

After the execution is completed successfully, you should expect a similar state to the one shown below.

[Image: Successful completion of the Jenkins deployment pipeline]

This Terraform template deploys 10 instances, two of which are private and have the “EC2Admin” IAM profile attached to them.

[Image: Terraform template deploying AWS instances]

Pipeline 2: Check for violations

The next Jenkins Pipeline will check for any violations that the security and compliance tool raises against the resources deployed in the first Pipeline (it looks for the Terraform_Output.json artifact from that Pipeline).

There are four primary steps in this Pipeline: Install dependencies, copy artifacts, check for violations, and verify.

[Image: Jenkins pipeline to check for security violations]

Stage 1: Install dependencies

The first stage involves installing the CLI/SDKs for your security and compliance tool of choice. In our example, we use CloudHealth Secure State, an intelligent cloud security and compliance monitoring platform with real-time detection and remediation capabilities so organizations can proactively reduce risk and protect resources at cloud speed. 

Stage 2: Copy artifacts

The second stage copies artifacts from the deployment Pipeline: the Terraform output file, as well as any scripts you use to check for violations.

Stage 3: Check for violations

In the third stage, I use a custom script, “violations_using_api.py,” a simple Python script that checks all resources deployed by Terraform for violations against the data from CloudHealth Secure State. You may have to modify this script or write your own for additional policies.

You can see the complete script for this pipeline below.

Stage 4: Verify

If any violating objects are found, the output is set to True; otherwise it is set to False. This flag can be used to send a Slack notification and also to automatically trigger the redeployment Pipeline.

pipeline {
    agent any
    stages {
        stage('Copy Artifacts') {
            steps {
                step([$class: 'CopyArtifact',
                    projectName: 'Continuous Security_Deploy',
                    filter: 'fitcycle_terraform/Terraform_Output.json'])

                step([$class: 'CopyArtifact',
                    projectName: 'Continuous Security_Deploy',
                    filter: 'violations_using_api.py'])
            }
        }
        stage('Check for Violations') {
            environment {
                REFRESH_TOKEN = credentials('SS_CSP_REFRESH_TOKEN')
            }
            steps {
                slackSend color: "good", message: "Status: CHECKING FOR VIOLATIONS | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
                sh "pip install requests"
                sh "mv fitcycle_terraform/Terraform_Output.json ."
                sh "python violations_using_api.py"
            }
        }
    }
}
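
The script above ends at the violation check. A hedged sketch of a Verify stage that consumes the flag could look like this; the downstream job name and the script’s True/False output format are assumptions:

stage('Verify') {
    steps {
        script {
            // violations_using_api.py is assumed to print "True" when violations exist
            def violated = sh(script: "python violations_using_api.py", returnStdout: true).trim()
            if (violated == "True") {
                slackSend color: "danger", message: "Status: VIOLATIONS FOUND | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
                // Trigger the redeployment Pipeline (job name is hypothetical)
                build job: 'Continuous Security_Redeploy', wait: false
            } else {
                slackSend color: "good", message: "Status: NO VIOLATIONS | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
            }
        }
    }
}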

Both CloudHealth Secure State and Jenkins will send a Slack notification with details about the violations and the Pipeline status. The connected graph in the image below shows the resources involved in the violation within the Secure State platform. The highlighted private instances have an IAM profile attached that in turn has an Administrator Access policy (an AWS policy) attached to it.

[Image: Shared SSH key in AWS IAM profile]

The highlighted instance is publicly accessible and shares the same SSH key as the two instances with the admin policy attached.

[Image: SSH key security violation shown in context within CloudHealth Secure State]

Since these instances share an SSH key, the best corrective action is to redeploy them with new, distinct SSH key pairs.

Pipeline 3: Redeployment using Terraform

Our third Pipeline is triggered only if the output of the previous Pipeline is “True.” This Pipeline is similar to the deployment Pipeline, except for one change: the SSH key pair name(s) passed as variables to Terraform are distinct, in order to fix the violation detected in the previous step.
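
For example, the Apply stage’s environment block from Pipeline 1 would change to something like this (the new key names are placeholders):

environment {
    // Distinct key pairs so the redeployed instances no longer share an SSH key
    TF_VAR_option_5_aws_ssh_key_name = "adminKeyWeb"
    TF_VAR_option_6_aws_ssh_key_name = "adminKeyDb"
}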

[Image: Redeployment Jenkins pipeline demo]

It’s important to note that this process may need an approval step, where someone from your security or DevOps team evaluates and approves before any modifications are made to the production environment. 
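
In Jenkins, that can be as simple as an input step ahead of the Apply stage; here’s a sketch (the submitter value is a placeholder):

stage('Approval') {
    steps {
        // Pause the Pipeline until someone approves or aborts the redeployment
        input message: 'Violations detected. Redeploy with new SSH key pairs?', submitter: 'security-team'
    }
}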

You may add another step to this stage where the DevOps engineer can either provide updated key pair names or push a new “.tfvars” file to the repository, which can trigger the deployment. In this scenario, you may have to trigger the Pipeline manually.

If you’re using a security and compliance solution like CloudHealth Secure State, you can also write your script to trigger a redeployment Pipeline based on risk. Secure State provides “Risk Scores,” so you can prioritize security vulnerabilities and violations based on blast radius and quantified risk severity. If the risk score is high, taking automated action is usually the better option.

[Image: Successful redeployment Terraform pipeline with AWS]

Successful execution of this Pipeline terminates the instances with the incorrect (shared) SSH key and reinstates them with new keys.

Pipeline 4: Destroy infrastructure using Terraform

The final Pipeline will destroy/terminate the infrastructure by fetching the “.tfstate” file from S3 and then executing the terraform destroy command. The final Pipeline script would look like this:

pipeline {
    agent any
    stages {
        stage('Preparation') {
            steps {
                slackSend color: "good", message: "Status: TERMINATING CLOUD INFRA | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
                git 'https://github.com/ishrivatsa/demo-secure-state.git'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "sudo apt install wget zip python-pip -y"
                sh "curl -o terraform.zip https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip"
                sh "unzip terraform.zip"
                sh "sudo mv terraform /usr/bin"
                sh "rm -rf terraform.zip"
            }
        }
        stage('Destroy') {
            environment {
                TF_VAR_option_1_aws_access_key = credentials('ACCESS_KEY_ID')
                TF_VAR_option_2_aws_secret_key = credentials('SECRET_KEY')
                AWS_ACCESS_KEY_ID = credentials('ACCESS_KEY_ID')
                AWS_SECRET_ACCESS_KEY = credentials('SECRET_KEY')
            }
            steps {
                sh "cd fitcycle_terraform/ && terraform init --backend-config=\"bucket=secure-state-demo\" --backend-config=\"key=terraform.tfstate\" --backend-config=\"region=us-west-1\" -lock=false && terraform state pull && terraform destroy --var-file=example_vars_files/us_west_1_mysql.tfvars --auto-approve"
            }
        }
    }
    post {
        success {
            slackSend color: "good", message: "Status: ${currentBuild.result} | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
        }
        failure {
            slackSend color: "danger", message: "Status: ${currentBuild.result} | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
        }
        aborted {
            slackSend color: "warning", message: "Status: ${currentBuild.result} | Job: ${env.JOB_NAME} | Build number ${env.BUILD_NUMBER}"
        }
    }
}

Additional resources

In this article, I’ve shown you how you can “shift left” by integrating security checks into your deployment pipelines using Jenkins and Terraform. While these are great steps to take to ensure the security of your infrastructure, they should be part of a holistic cloud security and compliance practice.

To learn more about how to establish a complete cloud security practice, see our whitepaper: Building a Successful Cloud Infrastructure Security and Compliance Practice.