
Implementing DevSecOps in a Federal Agency with VMware Tanzu

Unifying three distinct teams—development, security, and operations—around a common approach to get application releases to production is challenging. This post explores how VMware Tanzu Labs partnered with a major branch of the United States Department of Defense (DoD) to build an automated DevSecOps process using VMware Tanzu products and services and several open source tools.

Mission: Protect DoD apps in the cloud

At the DoD, building applications in cloud environments has become the norm due to the numerous benefits it provides, namely self-service and speed. But the remotely accessible nature of cloud environments also means that if you can get to them, so can your adversaries (if they try hard enough). Indeed, there are more ways for attackers to gain access to customer environments and data than ever before. This is particularly concerning for the federal government, since vulnerable DoD applications can cost lives, whereas in the enterprise, losses are typically limited to revenue.

However, forgoing the multitude of benefits the cloud offers “because security” simply isn’t an option when it comes to building modern applications, which are increasingly cloud native from the start. An organization that avoids the cloud gives up the self-service and speed it provides, which means it may not be developing software capabilities as fast as its adversaries; that is itself a security concern. The question is no longer, “Should I use the public cloud?” but rather, “How should I use the public cloud with my private cloud?” This approach is referred to as hybrid cloud or multi-cloud.

All of which leads to the question: How can my organization build applications consistently and securely regardless of which cloud it’s on? That’s the question we set out to answer with our DoD customer, by taking the following three steps: 

  1. We worked with the customer’s security leaders to identify a minimum viable product (MVP) that had buy-in from the organization’s development, security, and operations (DevSecOps) teams. 

  2. We built the MVP, which consisted of an automation tool, two purposefully chosen Tanzu products, and a handful of security solutions. 

  3. We pushed an application release through the secure supply chain we built that the customer’s security team subsequently approved.

As a result, the customer now has a consistent, repeatable process for getting application releases to production that its security team has validated as “secure enough.” Only “secure enough,” because things can always be made more secure; deciding what “good enough” looks like is itself a critical part of the process.

Building a GitOps practice to govern delivery

The customer’s first challenge was to build a disciplined GitOps practice with a product mindset. Since the DevOps community doesn’t appear to have settled on a standard definition for GitOps, we’ll use the following: GitOps is the practice of using version control (Git) to govern your entire operational software delivery capability. Put another way, it means “automating all the things.” 
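To make that loop concrete, here is a minimal sketch, with hypothetical repository and file names: every change to the platform’s desired state goes through Git, and automation, not a human at a keyboard, applies whatever lands on the main branch.

    # Clone the config repo that holds the platform's desired state.
    git clone https://git.example.mil/platform/config.git && cd config

    # Change desired state in version control, never on the cluster directly.
    vim clusters/prod/ingress-policy.yaml

    # Review and merge through the normal pull request process...
    git checkout -b tighten-ingress-policy
    git commit -am "Restrict ingress to approved CIDR ranges"
    git push origin tighten-ingress-policy

    # ...after which the CI/CD system reconciles the cluster against the
    # repo, e.g., with something like: kubectl apply -k clusters/prod/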

To create its GitOps practice, the customer ops team building the DevSecOps automation partnered with a project manager (PM) and a few site reliability engineers from Tanzu Labs, which consults with organizations to help them build modern platforms and applications. The PM helped them organize and prioritize the flood of incoming feature requests and incorporate learning and metrics into everything they do. To help customer team members learn by doing, Tanzu Labs engineers paired with them for 8 hours a day to complete user stories.

Once the team felt confident in these processes, it moved on to building an automated continuous integration/continuous delivery (CI/CD) capability. Up until that point, the weight of the decision and the sheer number of options on the table had made it hard for team members to rally behind a specific automation tool. The lesson they learned here was to not get mired in “analysis paralysis” when trying to choose a single tool from among multiple options.

Here are some key questions you can ask that will help you decide which tool is right for your team:

  • Does your team already have experience with a specific automation tool? Using a familiar tool can vastly speed up the process of getting started. For example, if a senior engineer has used Concourse to automate tasks in the past and liked it, then it probably makes sense to go with Concourse (see the pipeline sketch after this list).

  • How big, active, and mature is the open source community around this tool? You don’t want to choose a tool that’s hot now but could be unsupported a few years down the road. Open source communities also create standards that increase interoperability with other industry tooling; Harbor’s pluggable container vulnerability scanning is a good example.

  • How important is enterprise support? As the saying goes, open source is free so long as your time is worth nothing. Supporting pure open source projects is time-consuming. And in some cases, it’s advisable to pay someone to ensure it’s ready for you to use out of the box.

  • How extensible is the tool? Choosing a tool that can be extended using languages and technology that your teams are already familiar with is vital to your ability to add new capabilities and maintain its usefulness as your needs evolve.
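If your team does land on Concourse, as in the example above, getting started can be as small as one pipeline file. Here is a minimal sketch, with hypothetical repository and task names, of a pipeline that redeploys platform configuration on every merged commit:

    # Write a minimal Concourse pipeline definition.
    cat > pipeline.yml <<'EOF'
    resources:
      - name: platform-config
        type: git
        source:
          uri: https://git.example.mil/platform/config.git  # hypothetical repo
          branch: main
    jobs:
      - name: apply-platform-change
        plan:
          - get: platform-config
            trigger: true                # new commits kick off the job
          - task: apply
            file: platform-config/ci/apply.yml
    EOF

    # Register the pipeline with a Concourse target named "ci" and unpause it.
    fly -t ci set-pipeline -p platform -c pipeline.yml
    fly -t ci unpause-pipeline -p platform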

Securing containers and backing services

Once the customer’s ops team was able to use its GitOps processes to regularly make changes to the underlying platform without breaking anything, it needed to create a CI/CD process for the organization’s application developers that addressed two core steps: How to turn the code those developers write into containers, and how to deploy the backing services that run alongside those containers. 

Those two steps are extremely important because they:

  • Have enormous security implications – For example, deploying an application container built from a base image with critical vulnerabilities, or via a Helm chart that does not implement security best practices, could have devastating consequences.

  • Require developer buy-in – Whether it’s Dockerfiles, buildpacks, or some other solution, developers will have to make changes to their existing development processes in order to properly interface with what the release pipelines expect.

  • Are hard to reverse later – Your developers will adapt their processes to work with the technologies in your CI/CD systems. For example, if you choose buildpacks for building application containers and Helm charts for deploying backing services, the devs will organize their Git repositories to support buildpack automation, use the pack CLI to test app changes locally (as shown below), and learn the Bitnami Helm chart standard so they can understand how their backing services are configured.
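To illustrate that last point, here is a sketch of the local buildpack workflow, with illustrative image and builder names, that lets a developer test the same kind of container the pipeline would produce:

    # Build the app in the current directory into an OCI image using
    # buildpacks; no Dockerfile required (names are illustrative).
    pack build registry.example.mil/apps/demo:dev \
      --builder paketobuildpacks/builder-jammy-base \
      --path .

    # Run the result locally before pushing the change through the pipeline.
    docker run --rm -p 8080:8080 registry.example.mil/apps/demo:dev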

There are two popular ways to turn code into containers: Dockerfiles and buildpacks. Both have their tradeoffs. VMware Tanzu Build Service, which our customer selected, favors the buildpack approach due to its consistency, modularity, and maturity in the cloud native application space. It can also be deployed on any Kubernetes distribution that aligns with the Cloud Native Computing Foundation’s (CNCF) standard, which accounts for the overwhelming majority of Kubernetes solutions that exist today. And it generates Open Container Initiative images, which can run on any container runtime. 

Tanzu Build Service conforms to DoD security standards by supporting Federal Information Processing Standards (FIPS) and those found in the Security Technical Implementation Guide (STIG). Out of the box, base images are standardized on Ubuntu Bionic, which incorporates upstream patches from Canonical. Additionally, Tanzu Build Service subscriptions include updates and patching for FIPS-enabled Ubuntu Bionic base images. Tanzu Build Service is built on top of the open source Cloud Native Buildpacks project, which has achieved CNCF incubating status.
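As a rough sketch of what this looks like in practice, the kp CLI that accompanies Tanzu Build Service lets you declare an image once (the names and URLs below are illustrative); from then on, new commits and new base image or buildpack patches both trigger automatic rebuilds:

    # Register an image with Tanzu Build Service (illustrative names).
    kp image create demo-app \
      --tag registry.example.mil/apps/demo \
      --git https://git.example.mil/apps/demo.git \
      --git-revision main

    # Follow the logs of the resulting build.
    kp build logs demo-app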

Deploying services that back your applications can be challenging if those services can’t make it through the security process to production, or if everyone is using different sets of blueprints. Our customer selected VMware Application Catalog, a collection of production-ready, open source software from the Bitnami library, to help. All container images distributed through VMware Application Catalog come with an associated anti-virus scan, a Common Vulnerabilities and Exposures (CVE) scan, an asset list, and Helm charts configured with security best practices. Together these features automate several of the steps needed for a container to be approved for production use in a classified environment. 

Federal customers can either bring their own base OS images or have VMware Application Catalog container images built using PhotonOS, with FIPS libraries, and hardened using our best practices guide. Each container image also comes with an associated Helm chart that serves as the service’s blueprint. VMware Application Catalog Helm charts are configured out of the box to deploy services using security best practices (e.g., FIPS) and the Bitnami standard, which is widely used across the industry.
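Deploying one of these services is a standard Helm workflow. Here is a minimal sketch using the public Bitnami repository (a VMware Application Catalog customer would point Helm at their private, hardened chart registry instead; release and namespace names are illustrative):

    # Add the chart repository and refresh the local index.
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update

    # Deploy PostgreSQL as a backing service; the chart's defaults follow
    # the Bitnami standard described above.
    helm install orders-db bitnami/postgresql \
      --namespace data --create-namespace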

Securing the automation pipeline

With secure container builds and backing service deployments solved, the ops team turned to the security team’s final concerns about the MVP. It began by securing application dependencies, running the OWASP Dependency-Check tool during all Maven/Gradle builds.
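For Maven builds, that can be as simple as invoking the Dependency-Check plugin as a pipeline step; the plugin coordinates below are real, while the failure threshold is an illustrative policy choice:

    # Scan the project's dependencies for known CVEs and fail the build
    # if any dependency scores CVSS 7 or higher.
    mvn org.owasp:dependency-check-maven:check -DfailBuildOnCVSS=7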

The next concern was code quality and maintainability, which the ops team tackled by working with the security team to establish SonarQube code quality scores the security team would be comfortable approving. The security team then requested that all vulnerability scans live in a single, easy-to-reach place, so the ops team leveraged Harbor, which it had already deployed, and integrated it with the Trivy scanner. It also enabled a Harbor feature that prevents developers from pulling images with high/critical vulnerabilities, which earned it even more brownie points with the security team.
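A sketch of what those two pipeline steps might look like, with illustrative server URLs, project keys, and image names:

    # Push analysis results to SonarQube and fail this step if the
    # project's quality gate is not met.
    sonar-scanner \
      -Dsonar.projectKey=demo-app \
      -Dsonar.host.url=https://sonarqube.example.mil \
      -Dsonar.qualitygate.wait=true

    # The same Trivy scanner Harbor uses can also run in the pipeline, so
    # developers see high/critical findings before an image is even pushed.
    trivy image --severity HIGH,CRITICAL --exit-code 1 \
      registry.example.mil/apps/demo:dev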

Finally, the security team requested a situational awareness capability that would enable its members to easily tell when an application that had already been deployed needed to be patched/updated. The ops team chose Contrast Security as the tool for this particular job. If a vulnerability is found using Contrast, the fix is often to push the same code through the CI/CD process again; patches are made using the latest updates from Tanzu Build Service and VMware Application Catalog.

By bringing together development, security, and operations using the right training and tools, this federal customer was able to build a secure supply chain for delivering apps that it can continue to iterate on and ultimately, improve.

Learn more about Tanzu Build Service, VMware Application Catalog, and Tanzu Application Platform, VMware’s full-stack platform for application modernization. And to learn more about what it means to take a DevSecOps approach to software development in the federal government, watch the video below.