This post was authored by Sebastiano Mariani.
As part of our research, we often find ourselves detonating new types of malicious executables to study their behavior. To perform meaningful analyses, these samples need to be run in a pseudo-realistic environment that elicits as many behaviors as possible. Such environments often require a non-trivial setup, which might involve installing a domain controller with various users, deploying a DNS server with various internal domains, and preparing a victim client running vulnerable software. In addition, it is necessary to set up a logging facility to collect the telemetry produced by the malware during the analysis.
To speed up this process, the malware analysis community developed Detection Lab, a virtual laboratory whose primary purpose is to let users quickly build a Windows domain that comes pre-loaded with security tooling and follows best practices for system logging configuration.
At VMware’s Threat Analysis Unit, our analysis environment is (unsurprisingly) heavily based on VMware products. Specifically, every time we create a new experiment, we define the desired networks and virtual machines using the infrastructure-as-code paradigm through vRealize Automation. After deployment, these networks and VMs are managed by NSX-T and vCenter, respectively.
Unfortunately, Detection Lab does not support our environment; therefore, we decided to extend it, both to make it compatible with our infrastructure and to obtain a fully automated deployment system.
This blog post describes how to set up Detection Lab using NSX-T and vSphere.
Detection Lab builds its virtual machine images using Packer, a tool for building identical machine images for multiple platforms from a single source configuration. Packer is based on the concept of builders: plugins responsible for creating VM images for specific backends.
Detection Lab uses the vmware-iso builder, which is not fully compatible with vCenter. For example, one of the problems we encountered was that the builder does not detect virtual switches created in vCenter; instead, it creates new ones with the same name directly on the ESXi host, resulting in no network connectivity at build time.
Luckily, Packer has another builder called vsphere-iso that is fully compatible with vCenter. Therefore, our first task was to modify the Packer configuration so that it would use the vsphere-iso builder instead of the vmware-iso one.
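To make the change concrete, here is a minimal sketch of what a vsphere-iso builder definition might look like in a Detection Lab-style Packer JSON template. All names, paths, and credentials are placeholders, not the exact values from our configuration:

```json
{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "vcenter.example.local",
      "username": "{{user `vcenter_username`}}",
      "password": "{{user `vcenter_password`}}",
      "insecure_connection": "true",
      "datacenter": "Datacenter",
      "cluster": "Cluster",
      "datastore": "datastore1",
      "vm_name": "detectionlab-win10",
      "guest_os_type": "windows9_64Guest",
      "iso_paths": ["[datastore1] ISOs/windows_10.iso"],
      "network_adapters": [
        { "network": "analysis-segment", "network_card": "vmxnet3" }
      ],
      "storage": [
        { "disk_size": 65536, "disk_thin_provisioned": true }
      ],
      "CPUs": 2,
      "RAM": 4096,
      "communicator": "winrm",
      "winrm_username": "vagrant",
      "winrm_password": "vagrant"
    }
  ]
}
```

Unlike vmware-iso, this builder talks to the vCenter API directly, so the `network` field resolves to port groups and segments that vCenter actually knows about.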
To launch Packer automatically, we integrated the whole process into vRealize Automation Code Stream, the CI/CD pipeline service of the vRealize suite. As the base for the CI/CD worker, we built a custom Docker image starting from the official Packer image. We then created a four-stage pipeline, shown in the following picture:
- Clone a git repository containing the Packer configuration ported to the vsphere-iso builder;
- Specialize various variables and secrets needed by the configuration file (e.g., vCenter credentials, network adapter name, etc.);
- Run Packer and build the image;
- Upload the image to our vCenter Content Library so it can be deployed by vRealize Automation.
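The four stages above can be sketched roughly as follows. This is an illustrative outline, not the exact Code Stream export format; repository URLs, file names, and the use of govc for the Content Library upload are assumptions for the sake of the example:

```yaml
# Illustrative sketch of the four-stage build pipeline.
stages:
  clone:
    tasks:
      - name: fetch-packer-config
        script: git clone https://github.com/example/detectionlab-vsphere.git
  configure:
    tasks:
      - name: inject-variables
        # vCenter credentials, network names, etc. come from Code Stream
        # variables/secrets and are written into a Packer var file.
        script: envsubst < variables.json.tpl > variables.json
  build:
    tasks:
      - name: packer-build
        script: packer build -var-file=variables.json windows_10.json
  publish:
    tasks:
      - name: upload-to-content-library
        # govc is one way to import the built template into a vCenter
        # Content Library (an assumption; any Content Library API works).
        script: govc library.import DetectionLab output/detectionlab-win10.ovf
```

Because the worker is a Docker image derived from the official Packer image, each stage runs with a known-good toolchain regardless of which Code Stream host picks up the job.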
Detection Lab deploys the infrastructure using Terraform, an open-source infrastructure-as-code software tool created by HashiCorp. At VMware TAU, however, we use vRealize Automation to deploy our infrastructure. Specifically, we use vRealize Automation Cloud Assembly and its templates to build and deploy custom network topologies on NSX.
Although Terraform provides a plugin that allows one to interact with vSphere and NSX environments, this approach has a major shortcoming: it is not possible to create NSX segments on-demand. This violates our functional requirement to be able to create an ephemeral infrastructure that can be easily deployed when the experiment starts, as well as easily deleted when the experiment ends.
When an experiment starts, vRealize Automation sends a request to the NSX-T Manager asking to create a new network segment and a new Tier-1 gateway. Moreover, if the requested segment needs to be able to reach the public internet, the appropriate NAT rules are created on the Tier-1 gateway. The process is summarized in the following picture:
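Under the hood, this corresponds to a handful of NSX-T Policy API calls issued on our behalf by vRealize Automation. The following is a hedged sketch (the endpoint paths are standard Policy API paths, but the IDs, addresses, and credentials are placeholders):

```shell
NSX=https://nsx-manager.example.local

# 1. Create a Tier-1 gateway for the experiment, uplinked to a Tier-0.
curl -k -u admin:password -X PUT "$NSX/policy/api/v1/infra/tier-1s/exp-42-t1" \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "exp-42-t1", "tier0_path": "/infra/tier-0s/t0-gw"}'

# 2. Create a segment attached to that Tier-1 gateway.
curl -k -u admin:password -X PUT "$NSX/policy/api/v1/infra/segments/exp-42-seg" \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "exp-42-seg",
       "connectivity_path": "/infra/tier-1s/exp-42-t1",
       "subnets": [{"gateway_address": "10.42.0.1/24"}]}'

# 3. If the segment needs internet access, add an SNAT rule on the Tier-1.
curl -k -u admin:password -X PUT \
  "$NSX/policy/api/v1/infra/tier-1s/exp-42-t1/nat/USER/nat-rules/exp-42-snat" \
  -H 'Content-Type: application/json' \
  -d '{"action": "SNAT",
       "source_network": "10.42.0.0/24",
       "translated_network": "203.0.113.10"}'
```

Deleting the experiment is the mirror image: removing the Tier-1 gateway and segment tears down the whole topology, which is what makes the infrastructure ephemeral.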
After creating the infrastructure, the last step is to provision the virtual machines. Detection Lab uses Ansible, an open-source tool for provisioning, configuration management, and application deployment.
We decided to integrate Ansible into our workflow as well by adding a dedicated host responsible for receiving provisioning requests from vRealize Automation and executing Ansible playbooks on the appropriate hosts.
The last problem we had to solve was to remove all the hardcoded values contained in the Detection Lab playbooks and make them parametric. These fixed values exist because Detection Lab assumes a fixed network configuration (i.e., IP, MAC, and default gateway addresses are known beforehand), an assumption that does not hold in our scenarios. To solve this problem, we first substituted all occurrences of hardcoded values with Ansible variables, and then used vRealize Automation to extract the needed information from the network topology and pass it as input variables to the Ansible playbooks.
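As an illustration of this change (the task and variable names here are our own examples, not necessarily those in the final playbooks), a task that hardcodes Detection Lab's default addressing becomes:

```yaml
# Before: the playbook assumes Detection Lab's fixed topology.
- name: Point the client at the domain controller for DNS
  win_dns_client:
    adapter_names: "Ethernet"
    ipv4_addresses: "192.168.38.102"

# After: the values are Ansible variables, filled in by vRealize
# Automation from the topology it just deployed.
- name: Point the client at the domain controller for DNS
  win_dns_client:
    adapter_names: "{{ adapter_name }}"
    ipv4_addresses: "{{ dc_ip }}"
```

At run time the variables can be supplied as extra vars, e.g. `ansible-playbook site.yml -e "dc_ip=10.42.0.102 adapter_name=Ethernet0"`, so the same playbook works for every experiment regardless of the addresses NSX-T handed out.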
The following diagram captures the whole flow explained in this blog post.
We have submitted a Pull Request to the original project that allows for deployments using vCenter (https://github.com/clong/DetectionLab/pull/810). This modification will allow vSphere customers to take full advantage of Detection Lab’s functionality.
Being able to flexibly and efficiently deploy sophisticated testing environments for malware analysis is of paramount importance for security analysts.
Detection Lab provides a pre-packaged, easy-to-deploy solution to address this need. We combined the Detection Lab approach with the powerful primitives provided by vSphere and NSX-T to make the deployment of multiple instances of complex testing environments easier and more flexible.