Welcome to part 3 of the Micro-Segmentation Defined – NSX Securing “Anywhere” blog series. This installment covers how to operationalize NSX Micro-Segmentation. Be sure to check out Part 1 on the definition of micro-segmentation and Part 2 on securing physical workloads with NSX.

This blog covers the following topics:

  1. Micro-segmentation design patterns
  2. Determining appropriate security groups and policies
  3. Deploying micro-segmentation
  4. Application lifecycle management with vRealize Automation and NSX
  5. Day 2 operations for micro-segmentation

Micro-segmentation design patterns

Micro-segmentation can be implemented based on various design patterns reflecting specific requirements.  The NSX Distributed Firewall (DFW) can be used to provide controlled communication between workloads independent of their network connectivity; these workloads can, for example, all connect to a single VLAN. Distributed logical switches and routers can be leveraged to provide isolation or segmentation between different environments or application tiers, regardless of the underlying physical network, along with many other benefits.  Furthermore, the NSX Edge Services Gateway (ESG) can provide additional functionality such as NAT or load balancing, and the NSX Service Insertion framework enables partner services such as L7 firewalling, agent-less anti-virus, or IPS/IDS to be applied to workloads that need additional security controls.

Figure 1: Leveraging the DFW to provide granular control within a single network segment.

Choosing an appropriate design pattern is an important decision when preparing to operationalize micro-segmentation.  The benefits of using overlay-based virtualized networking and the potential need for additional security controls should be considered to make the appropriate design choice.

Figure 2: Distributed Logical Routing, firewalling and partner service insertion

Determining appropriate Security Groups

Figure 3: Security Groups and Policies

While Security Policies determine how something should be secured, Security Groups determine what is secured.  Security groups can be defined based on many different types of criteria, including network constructs such as IP addresses, infrastructure constructs such as Logical Switches, or application constructs such as virtual machines. Virtual machines can be added to a security group statically or dynamically, for example based on the presence of a particular security tag.

Security tags are a way to label workloads. Labelling can be done manually by the administrator, for example to identify a particular workload as being part of a PCI environment.  Third-party NSX partner services such as anti-virus or vulnerability management can also tag a particular VM based on a certain condition, such as a vulnerability being found on the workload.

Figure 4: Example of Dynamic Security Group membership based on multiple criteria

A virtual machine can be a member of multiple security groups, which allows multiple levels of segmentation to be applied to all applications. One security group can specify whether an application is deployed in a production or a development environment, while another security group applied to the same workload determines whether it is connected to a web-tier, application-tier or DB-tier logical switch, and a third security group can specify the application or application instance the workload virtual machine is part of.  The security policies applied to each of these security groups are combined and applied to every workload that is a member of those groups.  When a new application is on-boarded, its workloads can simply be added to the appropriate security groups. Instead of this layered approach to security groups, it is also possible to create security groups specific to an individual application’s tiers.
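
To make the layered model concrete, here is a minimal Python sketch (plain objects only, not NSX API calls; all group names, tags and policy names are hypothetical) showing how dynamic membership based on security tags causes the policies of every matching group to combine on a single workload.

```python
from dataclasses import dataclass, field


@dataclass
class VM:
    name: str
    security_tags: set = field(default_factory=set)


@dataclass
class SecurityGroup:
    name: str
    required_tag: str                      # dynamic criterion: membership via security tag
    policies: list = field(default_factory=list)

    def contains(self, vm: VM) -> bool:
        return self.required_tag in vm.security_tags


# Hypothetical layered groups: environment, tier and application.
groups = [
    SecurityGroup("SG-Production", "ENV.Production", ["Prod-Baseline-Policy"]),
    SecurityGroup("SG-Web-Tier", "TIER.Web", ["Allow-HTTPS-Inbound"]),
    SecurityGroup("SG-App-HR-Portal", "APP.HR-Portal", ["HR-Portal-Policy"]),
]

# On-boarding a new workload is then just a matter of tagging it correctly.
web01 = VM("hr-web-01", {"ENV.Production", "TIER.Web", "APP.HR-Portal"})

# The effective protection is the union of the policies of every group the
# VM is dynamically a member of.
effective = [p for g in groups if g.contains(web01) for p in g.policies]
print(effective)  # ['Prod-Baseline-Policy', 'Allow-HTTPS-Inbound', 'HR-Portal-Policy']
```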

Determining appropriate Policies

Determining the appropriate security groups and firewall policies for the numerous complex applications in an organization can be challenging. Applications, both custom and off-the-shelf, may not be documented very well, making it hard to determine which communication paths (and corresponding firewall rules) need to be opened for the application to function, while ensuring all other ports are closed to adhere to a least-privilege strategy in a micro-segmentation architecture.

Gathering information about the application and its connectivity requirements by investigating its documentation or working with the application team is one way to perform the necessary application discovery. However, several practices and tools exist to make this application discovery process easier.

One option is to investigate connection logs.  This process consists of creating a catch-all firewall rule with a logging action, applying it to the application being on-boarded, investigating the firewall connection logs to create the granular rules required for that application, and finally adding a default deny rule applied to the application.


Figure 5: Using the Log Insight Field Table for application discovery

 

Figure 6: Using the Log Insight Field Table for application discovery

With the logging action enabled on the Distributed Firewall, vRealize Log Insight can be leveraged for application discovery through connection log investigation.  Scripting against the firewall logs also makes it possible to clean up, de-duplicate and parse the logs, and to automatically generate recommended firewall policies based on the observed connections.
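
As a minimal sketch of that scripting step, the snippet below de-duplicates observed connections and prints one candidate rule per unique protocol, source, destination and destination port. The log line format used here is a simplified assumption for illustration; real Distributed Firewall log entries carry more fields and vary per NSX version.

```python
import re
from collections import Counter

# Simplified, assumed log excerpt; real Distributed Firewall logs differ.
SAMPLE_LOGS = """\
match PASS IN TCP 192.168.10.5/49152->192.168.20.8/1433
match PASS IN TCP 192.168.10.6/50211->192.168.20.8/1433
match PASS IN TCP 10.0.0.15/43022->192.168.10.5/443
match PASS IN TCP 10.0.0.16/43100->192.168.10.5/443
"""

FLOW = re.compile(r"(\w+)\s+([\d.]+)/\d+->([\d.]+)/(\d+)$")

flows = Counter()
for line in SAMPLE_LOGS.splitlines():
    match = FLOW.search(line)
    if match:
        proto, src, dst, dport = match.groups()
        # De-duplicate on (protocol, source, destination, destination port);
        # ephemeral source ports are ignored.
        flows[(proto, src, dst, dport)] += 1

print("Candidate firewall rules (review before publishing):")
for (proto, src, dst, dport), hits in sorted(flows.items()):
    print(f"  allow {proto} {src} -> {dst}:{dport}   (seen {hits}x)")
```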

Another option for determining the appropriate security groups and firewall policies is vRealize Network Insight.  This solution collects IPFIX data from Distributed Virtual Switches in the datacenter and provides network flow assessment and analytics.  The network flow analytics help determine the right security groups and firewall rules to achieve a zero-trust architecture.

Figure 7: vRealize Network Insight Flow Analysis

The vRealize Network Insight micro-segmentation planner organizes virtual machines into logical groups based on compute and network visibility and provides a blueprint for putting security groups and firewall rules in place.  The analysis, modeling and visualization provided by vRNI make the process of operationalizing micro-segmentation with the right security groups and firewall rules very straightforward.

Deploying micro-segmentation

After deciding which design pattern fits the requirements of our environment, we can start with the actual NSX installation.  The installation process is covered in detail in the NSX installation guide. Once the NSX Manager is installed, clusters can be prepared for NSX; as soon as hosts are prepared, a default Distributed Firewall policy is applied to all prepared clusters.

NSX can be deployed in a net-new (greenfield) datacenter or in a brownfield datacenter where applications have previously been deployed. The main difference between deploying micro-segmentation in a greenfield environment versus a brownfield environment is that in a brownfield environment we need to ensure existing application connectivity and availability are not compromised when micro-segmentation policies are put in place.  That is why, upon deployment of NSX, the Distributed Firewall is configured with a default-allow policy. The next step in deploying micro-segmentation is creating a granular firewall policy and applying it to existing applications, or to applications as they are being on-boarded in the case of a greenfield environment.  At the same time, network overlays can be implemented to provide distributed virtual routing, and partner services can be deployed to provide additional security controls.
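
As a minimal sketch of verifying that starting point, the Distributed Firewall configuration can be retrieved through the NSX Manager REST API so the default section and its allow rule are easy to spot before granular policies are published. The endpoint below is the NSX-v 6.x path; treat the path, credentials and XML field names as assumptions to verify against the API guide for your NSX version.

```python
import requests
import xml.etree.ElementTree as ET

# Assumed NSX-v 6.x endpoint for the full DFW configuration; verify the path
# and schema against the NSX API guide for your version.
NSX_MANAGER = "https://nsx-manager.example.local"
URL = f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config"

resp = requests.get(URL, auth=("admin", "********"), verify=False)  # lab only; validate certificates in production
resp.raise_for_status()

# Walk the firewall sections and print each rule's name and action, so the
# default-allow rule at the bottom of the policy stands out.
root = ET.fromstring(resp.text)
for section in root.iter("section"):
    print(f"Section: {section.get('name')}")
    for rule in section.iter("rule"):
        name = rule.findtext("name", default="(unnamed)")
        action = rule.findtext("action", default="?")
        print(f"  {name}: {action}")
```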

Application Lifecycle Management with vRealize Automation and NSX

Traditional ticket-based IT can no longer support the increased agility required by lines of business and the dynamic nature of the cloud.  New applications need to be on-boarded quickly and in an automated, self-service way, freeing up time for innovation rather than manual implementation.  While this self-service model is well understood for the provisioning of workloads, the configuration of the appropriate networking and security has often remained a more manual process.

NSX is fully integrated with vRealize Automation and can be integrated with other Cloud Management Platforms through the NSX RESTful API.  With vRealize Automation, the provisioning of network and security services can be done in lockstep with application on-boarding. Security controls are deployed as part of the automated delivery of an application.  The benefits of automation include:

  • Faster application delivery through a standardized and repeatable process
  • Greater reliability and consistency
  • Reduced Opex by eliminating manual configuration tasks
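
For Cloud Management Platforms other than vRealize Automation, the same on-boarding steps can be scripted against the NSX RESTful API. The sketch below attaches an existing security tag to a newly provisioned VM so that dynamic security group membership, and the firewall policies that come with it, take effect immediately; the endpoint follows the NSX-v 6.x security tag API, and the tag and VM identifiers are placeholders, so verify both against your environment.

```python
import requests

# Assumed NSX-v 6.x security tag API path; verify against the NSX API guide.
NSX_MANAGER = "https://nsx-manager.example.local"
TAG_ID = "securitytag-10"   # hypothetical tag backing, e.g., "ENV.Production"
VM_ID = "vm-4242"           # vCenter managed object ID of the new workload

# Attaching the tag is enough: dynamic security group membership picks the
# VM up and the group's firewall policies apply without further changes.
resp = requests.put(
    f"{NSX_MANAGER}/api/2.0/services/securitytags/tag/{TAG_ID}/vm/{VM_ID}",
    auth=("admin", "********"),
    verify=False,  # lab only; use proper certificate validation in production
)
resp.raise_for_status()
print("Security tag applied:", resp.status_code)
```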

Figure 8: vRealize Automation and NSX

With vRealize Automation and NSX, the administrator can define vRealize Automation application blueprints that specify NSX security policies for each application and application tier.  These security policies include native Distributed Firewall rules as well as partner services such as L7 firewalling or agent-less anti-virus.

Different options exist for automating application delivery and micro-segmentation using vRealize Automation.  One method is to use security groups and policies, representing application tiers, that have been pre-configured in NSX. These pre-created policies should only allow inter-tier communication for specific services (for example, MSSQL between the app and DB tiers).  When creating a vRA blueprint, you can then attach your application’s workloads to their respective tiers.  With this approach, you ensure that only controlled communication between application tiers is allowed when new applications are deployed from the blueprint.

Another option is to use the App Isolation feature inside a vRA multi-machine blueprint.  This is a simple checkbox that, once checked, ensures security groups and policies are automatically created for every instance of the application that gets deployed, completely isolating that application from any other applications or application instances.

Figure 9: vRealize Automation App Isolation checkbox

Finally, when creating a blueprint, we can choose to use on-demand security groups and rules for each application instance.  In this approach, you define security policies in NSX but don’t assign them to any particular security group yet.  When you define a multi-machine blueprint in vRA, you can then attach on-demand security groups to your application tiers and select the relevant security policy.  Every time an application is deployed from this blueprint, unique security groups are created, isolating each application instance from any other instance while at the same time micro-segmenting each instance by means of the pre-configured policy.

Day 2 operations for micro-segmentation

Once we have on-boarded our applications and applied the appropriate networking and security controls, we may need to verify that the correct level of protection has indeed been applied.  The vSphere Web Client provides visibility into all the firewall rules as well as third-party services that have been applied to each workload.

Furthermore, log analytics tools such as vRealize Log Insight with the NSX content pack or the Splunk App for NSX can be used to collect logs on allowed and dropped flows and provide visibility into inter- and intra-application traffic.

Another option for day 2 micro-segmentation operations is vRealize Network Insight.  vRealize Network Insight provides monitoring, tracking and auditing of security group memberships and effective firewall rules, enabling rapid troubleshooting and compliance.  It can generate alerts when inconsistencies occur, ensuring the actual implementation remains compliant with the design.


Figure 10: vRealize Network Insight Events Widget

vRealize Network Insight also provides a timeline feature, which can be used, for example, to investigate the security group membership or the effective firewall policies applied to an application at any point in time. This enables the operations team to quickly identify the cause of issues related to application functionality or compliance, such as an application that is no longer functioning or blocked flows between development and test environments.

Conclusion

Operationalizing NSX starts with determining the appropriate design pattern based on network and security requirements.  These design patterns can leverage the Distributed Firewall to control communication, as well as overlay-based logical switches, virtual routers and partner service insertion.  Determining the appropriate security groups and policies to implement a zero-trust model through micro-segmentation, without impacting application functionality, is crucial.  Several practices and solutions exist to make that process easier.  vRealize Automation and other Cloud Management Platforms can integrate with NSX to automate application delivery, including the appropriate security and networking.