Welcome to part 5 of the Micro-Segmentation Defined – NSX Securing “Anywhere” blog series. Previous topics covered in this series include:

In this post we describe how NSX micro-segmentation enables fundamental changes to security architectures which, in turn, facilitate the identification and containment of breaches:

  • By increasing visibility throughout the SDDC, eliminating all blind spots
  • By making it feasible and simple to migrate to a whitelisting / least privileges / zero-trust security model
  • By sending rich, contextual events to SIEMs and eliminating false positives
  • By providing inherent containment, even for zero-day attacks

Threat analysis is the new trend in the security landscape, and established vendors as well as startups are proposing many tools to complement the current perimeter logging approach.  The attraction of these tools is based on the assumption that by correlating flows from different sources within a perimeter, threat contexts will emerge and compromised systems will be uncovered.  Currently, these systems go unnoticed for long periods of time because the suspicious traffic moves laterally inside the perimeter and never traverses a security device: you can’t protect what you don’t “see”.

While these tools are welcome additions to the security toolkit, what they imply is that the current perimeter approach fails to provide the proper context and visibility to identify and contain successful breaches. Are these tools being leveraged to their full potential by having them sort through ever-increasing amounts of data, or could some basic changes to the security architecture provide them with less, but qualified and context-rich, information so they can do their work more efficiently?

Context: why would my HVAC control application talk to my PoS units?

Security administrators understand the notion of context. They spend a fair amount of time building lists and grouping systems that have common properties: a database or PCI zone, systems that are public facing in a DMZ, users from various groups within a company, etc.  Unfortunately, they almost exclusively leverage physical segmentation and IP ranges to convey that context, which can only represent one dimension at a time.  For example, how do you carve out of your network a VLAN, subnet or network area that would distinguish:

  • all my “Windows IIS servers” plus “the ones used by MS Exchange” from
  • all my “Windows IIS servers” plus “those used in Horizon View” from
  • all my “Windows IIS servers” plus “the ones generated dynamically for developers, which need to be isolated from other users and the production network”?

A VLAN, subnet or some form of endpoint grouping tied to physical networking constructs cannot adequately represent the rich context security administrators would like to attach to applications and systems.  Only a layered software perimeter construct with no dependencies on physical infrastructure can deliver on the compound nature of the context we need to represent.

Via its Service Composer feature, NSX provides a completely logical mechanism to group and arrange VMs and containers.  Membership in a Security Group (“bubble”) can be defined with complex logic leveraging multiple conditional statements (if / then), Boolean logic (and / or / not) and most attributes the VM or container may possess.  Policies attached to the Security Groups generate events based on the ruleset contained in the policies, including the identity of the user as defined in Active Directory.
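To make this concrete, here is a minimal sketch of how such a dynamic Security Group could be created programmatically. It is written in Python against the NSX-v REST API; the endpoint path, XML schema and criteria values are assumptions drawn from my reading of the NSX API guide, so verify them against the version you are running.

```python
# Sketch: create a Security Group whose membership is defined by dynamic
# criteria (VM name contains "IIS" AND security tag contains "exchange")
# rather than by IP addresses or VLANs.
# Assumption: endpoint and XML schema follow the NSX-v API; adjust to match
# your NSX version and authentication setup.
import requests

NSX_MANAGER = "https://nsx-manager.example.local"  # hypothetical hostname

group_xml = """
<securitygroup>
  <name>SG-IIS-Exchange</name>
  <description>Windows IIS servers used by MS Exchange</description>
  <dynamicMemberDefinition>
    <dynamicSet>
      <operator>OR</operator>
      <dynamicCriteria>
        <operator>AND</operator>
        <key>VM.NAME</key>
        <criteria>contains</criteria>
        <value>IIS</value>
      </dynamicCriteria>
      <dynamicCriteria>
        <operator>AND</operator>
        <key>VM.SECURITY_TAG</key>
        <criteria>contains</criteria>
        <value>exchange</value>
      </dynamicCriteria>
    </dynamicSet>
  </dynamicMemberDefinition>
</securitygroup>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/services/securitygroup/bulk/globalroot-0",
    data=group_xml,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab only; validate certificates in production
)
resp.raise_for_status()
print("Created Security Group:", resp.text)  # returns the new group's objectId
```

Because membership is evaluated dynamically, a newly deployed IIS VM tagged for Exchange joins the group, and inherits its policies, without anyone touching a VLAN or an IP range.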

The immediate benefit of defining software perimeters in this fashion is that each event sent to your SIEM system is 100% representative of the full context associated with the VM or container.  For example, an event could represent a blocked flow from an administrator’s session on one of the IIS servers in MS Exchange located in your DMZ Security Group trying to access an internal system in your PCI Security Group.

That is a lot of context for a single event entry, and one most security administrators can only dream of getting with their current security architecture.
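To illustrate how a SIEM pipeline could consume such events, here is a short sketch that parses a distributed firewall packet log line into structured fields. The log format shown approximates NSX dfwpktlogs syslog output and should be treated as an assumption; field order and names vary by version, so adapt the pattern to what your environment actually emits.

```python
# Sketch: turn a distributed-firewall packet log line into a structured
# event for the SIEM. The sample line approximates NSX "dfwpktlogs" output;
# treat the format as an assumption and adapt the regex to your version.
import re

LOG_PATTERN = re.compile(
    r"dfwpktlogs:\s+\d+\s+INET\s+match\s+"
    r"(?P<action>PASS|DROP)\s+"
    r"(?P<rule>\S+)\s+"            # enforcing rule, e.g. domain-c7/1024
    r"(?P<direction>IN|OUT)\s+\d+\s+"
    r"(?P<proto>TCP|UDP|ICMP)\s+"
    r"(?P<src>[\d.]+)/?(?P<sport>\d*)->(?P<dst>[\d.]+)/?(?P<dport>\d*)"
)

def parse_dfw_event(line):
    """Extract action, rule ID, direction and flow tuple from one log line."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

sample = ("2016-08-22T18:30:05Z esx-01a dfwpktlogs: 28544 INET match DROP "
          "domain-c7/1024 IN 60 TCP 192.168.110.10/49172->172.16.10.20/443 S")
print(parse_dfw_event(sample))
# -> {'action': 'DROP', 'rule': 'domain-c7/1024', 'direction': 'IN', ...}
```

Because the enforcing rule ID maps back to a Security Group policy, the SIEM can resolve each event to the full context described above (DMZ Security Group, PCI Security Group, user identity) instead of a bare IP pair.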

Visibility: getting rid of our blind spots

In order to collect more information, threat analysis tools require installing agents on systems, collecting flows from the network switches facing the servers, or deploying specialized probes inside perimeters.  If we look at this more closely, we realize that all these approaches require the collection engine to move closer to the source, i.e. they require the creation of micro-segments capable not only of segregating the traffic sources, as a PVLAN could, but also of inspecting that traffic in order to report contextual events.

By instantiating the NSX Distributed Firewall on every VM or container, and optionally allowing our partners’ solutions to attach themselves at the same point, NSX provides the ultimate micro-segmentation solution, with full visibility into all the traffic originating from or bound for a particular VM or container.


If you consider that, in the context of “visibility”, endpoint solutions, probes and flow collectors are really logging agents, you can see how deploying the NSX Distributed Firewall in your environment gives you full visibility of all traffic in your data center by design, offsetting the expense of deploying and operating a parallel environment just to see the traffic you are currently blind to.
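One practical caveat: that visibility only materializes if the rules are actually set to log. As a rough sketch, and assuming the NSX-v REST endpoint and XML attribute names below are accurate for your version (verify against the API guide), a few lines of Python can audit the Distributed Firewall configuration for silent rules:

```python
# Sketch: audit which Distributed Firewall rules log their hits.
# Assumption: the endpoint and the "logged" rule attribute follow the
# NSX-v API; confirm both in the API guide for your release.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "https://nsx-manager.example.local"  # hypothetical hostname

resp = requests.get(
    f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config",
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab only
)
resp.raise_for_status()

root = ET.fromstring(resp.content)
for section in root.iter("section"):
    for rule in section.iter("rule"):
        if rule.get("logged") != "true":
            print(f"Rule {rule.get('id')} in section "
                  f"'{section.get('name')}' is not logging")
```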

Containment: how to keep the bad guys from moving around too easily

Threat analysis tools are there to alert you if something is going wrong.  Once they figure out the nature of the attack, some tools will go a step further and update security devices with new rules to contain the threat.  But by that time, more than one system may already be compromised and you may already be dealing with data exfiltration.

So it is not sufficient to identify the attack; we also need to contain it as much as possible while the analysis is ongoing.  This is where implementing a whitelisting / least privileges model becomes a critical element of the architecture.  With whitelisting / least privileges, only the allowed communications between known systems are permitted and all other combinations are denied.  Lateral movement between unrelated systems is impossible by default, making the progression of an attack throughout the data center much harder to achieve.

However, this requires you to understand how your applications work.  If you ask security administrators whether they know all the applications running in their data center, which components of an application should be allowed to talk to other applications, who should be accessing the application, and so on, nine times out of ten you will get an embarrassed “no” for an answer.  That is the reality.  Applications get deployed, and security administrators block known “bad” traffic and open up a few ports, then more and more as the application evolves, hoping that everything going through is “good” traffic.

A whitelisting / least privileges model starts with fingerprinting the application to know exactly how and with what it communicates.  Then, with this list in hand, you build a ruleset that allows only these communications between the specific elements and denies everything else.  By doing this, we know exactly what should be allowed and monitored, and a simple “deny” becomes a catch-all statement for every other possibility that should never happen.
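As a minimal illustration of that last step, the sketch below (plain Python, with made-up Security Group names, ports and flows) turns a set of fingerprinted flows into an ordered allow ruleset closed by a catch-all deny:

```python
# Sketch: derive a whitelist ruleset from an application fingerprint.
# The observed flows are illustrative; in practice they would come from a
# discovery tool such as vRealize Network Insight.
from collections import namedtuple

Flow = namedtuple("Flow", "src_group dst_group port proto")

# Fingerprinted (observed) application flows
observed = [
    Flow("SG-Web", "SG-App", 8443, "TCP"),
    Flow("SG-App", "SG-DB", 3306, "TCP"),
    Flow("SG-External", "SG-Web", 443, "TCP"),
    Flow("SG-External", "SG-Web", 443, "TCP"),  # duplicates collapse below
]

# Deduplicate the observations into explicit allow rules...
ruleset = [
    {"action": "ALLOW", "src": f.src_group, "dst": f.dst_group,
     "service": f"{f.proto}/{f.port}", "log": True}
    for f in sorted(set(observed))
]
# ...and close with the catch-all deny that should never legitimately fire.
ruleset.append({"action": "DENY", "src": "any", "dst": "any",
                "service": "any", "log": True})

for rule in ruleset:
    print(rule)
```

Every hit on the final deny rule is, by construction, traffic the application was never fingerprinted to produce, which is exactly what makes it worth investigating.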

While we all agree on the benefits of transitioning to a whitelisting model, it has been difficult to implement given the segmentation of our physical networks and the way we distribute our applications across different zones.  This leads to commingling internal and external traffic rules for numerous applications across multiple security devices, creating the proper conditions for security holes.

vRealize Network Insight changes this.


vRealize Network Insight can take VMs, objects, groupings and their physical elements, easily fingerprint an application, and determine its internal and external flows, client connections, and so on.

Once the information is gathered, we can create the appropriate Security Group to provide the exact context we need and attach a whitelisting / least privileges policy to it, ensuring that only authorized traffic is flowing and being monitored, as opposed to monitoring all the traffic and trying to isolate the flows for a particular application.

False positives: How about generating needles rather than hay?

A blacklisting approach, as we are mostly implementing today, generates a great number of “false positives”, i.e. events sent to the SIEM because they “might” be an indication that something bad is happening.  Unless you have a dedicated team sorting through these events, “false positives” will over time be ignored or rarely investigated: security administrators get overwhelmed by their sheer number and quickly realize that the vast majority end up being wild-goose chases.  Our current approach is training us to dismiss information because it has no relevance.

Thus the need for analytic tools to sort through the haystack in order to find the proverbial needle.  However, as we have seen in the previous sections, the current proposition to remedy the situation and create more visibility and context leads us down the path of generating more “hay” for the stack.

What if instead of sending more “hay” to the stack we could send “needles”?  This is probably the most interesting side effect of implementing a whitelisting / least privilege approach enabled by NSX.  Let’s walk through an example, using Horizon View’s infrastructure, to illustrate this.

As we can see from the picture below, we have grouped various elements of the View infrastructure into different Security Groups in order to provide context.  Then a specific security policy has been applied to the Security Server Group which (see the ruleset sketch after the list):

  • Allows external clients to connect to the Security Servers
  • Allows the Security Servers to connect to the Connection Server Group
  • Allows the Security Servers to connect to the vDesktop Group
  • Denies everything else
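Expressed as data, the same policy looks like the sketch below. The group names are illustrative stand-ins for the Security Groups in the picture, not a specific NSX API payload, and the small evaluation function shows the first-match semantics: any flow that does not match an allow rule falls through to the logged deny.

```python
# Sketch: the Security Server policy above as an ordered ruleset.
# Group names are hypothetical labels, not actual NSX object identifiers.
security_server_policy = [
    {"action": "ALLOW", "src": "SG-External-Clients", "dst": "SG-Security-Servers"},
    {"action": "ALLOW", "src": "SG-Security-Servers", "dst": "SG-Connection-Servers"},
    {"action": "ALLOW", "src": "SG-Security-Servers", "dst": "SG-vDesktops"},
    # Default deny: every other flow is blocked AND logged as a fully
    # qualified event -- a "needle", not more "hay".
    {"action": "DENY", "src": "any", "dst": "any", "log": True},
]

def evaluate(src, dst):
    """First-match evaluation, the way a firewall processes the ruleset."""
    for rule in security_server_policy:
        if rule["src"] in (src, "any") and rule["dst"] in (dst, "any"):
            return rule["action"]
    return "DENY"

# A breached Security Server probing an unrelated system hits the deny rule:
print(evaluate("SG-Security-Servers", "SG-PCI"))  # -> DENY (and logged)
```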

Remember that this group policy is enforced at each Security Server using the Distributed Firewall feature of NSX, creating a micro-segment for each of them and providing full visibility and context for every single flow originating from or bound for any of the Security Servers.

Next, let’s assume one of the Security Servers gets breached.  The attacker or malware will eventually try to expand its footprint, only to realize that the IP configuration of the interface and the open ports on the system provide little to no information to rely on for the next move, since the security policy does not depend on physical infrastructure or perimeters.

The “deny all” statement covers all the unwanted scenarios, including those that do not yet exist such as Zero Day attacks, creating a much tighter trap to contain the breach.

Any attempt to do reconnaissance via ping, a port scan, etc., or to jump to a system other than one in the Connection Server or vDesktop groups on the appropriate ports and IP addresses, will trigger a fully qualified and 100% valid event for the SIEM while providing no information to the attacker.


This whitelisting / least privileges approach enabled by the NSX Distributed Firewall creates “needles” rather than more “hay”, and security administrators can investigate them with the assurance of not wasting effort each time a “deny” event is triggered.

Furthermore, monitoring tools, threat analysis tools, etc. from your favorite vendor can now focus on the “allowed” traffic, looking for deviations and anomalies, without having to sort through useless entries generated for the sole purpose of exposing the traffic you could not see because of the way our physical perimeters were established, leaving blind spots for attackers to abuse.

NSX and the Kobayashi Maru

“I changed the conditions of the test.  I got a commendation for original thinking. I don’t like to lose.” – James T. Kirk

One of the reasons we, as security professionals, are struggling to keep our systems and information secure is the inherent shortcomings and compromises imposed by physical networks and security appliances, which leave wide internal perimeters in our data centers open to abuse should a breach ever get there.

In fact, we are finding that our traditional perimeter design lacks the required visibility, context and containment to prevent company-wide breaches, thus the current focus on probes, agents, etc. feeding threat analysis tools to compensate for the weaknesses in the architecture.

While these tools do bring value and should be part of our toolkit, there is now an opportunity to leverage the unique properties of VMware NSX to fix the inherent flaws in our security architectures and provide these advanced tools with only validated information, making them that much more efficient at detecting and reacting to attackers and malware.

Furthermore, by applying vRealize Network Insight to NSX micro-segmentation, we can now fingerprint any application and make it feasible to move from a blacklisting approach to a whitelisting / least privileges model.  Together, they provide full visibility and context for all the flows in the SDDC, eliminating “false positives” and providing only qualified alerts for security administrators to investigate, while making it harder for an attack to escape the breached system and expand inside the data center.

Security administrators are not in the business of losing, and VMware NSX micro-segmentation changes the “conditions of the test” to their advantage.