While external perimeter protection requirements will most likely continue to demand hardware acceleration and support for the foreseeable future, the distributed nature of the services inside the data center calls for a totally different set of specifications.

Some vendors have recently claimed they can achieve micro-segmentation at data center scale while maintaining a hardware architecture. As I described in my recent article in Network Computing, this claim does not hold up once you factor in speed and capacity.

To quickly recap the main points describing the model in the article:

  • Our objective is for all security perimeters to have a diameter of one—i.e., deploying one security function per service or VM in the data center—so that we can apply policies granularly and prevent successful attacks from propagating laterally within a perimeter. A larger diameter implies we have chosen to ignore all inter-service communications within that perimeter.
  • This objective is impossible to achieve with our traditional hardware-based perimeters: The service densities and the network speeds found in current data center designs overrun any hardware-based inline inspection models.
  • The solution resides in “splitting and smearing” security functions across thousands of servers. This requires an operational model capable of managing large-scale distributed functions AND of presenting the security administrator with a decoupled, consolidated view of the security application (i.e., the administrator deals with the policies while the infrastructure manages where those policies are enforced).
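To make the first point concrete, here is a short sketch (my own arithmetic, not taken from the article) of how many east-west flows a perimeter silently trusts as its diameter grows:

```python
# Counting the VM-to-VM flows a security perimeter leaves uninspected.
# With a perimeter "diameter" of d services/VMs, every pair inside the
# perimeter can talk without ever crossing a security function.

def uninspected_pairs(diameter: int) -> int:
    """Intra-perimeter VM pairs whose traffic is never inspected."""
    return diameter * (diameter - 1) // 2

# A diameter of one (one security function per VM) leaves nothing uninspected:
print(uninspected_pairs(1))    # 0

# A 200-VM segment silently trusts 19,900 east-west pairs:
print(uninspected_pairs(200))  # 19900
```

The count grows quadratically, which is why any perimeter wider than a single workload concedes a rapidly expanding attack surface.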

Not all splits provide equal smearing

Now, let’s see how these vendors’ propositions fall short.

First, most vendors now implement a form of SDN model, moving the security function out of the core and locating it at the edge of the data center network, typically off a “leaf” switch. They also claim to extend segmentation into the virtual space, either by configuring VMware’s Virtual Distributed Switch (VDS) or by replacing it with their own virtual switch.

We cannot speculate on the capabilities or the implementation of third-party vSwitches at this point, so we will focus on what is possible with the VDS. Similarly, we will forgo the discussion of the operational framework each of these vendors can provide, keeping in mind that it is the foundational piece of any “split and smear” approach. Putting the pieces together, we are left with the following physical and logical topology:

[Image: BGermain-Graphic1 — physical and logical topology]

In order for the physical firewall instances to provide their service to the VMs in the virtual infrastructure, traffic needs to hairpin through the fabric to the appropriate firewall context.  In an East-West traffic pattern where VMs talk to each other, this means multiple security contexts need to be created on the firewalls.

[Image: BGermain-Graphic2 — East-West traffic hairpinning through firewall contexts]

As mentioned in my article, Mr. Stamos’ original requirement was to inspect all traffic in order to provide proper visibility and segmentation; anything less means we are consciously leaving an attack surface available for abuse. Considering the 12 hosts on the left of our drawing, this means supporting 120 Gbps of capacity for that rack alone, converging on the firewall farm sitting at the edge.
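The per-rack figure is simple arithmetic; the sketch below assumes a single 10 Gbps uplink per host (an assumption on my part, as the drawing does not specify NIC speeds):

```python
# Back-of-the-envelope check of the per-rack capacity quoted above.
# Assumption: each of the 12 hosts has one 10 Gbps uplink.
hosts_per_rack = 12
gbps_per_host = 10

rack_capacity_gbps = hosts_per_rack * gbps_per_host
print(rack_capacity_gbps)  # 120
```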

If we go back to Mr. Stamos’ example, how many firewalls would I need to inspect all the traffic for 1,000 servers attached to this data center fabric?

Answer: as many as we had in our traditional approach (i.e. 250 120 Gbps next-gen firewalls). To make things worse, this architecture oversubscribes the leaf switches by creating points of contention, and it taxes the core by transiting twice as much traffic through it. We are still dealing with a centralized, inline architecture in disguise, with all the limitations Mr. Stamos was referring to.
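The “twice as much traffic” claim can be sketched as follows (illustrative numbers of my own, reusing the 120 Gbps rack from the example): with the firewall farm at the edge, every east-west flow transits the core twice, once toward the firewall and once back toward the destination rack.

```python
# Sketch of the hairpin tax on the core. When inspection sits at the
# edge of the fabric, east-west traffic crosses the core twice instead
# of once, doubling the transit load for the same application traffic.

def core_transit_gbps(east_west_gbps: float, hairpin: bool) -> float:
    """Core bandwidth consumed by a given volume of east-west traffic."""
    return east_west_gbps * (2.0 if hairpin else 1.0)

rack_gbps = 120.0  # the 12-host rack from the example
print(core_transit_gbps(rack_gbps, hairpin=False))  # 120.0, direct leaf-to-leaf
print(core_transit_gbps(rack_gbps, hairpin=True))   # 240.0, via the edge firewalls
```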

At this point, not only is the hardware approach not a distributed service, it also still cannot provide a perimeter in the virtual space more granular than a subnet and a VLAN. Micro-segmentation cannot be achieved in the virtual infrastructure with hardware firewalls.

This is where the vSwitch comes into the discussion. Given a plugin, a script, or API calls, how could these vendors leverage the VDS to micro-segment two VMs on the same subnet? The only constructs the VDS offers are Access Control Lists (ACLs) and private VLANs. Neither inspects traffic, and both have static definitions: you can prevent two VMs from talking to each other, but you cannot look at the traffic itself, as that requires the help of the firewalls sitting on the other side of the network.
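To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not VDS code) of why a static ACL is not traffic inspection: it matches headers and returns allow/deny, and never examines what flows over an allowed port.

```python
# A static ACL reduced to its essence: first matching (src, dst, port)
# rule wins, implicit deny otherwise. The payload is never consulted.

def acl_decision(rules, packet):
    """Static ACL lookup on the packet header; no payload access."""
    for src, dst, port, action in rules:
        if (src, dst, port) == (packet["src"], packet["dst"], packet["port"]):
            return action
    return "deny"  # implicit deny

rules = [("vm-a", "vm-b", 443, "allow")]
pkt = {"src": "vm-a", "dst": "vm-b", "port": 443, "payload": b"\x16\x03\x01"}

# The ACL can only say allow or deny based on the header:
print(acl_decision(rules, pkt))  # allow
# It never reads pkt["payload"], so abuse of an allowed port goes
# unseen; detecting that still requires a firewall in the path.
```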

It is interesting to note that these vendors acknowledge the need to sit at the vSwitch layer to provide micro-segmentation, yet still rely on distant, external firewall appliances to enforce security. The end result is a two-tier solution that still depends on a centralized, inline firewall architecture that cannot scale to deliver security services.

 “It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it.” – RFC 1925

Staying in the race with “split and smear”

This is the advantage VMware NSX provides with its Distributed Firewall service.  Furthermore, our partners can leverage the same operational model to distribute their next-generation services in the virtual infrastructure or elsewhere in the data center.

One often forgotten benefit NSX brings to the table is this: you are presented with a single data center firewall, and the infrastructure itself handles the complexity of pushing the appropriate rules to a hypervisor, a partner’s virtual instance, or a hardware platform, and of adapting to changes when VMs move around, whether within one data center or across several.
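The decoupling described above can be sketched as follows; every name here is a hypothetical illustration of the model, not the NSX API. The administrator owns one logical policy; the infrastructure owns the mapping from VMs to enforcement points, and re-derives it when a VM moves.

```python
# Decoupled policy model: the admin writes one data-center-wide policy;
# the infrastructure decides which host enforces it for each VM and
# re-places the rules when placement changes (e.g. after a vMotion).

policy = {"web->db:3306": "allow"}  # the one thing the admin manages

def rules_per_host(policy, placement):
    """Infrastructure view: install the policy on every host running a VM."""
    per_host = {}
    for vm, host in placement.items():
        per_host.setdefault(host, dict(policy))  # same policy, local enforcement
    return per_host

placement = {"web-01": "host-a", "db-01": "host-b"}
print(rules_per_host(policy, placement))

# After a VM move the admin changes nothing; the mapping is re-derived
# and the rules follow db-01 to its new host.
placement["db-01"] = "host-c"
print(rules_per_host(policy, placement))
```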

If we are serious about inspecting all the traffic in our data center and if we want to leave as small an attack surface as possible for “the bad guys”, we have no choice but to go down the “split and smear” path.  Everything else falls short and wastes time, resources and money.

This is the “nut” the VMware team is cracking with NSX.