In Part 1, I covered traditional segmentation options. Here, I introduce VMware NSX Distributed Firewall for micro-segmentation, showing step-by-step how it can be deployed in an existing vSphere environment.

Now, I have always wanted a distributed firewall. Never understood why I had to allow any more access to my servers than was absolutely necessary. Why have we accepted just network segmentation for so long? I want to narrow down allowed ports and protocols as close to the source/destination as I can.

Which brings me to my new favorite tool – VMware NSX Distributed Firewall.

VMware NSX Distributed Firewall offers control at the vNIC level, which is as close to a guest VM operating system as you can get without being in the operating system. It doesn’t rely on architecting the network to make packets wash all over the enforcement point like a traditional network firewall. Access to packets at the vNIC level in a virtualized environment gives a high degree of control. If the performance impact is minimal and it can scale with virtualized environments, it is an excellent tool to clamp down on lateral spread.

A firewall with excellent performance characteristics, more control than you have now, and vCenter-integrated management – one that helps police all those internal flows in your intranet or data center? Who wouldn’t want to check this out?

I get a lot of questions around how to get started with VMware NSX Distributed Firewall. The rest of this post walks through the steps to add VMware NSX to an existing vSphere environment, and verify operation of Distributed Firewall.

At the most basic, one needs:

  • an operational vSphere/vCenter 5.5+ installation with the ability to create vSphere Distributed Switches (VDS)
  • at least one cluster defined in vCenter
  • at least one VM in the cluster attached to a VDS portgroup
  • forward and reverse DNS resolution for hostnames and domains
  • the VMware NSX software

This post serves to augment, not replace, the official NSX 6.2 for vSphere documentation set. Please refer to the NSX Installation Guide and NSX Administration Guide when in doubt!

Step 1: Install the NSX Manager Software

Install NSX Manager as an OVF template at the Cluster level in vCenter. Before installing, have the IP address and password ready, as these will be populated into the NSX Manager VM.

Once installed, there is a new VM in the cluster – the NSX Manager VM.

But the VM is not yet associated with the vCenter installation – at this point it is just like any other VM.
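For repeatable builds, the same OVF deployment can be scripted with ovftool instead of the vSphere Web Client wizard. The sketch below is illustrative only: the `vsm_*` property names are what NSX Manager OVAs have commonly used, but they vary by NSX version, so probe your OVA first (running `ovftool` against the OVA with no deployment target prints its properties). All hostnames, addresses, passwords, and inventory paths here are placeholders.

```shell
# Probe the OVA first to confirm the exact property names for your NSX build:
#   ovftool VMware-NSX-Manager-6.2.0.ova
ovftool --acceptAllEulas --allowExtraConfig \
  --name=nsx-manager \
  --datastore=datastore1 \
  --network="Management Network" \
  --diskMode=thin \
  --prop:vsm_hostname=nsxmgr.example.com \
  --prop:vsm_ip_0=192.0.2.10 \
  --prop:vsm_netmask_0=255.255.255.0 \
  --prop:vsm_gateway_0=192.0.2.1 \
  --prop:vsm_dns1_0=192.0.2.53 \
  --prop:vsm_cli_passwd_0='ChangeMe!' \
  --prop:vsm_cli_en_passwd_0='ChangeMe!' \
  VMware-NSX-Manager-6.2.0.ova \
  'vi://administrator@vsphere.local@vcenter.example.com/DC1/host/tempCluster'
```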

Step 2: Register the NSX Manager with vCenter

Wait a few minutes after the VM powers up, then log in and select View Summary.

1_nsx_home_summary

Wait until the VMware NSX Management Service has a state of Running.

jstarr_2_nsx_status

Once it is running, switch to the Manage vCenter Registrations screen by selecting the Home icon in the upper left, then selecting Manage vCenter Registration.

Select the Edit button to the right of vCenter Server (1 in the graphic below), and enter in the vCenter hostname and credentials. If all is well, a green light for Status Connected (2 in the graphic below) will appear.
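The same registration can also be done through the NSX Manager REST API, which is handy for scripted builds. A sketch, assuming the NSX 6.2 `vcconfig` endpoint and placeholder hostnames and credentials:

```shell
# PUT the vCenter connection details to NSX Manager.
# Hosts and credentials below are placeholders.
curl -k -u admin:default \
  -H 'Content-Type: application/xml' \
  -X PUT https://nsxmgr.example.com/api/2.0/services/vcconfig \
  -d '<vcInfo>
        <ipAddress>vcenter.example.com</ipAddress>
        <userName>administrator@vsphere.local</userName>
        <password>VMware1!</password>
        <assignRoleToUser>true</assignRoleToUser>
      </vcInfo>'
```

A subsequent GET of the same URI should report a connected status, matching the green light in the UI.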

jstarr_3_nsx_register

Step 3: Back to vCenter – Licensing

Only after VMware NSX has registered with vCenter will the VMware NSX license be recognized.

In vCenter, navigate to Home >> Administration >> Licensing >> Licenses, then use the Licenses tab to add the license.

In the same location, on the Assets tab >> Solutions, select NSX for vSphere, then All Actions >> Assign License.

jstarr_4_vc_license

Step 4:  Install the VIBs

Log out, then back in to vCenter. On return, you’ll see a new icon in Inventories – Networking & Security.

jstarr_5_vc_home

Select Networking & Security >> Installation, then the Host Preparation tab. All clusters are shown, along with the status of the VMware NSX installation on each. In this case, the first cluster (tempCluster) does not have VMware NSX installed yet.

jstarr_6_vc_nsx_install

Just click Install, give it a few minutes, and the status should change to Installing and In Progress, as in the screenshot below.

jstarr_7_vc_nsx_installing

Then it moves on to a green checkmark, with the NSX version shown and the Cluster marked Enabled.

jstarr_8_vc_nsx_installed

Step 5: Is it actually doing anything?

Unlike traditional firewalls, VMware NSX Distributed Firewall installs with two default allow rules – one for Layer 3 and another for Layer 2. The difference between the two is whether policy is written explicitly for IP addresses or MAC addresses. Rules in the General (Layer 3) rulebase will ultimately be converted to IP addresses for the kernel to process, whereas rules in the Ethernet (Layer 2) rulebase will ultimately be converted to MAC addresses in the kernel. Always use the General rulebase unless it’s known for a fact that the source and destination will always be in the same Layer 2 network, even after a vMotion. Ethernet rules make more sense for non-routable protocols on the network.

It just doesn’t make sense to take a running VM with active network connections and immediately cut it off from the world just because a firewall was added to the cluster. Once appropriate policy for the VMs has been determined, switch the last General (Layer 3) rule to default deny.

Now that Distributed Firewall is running – what’s really going on?

The firewall should now be active on our hosts. To run firewall CLI commands, there are two options: SSH to an ESXi host, or SSH to the NSX Manager and use the show dfw commands. The latter is a new feature in VMware NSX 6.2 – one-stop shopping with the CLI! In this post, I use the ESXi version of the commands, but stay tuned for my next post for the other option.
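For the curious, the central CLI commands follow a consistent pattern: walk from cluster to host, then ask for DFW data scoped to that host. A sketch from an SSH session on the NSX Manager – the cluster and host IDs below are placeholders you would read from the earlier commands’ output:

```shell
# From an SSH session on the NSX Manager (central CLI, NSX 6.2+):
show cluster all                          # list clusters and their domain IDs
show cluster domain-c7                    # list hosts in one cluster (ID is a placeholder)
show host host-10                         # list VMs and vNICs on one host
show dfw host host-10 summarize-dvfilter  # same data as the ESXi-local command below
```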

SSHing into the target ESXi host directly and running the summarize-dvfilter command shows the following:

jstarr_9_esxi1_sdv

The first two entries represent the two NICs on the ESXi host, which are governed by the ESXi firewall, not Distributed Firewall. There is one guest VM on this host – the last entry. The filter name associated with the guest VM is nic-1018940-eth0-vmware-sfw.2, and notice that Distributed Firewall has a policy of failClosed.

To look at this particular filter, use the vsipioctl command with the filter name for the VM in question.
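The command shape is simple: pass the filter name reported by summarize-dvfilter via -f. The filter name below is the one from this lab; yours will differ. The two related subcommands in the comments are ones I find useful alongside it:

```shell
# On the ESXi host: dump the DFW rules enforced on one vNIC filter.
vsipioctl getrules -f nic-1018940-eth0-vmware-sfw.2

# Related subcommands worth knowing:
#   vsipioctl getfilters            # list all filters present on this host
#   vsipioctl getaddrsets -f <f>    # resolved IP address sets for a filter
```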

jstarr_10_esxi1_rules

In the graphic above, the filter name for the VM is surrounded by a red box, the General (Layer 3) rulebase is enclosed in one yellow box, and the Ethernet (Layer 2) rulebase in another.

These rulebases correspond to the default rules, which can be viewed in vCenter by navigating to Home >> Networking & Security >> Firewall, on the Configuration tab.

The General rulebase, Layer 3:

jstarr_11_vc_l3rules

The Ethernet rulebase, Layer 2:

jstarr_12_vc_l2rules

Notice also the Applied To column in each rulebase, currently set to apply cluster-wide – to the distributed firewall on every ESXi host within the Cluster.

Curious about active flows?

jstarr_13_esxi_cli

Hm – there’s at least one active SSH session to this host, but no flows are showing up! This is because flow monitoring is disabled by default.

To enable flow monitoring, in vCenter navigate to Home >> Networking & Security >> Tools >> Flow Monitoring, then the Configuration tab, and select the Enable button.

jstarr_14_vc_ns_flow

The status is clearly marked Enabled once flow collection has commenced.

jstarr_15_vc_ns_flow2

Running the getflows command on the ESXi host CLI again shows active FTP and SSH sessions, in addition to ARP requests.

jstarr_16_esxi_cli
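For reference, getflows follows the same vsipioctl pattern as getrules, keyed by the vNIC filter name (again, the filter name below is from this lab):

```shell
# On the ESXi host: show the live flow (connection) table for one vNIC filter.
# This returns nothing until Flow Monitoring has been enabled in vCenter.
vsipioctl getflows -f nic-1018940-eth0-vmware-sfw.2
```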

The 0800 refers to the EtherType for IPv4. Policy can be written for a number of different EtherTypes, which is a little different from a typical firewall, which is very IP-centric. Since we’re dealing with any traffic a host might send out on the wire, there are a number of options available that just don’t make sense on a boundary firewall.

IN or OUT refers to which host originated the connection – OUT meaning the VM in question initiated the connection.

The active flows view lets you peek at the state, or connection, table for each vNIC on the system.

To get a similar view from vCenter, navigate back to Home >> Networking & Security >> Tools >> Flow Monitoring, then the Live Flow tab.

Select Browse…

jstarr_17_vc_fm_brow

Select the VM and vNIC to monitor…

jstarr_18_vc_fm_select

Select OK for the VM and vNIC, select Start … and the current flows appear. Enough time passed between when the CLI screenshot was snapped and the vCenter version that the flows aren’t an exact match, with ARP entries expiring and DNS entries coming and going.

jstarr_19

Want to try your own hand with Distributed Firewall? Head over to the VMware Hands On Labs and check out HOL-SDC-1603: VMware NSX Introduction, released just last month. They’ve already installed NSX and provide a great environment to get you familiar with operating and managing NSX.

Hopefully this post helps bridge the gap between the magic needed to get a functioning vCenter Cluster up and running with NSX, and on your way with the above HOL!

Software Used for this post: vCenter/vSphere 6.0U1; VMware NSX 6.2