
Monthly Archives: November 2015

Horizon View 6.2 and Blackscreens

By Jeremy Wheeler

With the release of Horizon View 6.2 and vSphere 6.0 Update 1a come new features – but also possible new issues. If you have an environment running Horizon 6.2 on anything below vSphere 6.0 Update 1, you might see issues with your VDI desktops. VMware introduced a new video driver (version 6.23) in View 6.2 that greatly improves speed and quality, but to utilize it fully you need to be on the latest vSphere bits. Customers who have not upgraded to the latest bits have reported VDI desktops black-screening and disconnecting. One fix for affected images is to upgrade/replace the video driver inside the Guest OS of the Gold Image.

To uninstall the old video driver inside your Gold Image Guest OS follow these steps:

  1. Uninstall the View Agent
  2. Delete Video Drivers from Windows Device Manager
    • Expand Device Manager and Display Adapters
    • Right-click on the VMware SVGA 3D driver and select Uninstall
      JWheeler Uninstall
    • Select the checkbox ‘Delete the driver software for this device.’
      JWheeler Confirm Device Uninstall
  3. Reboot and let Windows rescan
  4. Verify that Windows is using its bare-bones SVGA driver (if not, delete the driver again)
  5. Install View Agent 6.2

Note: Do NOT update VMware Tools, or you will have to repeat this sequence unless you also upgrade the View Agent.

Optional Steps:

If you want to update the video driver without re-installing the View Agent, follow these steps:

  1. Launch View Agent 6.2 installer MSI (only launch the installer, do not proceed through the wizard!)
  2. Change to the %temp% folder and sort the contents by date/time
  3. Look for the most recent long folder name, for example:
    JWheeler Temp File Folder
  4. Change into the directory and look for the file ‘VmVideo.cab’
    JWheeler VmVideo
  5. Copy the ‘VmVideo.cab’ file to a temp folder (e.g., C:\Temp)
  6. Extract all files from the VmVideo.cab file. You should see something like this:
    JWheeler Local Temp File
  7. You can execute the following type of syntax for extraction:
    – extract /e /a /l <destination> <drive>:\<cabinetname>
    Reference Microsoft KB 132913 for additional information.
  8. You need to rename each file: remove the ‘_’ prefix and anything after the file extension (a scripted version of steps 6–8 is sketched after this list). Example:
    JWheeler Local Disk Temp Folder 2
  9. Install View Agent 6.2 video drivers:
    1. Once rebooted, open Device Manager and expand ‘Display Adapters’
    2. Right-click on the ‘Microsoft Basic Display Adapter’ and click ‘Update Driver Software’
    3. Select ‘Browse my computer for driver software’
    4. Select ‘Browse’ and point to the temp folder where you expanded and renamed all the View 6.2 drivers
    5. Select ‘Next’ and complete the video driver installation.
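
If you prefer to script steps 6 through 8, the following minimal Python sketch shows one way to do it. The paths and the exact shape of the extracted file names are assumptions for illustration only; the extract syntax matches the command referenced above.

# Hypothetical helper for steps 6-8: extract VmVideo.cab and strip the '_'
# prefix plus the trailing suffix after the real file extension.
import os
import re
import subprocess

CAB = r"C:\Temp\VmVideo.cab"       # copied here in step 5
DEST = r"C:\Temp\VmVideo"          # where the renamed drivers will live

os.makedirs(DEST, exist_ok=True)
# Same syntax as the extract command above (Microsoft KB 132913).
subprocess.run(["extract", "/e", "/a", "/l", DEST, CAB], check=True)

for name in os.listdir(DEST):
    new = name.lstrip("_")                                  # drop the '_' prefix
    new = re.sub(r"(\.(inf|cat|sys|dll)).*$", r"\1", new,   # drop anything after
                 flags=re.IGNORECASE)                       # the file extension
    if new != name:
        os.rename(os.path.join(DEST, name), os.path.join(DEST, new))

Point the Device Manager driver update wizard (step 9) at the folder containing the renamed files.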

After re-installing the View Agent and/or replacing the video drivers, you will need to do the following:

  1. Power-down the Gold Image (execute any power-down scripts or tasks as you normally do)
  2. Snapshot the VM
  3. Modify the View pool to point to the new snapshot
  4. Execute a recompose

Special thanks to Matt Mabis (@VDI_Tech_Guy) for discovering this fix.


Jeremy Wheeler is an experienced senior consultant and architect for VMware’s Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight. Jeremy has over 18 years of experience in the IT industry, a passion for technology, and thrives on educating customers. He has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V, and 16 years of experience in computer programming in languages ranging from basic scripting to C, C++, Perl, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware’s Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

Configuring NSX-v Load Balancer for use with vSphere Platform Services Controller (PSC) 6.0

By Romain Decker

VMware introduced a new component with vSphere 6, the Platform Services Controller (PSC). Coupled with vCenter, the PSC provides several core services, such as Certificate Authority, License service and Single Sign-On (SSO).

Multiple external PSCs can be deployed to serve one or more products, such as vCenter Server, Site Recovery Manager or vRealize Automation. When deploying the Platform Services Controller for multiple services, availability of the Platform Services Controller must be considered. In some cases, having more than one PSC deployed in a highly available architecture is recommended. When configured in high availability (HA) mode, the PSC instances replicate state information between each other, and the external products (vCenter Server, for example) interact with the PSCs through a load balancer.

This post covers the configuration of an HA PSC deployment using the NSX-v 6.2 load balancing feature.

Due to the relationship between vCenter Server and NSX Manager, two different scenarios emerge:

  • Scenario A where both PSC nodes are deployed from an existing management vCenter. In this situation, the management vCenter is coupled with NSX which will configure the Edge load balancer. There are no dependencies between the vCenter Server(s) that will use the PSC in HA mode and NSX itself.
  • Scenario B where there is no existing vCenter infrastructure (and thus no existing NSX deployment) when the first PSC is deployed. This is a classic “chicken and egg” situation, as the NSX Manager that is actually responsible for load balancing the PSC in HA mode is also connected to the vCenter Server that uses the PSC virtual IP.

While scenario A is straightforward, you need to respect a specific order for scenario B to prevent any loss of connection to the Web client during the procedure. The solution is to deploy a temporary PSC in a temporary SSO site to do the load balancer configuration, and to repoint the vCenter Server to the PSC virtual IP at the end.

Please note that scenario B is only supported with vSphere 6.0, as repointing a vCenter between sites in an SSO domain is no longer supported in vSphere 6.5 (KB 2131191).

Both scenario steps are summarized in the workflow below.

RDecker PSC Map

Environment

NSX Edge supports two deployment modes: one-arm mode and inline mode (also referred to as transparent mode). While inline mode is also possible, the NSX load balancer will be deployed in one-arm mode in our situation, as this model is more flexible and because we don’t require full visibility into the original client IP address.

Description of the environment:

  • Software versions: VMware vCenter Server 6.0 U1 Appliance, ESXi 6.0 U1, NSX-v 6.2
  • NSX Edge Services Gateway in one-arm mode
  • Active/Passive configuration
  • VLAN-backed portgroup (distributed portgroup on DVS)
  • General PSC/vCenter and NSX prerequisites validated (NTP, DNS, resources, etc.)

To offer SSO in HA mode, two PSC servers have to be installed, with NSX load balancing them in active/standby mode. Active/Active mode is currently not supported by the PSC.

Because of the way SSO operates, it is not possible to configure it as active/active. The workaround for the NSX configuration is to use an application rule and to configure two different pools (with one PSC instance in each pool). The application rule sends all traffic to the first pool as long as that pool is up, and switches to the secondary pool if the first PSC is down.

The following is a representation of the NSX-v and PSC logical design.

RDecker PSC NSX

Procedure

Each step number refers to the above workflow diagram. You can take snapshots at regular intervals to be able to rollback in case of a problem.

Step 1: Deploy infrastructure

This first step consists of deploying the required vCenter infrastructure before starting the configuration.

A. For scenario A: Deploy two PSC nodes in the same SSO site.

B. For scenario B:

  1. Deploy a first standalone Platform Services Controller (PSC-00a). This PSC will be used temporarily during the configuration.
  2. Deploy a vCenter instance against the PSC-00a just deployed.
  3. Deploy NSX Manager and connect it to the vCenter.
  4. Deploy two other Platform Services Controllers in the same SSO domain (PSC-01a and PSC-02a) but in a new site. Note: vCenter will still be pointing to PSC-00a at this stage. Use the following options:
    RDecker PSC NSX Setup 1 / RDecker PSC NSX Setup 2

Step 2 (both scenarios): Configure both PSCs as an HA pair (up to step D in KB 2113315).

Now that all required external Platform Services Controller appliances are deployed, it’s time to configure high availability.

A. PSC pairing

  1. Download the PSC high availability configuration scripts from the Download vSphere page and extract the content to /ha on both the PSC-01a and PSC-02a nodes. Note: Use KB 2107727 to enable the Bash shell so that you can copy files to the appliances with SCP.
  2. Run the following command on the first PSC node:
    python gen-lb-cert.py --primary-node --lb-fqdn=load_balanced_fqdn --password=<yourpassword>

    Note: The load_balanced_fqdn parameter is the FQDN of the PSC virtual IP on the load balancer. If you don’t specify the --password option, the default password will be “changeme”.
    For example:

    python gen-lb-cert.py --primary-node --lb-fqdn=psc-vip.sddc.lab --password=brucewayneisbatman
  3. On the PSC-01a node, copy the content of the directory /etc/vmware-sso/keys to /ha/keys (a new directory that needs to be created).
  4. Copy the content of the /ha folder from the PSC-01a node to the /ha folder on the additional PSC-02a node (including the keys copied in the step before).
  5. Run the following command on the PSC-02a node:
python gen-lb-cert.py --secondary-node --lb-fqdn=load_balanced_fqdn --lb-cert-folder=/ha --sso-serversign-folder=/ha/keys

Note: The load_balanced_fqdn parameter is the FQDN of the load balancer address (or VIP).

For example:

python gen-lb-cert.py --secondary-node --lb-fqdn=psc-vip.sddc.lab --lb-cert-folder=/ha --sso-serversign-folder=/ha/keys

Note: If you’re following KB 2113315, don’t forget to stop the configuration here (end of section C in the KB).

Step 3: NSX configuration

An NSX edge device must be deployed and configured for networking in the same subnet as the PSC nodes, with at least one interface for configuring the virtual IP.

A. Importing certificates

Enter the configuration of the NSX edge services gateway on which to configure the load balancing service for the PSC, and add a new certificate in the Settings > Certificates menu (under the Manage tab). Use the content of the previously generated /ha/lb.crt file as the load balancer certificate and the content of the /ha/lb_rsa.key file as the private key.

RDecker PSC Certificate Setup
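
As an optional sanity check before importing, you can confirm that the certificate and key actually belong together. The following is a minimal sketch (run where the /ha files were generated) that compares the public-key moduli with openssl:

# Verify that /ha/lb.crt and /ha/lb_rsa.key match by comparing their moduli.
import subprocess

def openssl(args):
    """Run an openssl command and return its stdout."""
    return subprocess.run(["openssl"] + args, capture_output=True,
                          text=True, check=True).stdout.strip()

cert_modulus = openssl(["x509", "-noout", "-modulus", "-in", "/ha/lb.crt"])
key_modulus = openssl(["rsa", "-noout", "-modulus", "-in", "/ha/lb_rsa.key"])

print("certificate and key match" if cert_modulus == key_modulus
      else "MISMATCH - re-check the files before importing them")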

B. General configuration

Enable the load balancer service and logging under the global configuration menu of the load balancer tab.

RDecker PSC Web Client

C. Application profile creation

An application profile defines the behavior of a particular type of network traffic. Two application profiles have to be created: one for HTTPS protocol and one for other TCP protocols.

Parameters                    | HTTPS application profile | TCP application profile
Name                          | psc-https-profile         | psc-tcp-profile
Type                          | HTTPS                     | TCP
Enable Pool Side SSL          | Yes                       | N/A
Configure Service Certificate | Yes                       | N/A

Note: The other parameters shall be left with their default values.

RDecker PSC Edge

D. Creating pools

The NSX load balancer HTTP/HTTPS virtual server types provide web-protocol sanity checks against their backend server pools. We do not want that sanity check applied to the backend pool of the TCP virtual server. For that reason, different pools must be created for the PSC HTTPS virtual IP and the TCP virtual IP.

Four pools have to be created: two different pools for each virtual server (with one PSC instance per pool). An application rule will be defined to switch between them in case of a failure: traffic will be sent to the first pool as long as that pool is up, and will switch to the secondary pool if the first PSC is down.

Parameters   | Pool 1              | Pool 2              | Pool 3              | Pool 4
Name         | pool_psc-01a-http   | pool_psc-02a-http   | pool_psc-01a-tcp    | pool_psc-02a-tcp
Algorithm    | ROUND-ROBIN         | ROUND-ROBIN         | ROUND-ROBIN         | ROUND-ROBIN
Monitors     | default_tcp_monitor | default_tcp_monitor | default_tcp_monitor | default_tcp_monitor
Members      | psc-01a             | psc-02a             | psc-01a             | psc-02a
Monitor Port | 443                 | 443                 | 443                 | 443

Note: while you could use a custom HTTPS healthcheck, I selected the default TCP Monitor in this example.

RDecker PSC Edge 2 (Pools)

E. Creating application rules

This application rule will contain the logic that will perform the failover between the pools (for each virtual server) corresponding to the active/passive behavior of the PSC high availability mode. The ACL will check if the primary PSC is up; if the first pool is not up the rule will switch to the secondary pool.

The first application rule will be used by the HTTPS virtual server to switch between the corresponding pools for the HTTPS backend servers pool.

# Detect if pool "pool_psc-01a-http" is still UP
acl pool_psc-01a-http_down nbsrv(pool_psc-01a-http) eq 0
# Use pool "pool_psc-02a-http" if "pool_psc-01a-http" is dead
use_backend pool_psc-02a-http if pool_psc-01a-http_down

The second application rule will be used by the TCP virtual server to switch between the corresponding pools for the TCP backend servers pool.

# Detect if pool "pool_psc-01a-tcp" is still UP
acl pool_psc-01a-tcp_down nbsrv(pool_psc-01a-tcp) eq 0
# Use pool "pool_psc-02a-tcp" if "pool_psc-01a-tcp" is dead
use_backend pool_psc-02a-tcp if pool_psc-01a-tcp_down

RDecker PSC Edge 3 (app rules)

F. Configuring virtual servers

Two virtual servers have to be created: one for HTTPS protocol and one for the other TCP protocols.

Parameters          | HTTPS Virtual Server                           | TCP Virtual Server
Application Profile | psc-https-profile                              | psc-tcp-profile
Name                | psc-https-vip                                  | psc-tcp-vip
IP Address          | IP address corresponding to the PSC virtual IP | IP address corresponding to the PSC virtual IP
Protocol            | HTTPS                                          | TCP
Port                | 443                                            | 389,636,2012,2014,2020*
Default Pool        | pool_psc-01a-http                              | pool_psc-01a-tcp
Application Rules   | psc-failover-apprule-http                      | psc-failover-apprule-tcp

* Although this procedure is for a fresh install, you could target the same architecture with SSO 5.5 being upgraded to PSC. If you plan to upgrade from SSO 5.5 HA, you must add the legacy SSO port 7444 to the list of ports in the TCP virtual server.

RDecker PSC Edge 4 (VIP)
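
Once the virtual servers are created, a quick way to validate the configuration is to check that the VIP answers on every port listed above. Below is a minimal sketch, assuming the psc-vip.sddc.lab FQDN used in the earlier examples:

# Check that the PSC virtual IP answers on the HTTPS and TCP virtual server ports.
import socket

VIP = "psc-vip.sddc.lab"                  # PSC virtual IP FQDN (from the examples)
PORTS = [443, 389, 636, 2012, 2014, 2020]

for port in PORTS:
    try:
        with socket.create_connection((VIP, port), timeout=5):
            print(f"{VIP}:{port} reachable")
    except OSError as exc:
        print(f"{VIP}:{port} NOT reachable ({exc})")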

Step 4 (both scenarios)

Now it’s time to finish the PSC HA configuration (step E of KB 2113315). Update the endpoint URLs on PSC with the load_balanced_fqdn by running this command on the first PSC node.

python lstoolHA.py --hostname=psc_1_fqdn --lb-fqdn=load_balanced_fqdn --lb-cert-folder=/ha --user=Administrator@vsphere.local

Note: psc_1_fqdn is the FQDN of the first PSC-01a node and load_balanced_fqdn is the FQDN of the load balancer address (or VIP).

For example:

python lstoolHA.py --hostname=psc-01a.sddc.lab --lb-fqdn=psc-vip.sddc.lab --lb-cert-folder=/ha --user=Administrator@vsphere.local

Step 5

A. Scenario A: Deploy any new production vCenter Server or other components (such as vRA) against the PSC Virtual IP and enjoy!

B. Scenario B

Please note that scenario B is only supported with vSphere 6.0, as repointing a vCenter between sites in an SSO domain is no longer supported in vSphere 6.5 (KB 2131191).

The situation is the following: The vCenter is currently still pointing to the first external PSC instance (PSC-00a), and two other PSC instances are configured in HA mode, but are not used.

RDecker Common SSO Domain vSphere

Starting with vSphere 6.0 Update 1, it is possible to move a vCenter Server between SSO sites within a vSphere domain (see KB 2131191 for more information). In our situation, we have to re-point the existing vCenter that is currently connected to the external PSC-00a to the PSC virtual IP:

  1. Download and replace the cmsso-util file on your vCenter Server using the actions described in KB 2113911.
  2. Re-point the vCenter Server Appliance to the PSC virtual IP (in the final site) by running this command:
/bin/cmsso-util repoint --repoint-psc load_balanced_fqdn

Note: The load_balanced_fqdn parameter is the FQDN of the load balancer address (or VIP).

For example:

/bin/cmsso-util repoint --repoint-psc psc-vip.sddc.lab

Note: This command will also restart vCenter services.

  3. Move the vCenter services registration to the new SSO site. When a vCenter Server is installed, it creates service registrations that it uses to start the vCenter Server services. These service registrations are written to the specific Platform Services Controller (PSC) site that was used during the installation. Use the following command to update the vCenter Server service registrations (parameters will be asked for at the prompt).
/bin/cmsso-util move-services

After running the command, you end up with the following.

RDecker PSC Common SSO Domain vSphere 2

  4. Log in to your vCenter Server instance by using the vSphere Web Client to verify that the vCenter Server is up and running and can be managed.

RDecker PSC Web Client 2

In the context of scenario B, you can always re-point to the previous PSC-00a if you cannot log in or if you get an error message. When you have confirmed that everything is working, you can remove the temporary PSC (PSC-00a) from the SSO domain with this command (KB 2106736):

cmsso-util unregister --node-pnid psc-00a.sddc.lab --username administrator@vsphere.local --passwd VMware1!

Finally, you can safely decommission PSC-00a.

RDecker PSC Common SSO Domain vSphere 3

Note: If your NSX Manager was configured with Lookup Service, you can update it with the PSC virtual IP.



Romain Decker is a Senior Solutions Architect and a member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) portfolio – part of the Global Technical & Professional Solutions (GTPS) team.

User Environment Manager 8.7 working with Horizon 6.2

By Dale Carter

With the release of VMware User Environment Manager 8.7, VMware added a number of new features, all of which you will find in the VMware User Environment Manager Release Notes.

However, in this blog I would like to call out two new features that help when deploying User Environment Manager alongside VMware Horizon 6.2. In my opinion, VMware’s EUC teams did a great job adding or enhancing these two features to work with Horizon 6.2 in the latest releases.

Terminal Server Client IP Address or Terminal Server Client Name

The first feature, which has been enhanced to work with Horizon 6.2, is one I think will have a number of benefits: support for detecting client IP addresses and client names in Horizon View 6.2 and later. With this feature it is now possible to apply conditions based on the location of your physical device.

An example would be if a user connects to a virtual desktop or RDS host from their physical device in the corporate office, an application could be configured to map a drive to corporate data or configure a printer in the office. However, if the user connects to the same virtual desktop or RDS host from a physical device at home or on an untrusted network, and launches the same application, then the drive or printer may not be mapped to the application.

Another example would be to combine the Terminal Server Client IP Address or Terminal Server Client Name with a triggered task. This way you could connect/disconnect a different printer at login/logoff or disconnect/reconnect depending on where the user is connecting from.

To configure a mapped drive or printer that will be assigned when on a certain network, you would use the Terminal Server Client IP Address or Terminal Server Client Name condition as shown below.

DCarter Drive Mapping

If you choose to limit access via the physical client name, this can be done using a number of different options.

DCarter Terminal Server Client Name 1

On the other hand, if you choose to limit access via the IP address, you can use a range of addresses.

DCarter Terminal Server Client 2
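
User Environment Manager evaluates these conditions for you, but the underlying logic is easy to picture. The sketch below is purely illustrative (it is not UEM code): map a drive only when the physical client’s IP address falls inside a corporate range. The subnet, drive letter and share path are made-up examples.

# Illustration of the "Terminal Server Client IP Address" condition idea:
# only map the corporate drive when the physical client is on the office network.
import ipaddress
import subprocess

CORPORATE_RANGE = ipaddress.ip_network("10.10.0.0/16")   # example office subnet

def map_corporate_drive(client_ip: str) -> None:
    if ipaddress.ip_address(client_ip) in CORPORATE_RANGE:
        # Hypothetical drive letter and share, for illustration only.
        subprocess.run(["net", "use", "X:", r"\\fileserver\corpdata"], check=True)
    else:
        print("Untrusted network - corporate drive not mapped")

map_corporate_drive("10.10.42.7")    # office client: drive gets mapped
map_corporate_drive("203.0.113.50")  # home/untrusted client: no mapping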

Detect PCoIP and Blast Connections

The second great new feature is the ability to detect if the user is connecting to the virtual desktop or RDS host via a PCoIP or Blast connection.

The Remote Display Protocol setting was already in the User Environment Manager, but as you can see below it now includes the Blast and PCoIP protocols.

DCarter Remote Display Protocol
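
Outside of UEM, you can see the protocol value this condition works from by reading the volatile environment data the Horizon agent writes into the user session. The registry key and value names below are stated as an assumption; verify them on your agent build.

# Read the display protocol reported inside the Horizon session (Windows guest).
# Key/value names are an assumption for illustration - confirm on your build.
import winreg

def view_client_protocol() -> str:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Volatile Environment") as key:
        value, _ = winreg.QueryValueEx(key, "ViewClient_Protocol")
        return value  # typically "PCOIP" or "BLAST"

print(view_client_protocol())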

 

This feature has many uses, one of which could be to limit what icons a user sees when using a specific protocol.

For example, you might only allow users to connect to their virtual desktops or RDS hosts remotely using the Blast protocol, while on the corporate network they use PCoIP. You could then limit applications that have access to sensitive data so that they only show in the Start menu or on the desktop when the user is connected via PCoIP.

Of course you could also use the Terminal Server Client IP Address or Terminal Server Client Name to limit the user from seeing an application based on their physical IP address or physical name.

The examples in this blog are just a small number of uses for these great new and enhanced features, and I would encourage everyone to download User Environment Manager 8.7 and Horizon 6.2 to see how they can help in your environment.


Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. He focuses on the End-User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years of experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy

Improving Internal Data Center Security with NSX-PANW Integration

By Dharma Rajan

Today’s data center (DC) typically has one or more firewalls at the perimeter, securing it with a strong defense and preventing threats from entering the DC. However, applications and their associated content can easily bypass a port-based firewall using a variety of techniques. If a threat does enter, the attack surface area is large. Low-priority systems are often the target, as activity on those may not be monitored. Within the DC, more and more workloads are being virtualized, so East-West traffic between virtual machines has increased substantially compared to North-South traffic.

Threats such as data-stealing malware, web threats, spam, Trojans, worms, viruses, spyware, and bots can spread fast and cause serious damage once they enter. For example, dormant virtual machines can be a risk when they are powered back up because they may not have been receiving patches or anti-malware updates, making them vulnerable to security threats. When an attack happens it can move quickly and compromise critical systems, which must be prevented. In many cases the attack goes unnoticed until an event triggers investigation, by which time valuable data may have been compromised or lost.

Thus it is critical that proper internal controls and security measures are applied at the virtual machine level to reduce the attack surface within the data center. So how do we do that, and evolve the traditional data center into a more secure environment that overcomes today’s data center challenges without additional costly hardware?

Traditional Model for Data Center Network Security

In the traditional model, we base the network architecture on a combination of perimeter-level security and Layer 2 VLAN segmentation. This model worked, but as we virtualize more and more workloads and the data center grows, we hit the boundaries of VLANs: VLAN sprawl, and an ever-increasing number of firewall rules that need to be created and managed. The 12-bit VLAN ID defined by IEEE 802.1Q limits the number of VLANs that can be provisioned to 4,094. All this adds complexity to the traditional network architecture model of the data center. Other key challenges customers run into in production data centers are too many firewall (FW) rules to create, poor documentation, and the fear of deleting FW rules when a virtual machine is deleted. Thus flexibility is lost, and holes remain for attackers to use as entry points.

Once security is compromised at one VLAN level, the spread across the network—be it Engineering VLAN, Finance VLAN, etc.—does not take very long. So the key is not just how to avoid attacks, but also—if one occurs—how to contain the spread of an attack.

DRajan Before and After Attack

Reducing Attack Surface Area

The first thing that might come to one’s mind is, “How do we prevent and isolate the spread of an attack if one occurs?” We start to look at this by keeping an eye on certain characteristics that make up today’s data centers – which are becoming more and more virtualized. With a high degree of virtualization and increased East-West data center traffic, we need certain dynamic ways to identify, isolate, and prevent attacks, and also automated ways to create FW rules and tighten security at the virtual machine level. This is what leads us to VMware NSX—VMware’s network virtualization platform—which provides the virtual infrastructure security, by way of micro-segmenting, today’s data center environments need.

Micro-Segmentation Principle

As a first step let’s take a brief look at the NSX platform and its components:

DRajan NSX Switch

In the data plane, the NSX vSwitch is based on the vSphere Distributed Switch (vDS), with firewall hypervisor extension modules that run at the kernel level and provide Distributed Firewall (DFW) functionality at line-rate speed and performance.

The NSX Edge provides edge/perimeter firewalling functionality on the Internet-facing side. The NSX Controller is the control-plane component, deployed as a cluster for high availability. The NSX Manager is the management-plane component that communicates with the vCenter infrastructure.

By micro-segmenting and applying firewall rules at the virtual machine level, we control traffic flow on the egress side, avoiding multiple hops and hairpinning because the traffic does not have to traverse a physical firewall to be validated. We also get good visibility of the traffic, making it easier to monitor and secure the virtual machine.

Micro-segmentation is based on a simple starting principle: assume everything is a threat and act accordingly. This is the “zero trust” model; entities that need access to resources must prove they are legitimate to gain access to the identified resource.
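
To make the zero-trust idea concrete, here is a tiny conceptual sketch (not the NSX DFW rule engine): traffic is denied unless an explicit allow rule matches. The tier names and ports are invented for illustration.

# Conceptual "zero trust / deny by default" evaluation: allow only what is
# explicitly permitted, deny everything else.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_group: str
    dst_group: str
    port: int

ALLOW_RULES = {
    Rule("web-tier", "app-tier", 8443),   # web servers may reach the app tier
    Rule("app-tier", "db-tier", 3306),    # app servers may reach the database
}

def allowed(src_group: str, dst_group: str, port: int) -> bool:
    """Return True only when an explicit allow rule matches."""
    return Rule(src_group, dst_group, port) in ALLOW_RULES

print(allowed("web-tier", "app-tier", 8443))  # True  - explicitly allowed
print(allowed("web-tier", "db-tier", 3306))   # False - denied by default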

With a zero trust baseline assumption—which can be “deny by default” —we start to relax and apply certain design principles that enable us to build a cohesive yet scalable architecture that can be controlled and managed well. Thus we define three key design principles.

1) Isolation and segmentation – Isolation is the foundation of most network security, whether for compliance, containment or simply keeping development, test and production environments from interacting. Segmentation from a firewalling point of view refers to micro-segmentation on a single Layer 2 segment using DFW rules.

2) Unit-level trust/least privilege – This means providing access at a granular level, only as needed for that user, be it at the virtual machine level or to something within the virtual machine.

3) Ubiquity and centralized control – This enables control and monitoring of activity by using the NSX Controller, which provides a centralized control plane, together with the NSX Manager and the cloud management platforms that provide integrated management.

Using the above principles, we can lay out an architecture for any greenfield or brownfield data center environment that will help us micro-segment the network in a manner that is architecturally sound, flexible to adapt, and that enables safe application enablement with the ability to integrate advanced services.

DRajan Micro Segmentation

 

Dynamic Traffic Steering

Network security teams are often challenged to coordinate network security services from multiple vendors in relationship to each other. Another powerful benefit of the NSX approach is its ability to build security policies that leverage NSX service insertion, with dynamic service chaining and traffic steering to drive service execution in the logical services pipeline. Execution can be based on the result of other services, which makes it possible to coordinate otherwise completely unrelated network security services from multiple vendors. For example, we can introduce advanced service chaining where, at a specific layer, we direct specific traffic to a Palo Alto Networks (PANW) VM-series virtual firewall for scanning, threat identification, and any necessary action, such as quarantining an application if required.

Palo Alto Networks VM-series Firewalls Integration with NSX

The Palo Alto Networks next-generation firewall integrates with VMware NSX at the ESXi server level to provide comprehensive visibility and safe application enablement of all data center traffic including intra-host virtual machine communications. Panorama is the centralized management tool for the VM-series firewalls. Panorama works with the NSX Manager to deploy the license and centrally administer configuration and policies on the VM-series firewall.

The first step of integration is for Panorama to register the VM-series firewall on the NSX manager. This allows the NSX Manager to deploy the VM-series firewall on each ESXi host in the ESXi cluster. The integration with the NetX API makes it possible to automate the process of installing the VM-series firewall directly on the ESXi hypervisor, and allows the hypervisor to forward traffic to the VM-series firewall without using the vSwitch configuration. It therefore requires no change to the virtual network topology.

DRajan Panorama Registration with NSX

To redirect traffic, the NSX Service Composer is used to create security groups and define network introspection rules that specify which guest traffic is steered to the VM-series firewall. For traffic that needs to be inspected and secured by the VM-series firewall, the NSX Service Composer policies redirect it to the Palo Alto Networks Next-Generation Firewall (NGFW) service. This traffic is then steered to the VM-series firewall and processed by it before it goes to the virtual switch.

Traffic that does not need to be inspected by the VM-series firewall, for example, network data backup or traffic to an internal domain controller, does not need to be redirected to the VM-series firewall and can be sent to the virtual switch for onward processing.

The NSX Manager sends real-time updates on the changes in the virtual environment to Panorama. The firewall rules are centrally defined and managed on Panorama and pushed to the VM-series firewalls. The VM-series firewall enforces security policies by matching source or destination IP addresses. The use of Dynamic Address Groups allows the firewall to populate members of the Dynamic Address Groups in real time, and forwards the traffic to the filters on the NSX firewall.
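
The value of Dynamic Address Groups is that a policy is written against a tag, while the tag’s membership is resolved at enforcement time. The sketch below is purely conceptual (not Panorama or PAN-OS code) and uses invented tags and addresses to show how a rule “follows” a virtual machine as updates arrive from the virtual environment.

# Conceptual sketch: a rule references a tag; the tag's IP membership is kept
# current from real-time environment updates, so the rule itself never changes.
tag_members = {"web-tier": {"10.0.1.11", "10.0.1.12"}, "quarantine": set()}

def on_vm_update(tag: str, ip: str, present: bool) -> None:
    """Apply a membership change pushed from the virtual environment."""
    members = tag_members.setdefault(tag, set())
    if present:
        members.add(ip)
    else:
        members.discard(ip)

def rule_matches(rule_tag: str, src_ip: str) -> bool:
    """A tag-based rule matches whatever IPs belong to the tag right now."""
    return src_ip in tag_members.get(rule_tag, set())

on_vm_update("quarantine", "10.0.1.12", True)    # a compromised VM gets tagged
print(rule_matches("quarantine", "10.0.1.12"))   # True - policy follows the VM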

Integrated Solution Benefits

Better security – Micro-segmentation reduces the attack surface. It enables safe application enablement and protection against known and unknown threats in virtual and cloud environments. The integration also makes it easier and faster to identify and isolate compromised applications.

Simplified deployment and faster secure service enablement – When a new ESXi host is added to a cluster, a new VM-series firewall is automatically deployed, provisioned and available for immediate policy enforcement without any manual intervention.

Operational flexibility – The automated workflow allows you to keep pace with VM deployments in your data center. The hypervisor mode on the firewall removes the need to reconfigure the ports/vSwitches/network topology; because each ESXi host has an instance of the firewall, traffic does not need to traverse the network for inspection and consistent enforcement of policies.

Selective traffic redirection – Only traffic that needs inspection by VM-series firewall needs redirection.

Dynamic security enforcement – The Dynamic Address Groups maintain awareness of changes in the virtual machines/applications and ensure that security policies stay in tandem with changes in the network.

Accelerated deployments of business-critical applications – Enterprises can provision security services faster and utilize capacity of cloud infrastructures, and this makes it more efficient to deploy, move and scale their applications without worrying about security.

For more information on NSX visit: http://www.vmware.com/products/nsx/

For more information on VMware Professional Services visit: http://www.vmware.com/consulting/


Dharma Rajan is a Solution Architect in the Professional Services Organization specializing in pre-sales for SDDC and driving NSX technology solutions to the field. His experience spans Enterprise and Carrier Networks. He holds an MS degree in Computer Engineering from NCSU and an M.Tech degree in CAD from IIT.

How NSX Simplifies and Enables True Disaster Recovery with Site Recovery Manager

By Dharma Rajan

VMware Network Virtualization Platform (NSX) is the network virtualization platform for the software-defined datacenter (SDDC). Network virtualization using VMware NSX enables virtual networks to be created as software entities, saved and restored, and deleted on demand without requiring any reconfiguration of the physical network. Logical network entities like logical switch, logical routers, security objects, logical load balancers, distributed firewall rules and service composer rules are created as part of virtualizing the network.

To provide continuity of service from disaster recovery (DR), datacenters are built with capabilities for replicating and recovering workloads between protected and recovery sites. VMware Site Recovery Manager (SRM) helps to fully automate the recovery process.

From a DR standpoint, the recovery site has to be in sync with the protected site at all times from a compute, storage and networking point of view to enable seamless, fast recovery when the protected site fails due to a disaster. When using SRM today for DR, there are a couple of challenges customers face. From a compute perspective, one needs to prepare the hosts at the recovery site, pre-allocate compute capacity for placeholder virtual machines, and create the placeholder virtual machines themselves.

From a storage standpoint, the storage for protected applications/virtual machines needs to be replicated and kept in sync. Both of these steps are straightforward and are handled by SRM with vSphere- and/or array-based replication. The challenge today is the networking piece of the puzzle. As illustrated below, depending on the type of networking established between the protected and recovery sites, various networking changes (carving out Layer 2, Layer 3, firewall and load balancer policies in the recovery site, re-mapping networks if IP address spaces overlap, recreating policies, and so on) may have to be done manually to ensure smooth recovery. This adds a lot of time, is subject to human error, and makes it hard to meet internal and external SLAs. The result is that the network becomes the bottleneck that prevents seamless disaster recovery. From a business perspective this can easily translate into millions of dollars in losses, depending on the criticality of the workloads/services impacted.

DRajan 1

Why Are We Running into the Networking Challenge?

The traditional DR solution is tied tightly to physical infrastructure (physical routers, switches, firewalls, load balancers). The security domains of the protected and recovery sites are completely separate. As networking changes are made at the protected site (adds, deletes, or updates to IP addresses, Layer 2 extensions, subnets, and so on), no corresponding automated synchronization happens at the recovery site. Thus one may have to do Layer 2 extension to preserve the changes, create and maintain special scripts, manage the tools, and perform manual DR setup and recovery steps across different infrastructure layers and vendors (physical and virtual). From a process standpoint it requires coordination across various teams within your company, good bookkeeping and periodic validation, so you are always ready to address a DR scenario as quickly as you can.

What is the Solution?

VMware NSX from release 6.2 offers a solution that enables customers to address the above-cited networking challenges. NSX is the network virtualization platform for the SDDC. NSX provides the basic foundation to virtualize networking components in the form of logical switching, distributed logical router, distributed logical firewall, logical load balancer, and logical edge gateways. For a deeper understanding of NSX see more at: http://www.vmware.com/products/nsx

NSX 6.2 release has been integrated with SRM 6.1 to enable automated replication of networking entities between protected and recovery sites.

DRajan 2

How Does the Solution Work?

NSX 6.2 supports a couple of key concepts that will intelligently understand that it is logically the same network on both sites. These concepts include:

  a) “Universal Logical Switches” (ULS) – This allows for the creation of Layer-2 networks that span vCenter boundaries. This means that when utilizing ULS with NSX there will be a virtual port group at both the protected and recovery site that connect to the same Layer-2 network. When virtual machines are connected to port groups that are backed by a ULS, SRM implicitly creates a network mapping, without requiring the admin to configure it. This provides seamless network services portability and synchronization, and automatically reconnects virtual machines connected to a ULS to the same logical switch on the other vCenter.

DRajan 3

NSX 6.2 ULS Integration with SRM 6.1 Automatic Network Mapping

  b) Cross-vCenter Networking and Security enables key use cases such as:
  • Resource pooling, virtual machine mobility, multi-site and disaster recovery
  • Cross-vCenter NSX eliminates the need for guest customization of IP addresses and management of portgroup mappings, two large SRM pain points today
  • Centralized management of universal objects, reducing administration effort
  • Increased mobility of workloads; virtual machines can be “vMotioned” across vCenter Servers without having to reconfigure the virtual machine or making changes to firewall rules

The deployment process would ideally be to:

  • Configure Master NSX Manager at primary site and Secondary NSX Manager at recovery site
  • Configure Universal Distributed Logical Router between primary and secondary site
  • Deploy Universal Logical Switch between primary and recovery site and connect it to Universal Distributed Logical Router
  • Deploy the VRO plugin for automation and monitoring
  • Finally map SRM network resources between primary and recovery sites

Supported Use Cases and Deployment Architectures

The primary use cases are full site disaster recovery scenarios or unplanned outage where the primary site can go down due to a disaster and secondary site takes immediate control and enables business continuity. The other key use case is planned datacenter migration scenarios where one could migrate workloads from one site to another maintaining the underlying networking and security profiles. The main difference between the two use cases is the frequency of the synchronization runs. In a datacenter migration use case you can take one datacenter running NSX and reproduce the entire networking configuration on the DR side in a single run of the synchronization workflow or run it once initially and then a second time to incrementally update the NSX objects before cutover.

DRajan 4

Other supported use cases include partial site outages, preventive failover, or when you anticipate a potential datacenter outage, for example, impending events like hurricanes, floods, forced evacuation, etc.

The standard 1:1 deployment model with one site as primary and another as secondary is the most common deployed model. In a shared recovery site configuration, like for branch offices, you install one SRM server instance and NSX on each protected site. On the recovery site, you install multiple SRM Server instances to pair with each SRM server instance on the protected sites. All of the SRM server instances on the shared recovery site connect to the same vCenter server and NSX instance. You can consider the owner of an SRM server pair to be a customer of the shared recovery site. You can use either array-based replication or vSphere replication or a combination of both when you configure an SRM server to use a shared recovery site.

DRajan 5

Logical Disaster Recovery Architecture Using NSX Universal Objects

What Deployment Architecture Will the Solution Support?

This solution applies to all Greenfield and Brownfield environments. The solution will need the infrastructure to be base-lined to vCenter 6.0 or later, ESXi 6.0 or later, vSphere Distributed switch, SRM 6.0 or later with NSX 6.2 or later.

SRM can be used for different failover scenarios. It could be Active-Active, Active-Passive, Bidirectional, and Shared Recovery.

Integrated Solution Advantages

Automating the disaster recovery planning, maintenance and testing process becomes much simpler, enabling significant operational efficiencies.

  • The ability to create a network that spans vCenter boundaries creates a cross-site Layer-2 network, which means that after failover, it is no longer necessary to re-configure IP addresses. Not having to re-IP recovered virtual machines can further reduce recovery time by up to 40 percent.
  • There is more automation with networking and security objects. Logical switching, logical routing, security policies (such as security groups), firewall settings and edge configurations are also preserved on recovered virtual machines, further decreasing the need for manual configurations post-recovery.
  • Creating an isolated test network with the same capabilities as the production environment becomes much easier.

In conclusion, the integration of NSX and SRM greatly simplifies operations, lowers operational expenses, increases testing capabilities and reduces recovery times.

For more information on NSX visit: http://www.vmware.com/products/nsx/

For more information on SRM visit: http://www.vmware.com/products/site-recovery-manager/

For more information on VMware Professional Services visit: http://www.vmware.com/consulting/

 


About the Author:

Dharma Rajan is a Solution Architect in the Professional Services Organization specializing in pre-sales for SDDC and driving NSX technology solutions to the field. His experience spans Enterprise and Carrier Networks. He holds an MS degree in Computer Engineering from NCSU and an M.Tech degree in CAD from IIT.