Home > Blogs > VMware Consulting Blog

Horizon View 6.2 and Blackscreens

By Jeremy Wheeler

With the release of Horizon View 6.2 and vSphere 6.0 Update 1a come new features – but also possible new issues. If you have an environment running Horizon 6.2 on anything below vSphere 6.0 Update 1, you might encounter issues with your VDI desktops. VMware introduced a new video driver (version 6.23) in View 6.2 that greatly improves speed and quality, but to utilize it fully you need to be on the latest vSphere bits. Customers who have not upgraded to the latest bits have reported VDI desktops black-screening and disconnecting. One fix for affected images is to upgrade/replace the video driver inside the Guest OS of the Gold Image.

To uninstall the old video driver inside your Gold Image Guest OS follow these steps:

  1. Uninstall the View Agent
  2. Delete Video Drivers from Windows Device Manager
    • Expand Device Manager and Display Adapters
    • Right-click on the VMware SVGA 3D driver and select Uninstall
      JWheeler Uninstall
    • Select the checkbox ‘Delete the driver software for this device.’
      JWheeler Confirm Device Uninstall
  3. Reboot and let Windows rescan
  4. Verify that Windows is using its bare-bones SVGA driver (if not, delete the driver again)
  5. Install View Agent 6.2

Note: Do NOT update VMware Tools, or you will have to repeat this sequence unless you also upgrade the View Agent.

Optional Steps:

If you want to update the video driver without re-installing the View Agent, follow these steps:

  1. Launch View Agent 6.2 installer MSI (only launch the installer, do not proceed through the wizard!)
  2. Change to the %temp% folder and sort the contents by date/time
  3. Look for the most recent long folder name, for example:
    JWheeler Temp File Folder
  4. Change into the directory and look for the file ‘VmVideo.cab’
    JWheeler VmVideo
  5. Copy the ‘VmVideo.cab’ file to a temp folder (e.g., C:\Temp)
  6. Extract all files from the VmVideo.cab file. You should see something like this:
    JWheeler Local Temp File
  7. You can use the following syntax for extraction:
    – extract /e /a /l <destination> <drive>:\<cabinetname>
    Reference Microsoft KB 132913 for additional information.
  8. You need to rename each file, removing the prefix ‘_’ and anything after the extension of the filename. Example:
    JWheeler Local Disk Temp Folder 2
  9. Install View Agent 6.2 video drivers:
    1. Once rebooted, open Device Manager and expand ‘Display Adapters’
    2. Right-click on the ‘Microsoft Basic Display Adapter’ and click ‘Update Driver Software’
    3. Select ‘Browse my computer for driver software’
    4. Select ‘Browse’ and point to the temp folder where you expanded and renamed all the View 6.2 drivers
    5. Select ‘Next’ and complete the video driver installation.
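The renaming in step 8 lends itself to a small script. The following Python sketch assumes the extracted files follow an ‘_name.ext_suffix’ pattern as the screenshots suggest; adjust the folder and pattern to what you actually see:

```python
# Sketch only: rename files extracted from VmVideo.cab (step 8) by stripping
# the leading '_' and the trailing '_suffix' after the real extension.
# The '_name.ext_suffix' pattern is an assumption based on the screenshots.
import os
import re

def cleaned_name(name):
    name = name.lstrip('_')                      # drop the '_' prefix
    m = re.match(r'(.+?\.[A-Za-z0-9]+)', name)   # keep up to the extension
    return m.group(1) if m else name

def rename_extracted(folder):
    for fname in os.listdir(folder):
        new = cleaned_name(fname)
        if new != fname:
            os.rename(os.path.join(folder, fname), os.path.join(folder, new))
```

For example, cleaned_name('_vm3d.sys_00000') yields 'vm3d.sys'.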

After re-installing the View Agent and/or replacing the video drivers, you will need to do the following:

  1. Power-down the Gold Image (execute any power-down scripts or tasks as you normally do)
  2. Snapshot the VM
  3. Modify the View pool to point to the new snapshot
  4. Execute a recompose

Special thanks to Matt Mabis (@VDI_Tech_Guy) on discovering this fix.

Jeremy Wheeler, Consulting Architect with the VMware End-User Computing Professional Services team, created this paper.

VMware would like to acknowledge the following people for their contributions to this document:

  • Devon Cassidy, Technical Support Engineer End User Computing, Global Tech Lead, VMware
  • Pim van de Vis, IT Infrastructure Architect, VMware

Configuring NSX-v Load Balancer for use with vSphere Platform Services Controller (PSC) 6.0

By Romain Decker

VMware introduced a new component with vSphere 6, the Platform Services Controller (PSC). Coupled with vCenter, the PSC provides several core services, such as Certificate Authority, License service and Single Sign-On (SSO).

Multiple external PSCs can be deployed, serving one (or more) services, such as vCenter Server, Site Recovery Manager or vRealize Automation. When deploying the Platform Services Controller for multiple services, availability of the Platform Services Controller must be considered. In some cases, having more than one PSC deployed in a highly available architecture is recommended. When configured in high availability (HA) mode, the PSC instances replicate state information between each other, and the external products (vCenter Server, for example) interact with the PSCs through a load balancer.

This post covers the configuration of an HA PSC deployment with the benefits of using NSX-v 6.2 load balancing feature.

Due to the relationship between vCenter Server and NSX Manager, two different scenarios emerge:

  • Scenario A, where both PSC nodes are deployed from an existing management vCenter. In this situation, the management vCenter is coupled with NSX, which will configure the Edge load balancer. There are no dependencies between the vCenter Server(s) that will use the PSC in HA mode and NSX itself.
  • Scenario B, where there is no existing vCenter infrastructure (and thus no existing NSX deployment) when the first PSC is deployed. This is a classic “chicken and egg” situation, as the NSX Manager that is responsible for load balancing the PSC in HA mode is also connected to the vCenter Server that uses the PSC virtual IP.

While scenario A is straightforward, you need to respect a specific order in scenario B to prevent any loss of connection to the Web Client during the procedure. The solution is to deploy a temporary PSC in a temporary SSO site to do the load balancer configuration, and to re-point the vCenter Server to the PSC virtual IP at the end. Both paths are summarized in the workflow below.

RDecker PSC Map


NSX Edge supports two deployment modes: one-arm mode and inline mode (also referred to as transparent mode). While inline mode is possible, the NSX load balancer will be deployed in one-arm mode in our situation, as this model is more flexible and because we don’t require full visibility into the original client IP address.

Description of the environment:

  • Software versions: VMware vCenter Server 6.0 U1 Appliance, ESXi 6.0 U1, NSX-v 6.2
  • NSX Edge Services Gateway in one-arm mode
  • Active/Passive configuration
  • VLAN-backed portgroup (distributed portgroup on DVS)
  • General PSC/vCenter and NSX prerequisites validated (NTP, DNS, resources, etc.)

To offer SSO in HA mode, two PSC servers have to be installed, with NSX load balancing them in active/standby mode. Active/Active mode is currently not supported by PSC.

The way SSO operates, it is not possible to configure it as active/active. The workaround for the NSX configuration is to use an application rule and to configure two different pools (with one PSC instance in each pool). The application rule will send all traffic to the first pool as long as the pool is up, and will switch to the secondary pool if the first PSC is down.
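That pool-selection logic is easy to state precisely. Here is a minimal Python sketch of it (a plain illustration of the behavior, not NSX configuration):

```python
# Plain-Python sketch of the failover logic described above: traffic goes to
# the primary pool while it has at least one live member (HAProxy nbsrv > 0),
# and only fails over to the secondary pool when the primary is down.
def choose_pool(primary_live_members,
                primary="pool_psc-01a", backup="pool_psc-02a"):
    if primary_live_members == 0:   # acl: nbsrv(pool_psc-01a) eq 0
        return backup               # use_backend pool_psc-02a
    return primary
```

With one live PSC in the primary pool, choose_pool(1) returns "pool_psc-01a"; with none, traffic shifts to "pool_psc-02a".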

The following is a representation of the NSX-v and PSC logical design.



Each step number refers to the workflow diagram above. You can take snapshots at regular intervals to be able to roll back in case of a problem.

Step 1: Deploy infrastructure

This first step consists of deploying the required vCenter infrastructure before starting the configuration.

A. For scenario A: Deploy two PSC nodes in the same SSO site.

B. For scenario B:

  1. Deploy a first standalone Platform Services Controller (PSC-00a). This PSC will be used temporarily during the configuration.
  2. Deploy a vCenter instance against the PSC-00a just deployed.
  3. Deploy NSX Manager and connect it to the vCenter.
  4. Deploy two other Platform Services Controllers in the same SSO domain (PSC-01a and PSC-02a) but in a new site. Note: vCenter will still be pointing to PSC-00a at this stage. Use the following options:
    RDecker PSC NSX Setup 1RDecker PSC NSX Setup 2

Step 2 (both scenarios): Configure both PSCs as an HA pair (up to step D in KB 2113315).

Now that all required external Platform Services Controller appliances are deployed, it’s time to configure high availability.

A. PSC pairing

  1. Download the PSC high availability configuration scripts from the Download vSphere page and extract the content to /ha on both the PSC-01a and PSC-02a nodes. Note: Use KB 2107727 to enable the Bash shell so you can copy files via SCP into the appliances.
  2. Run the following command on the first PSC node:
    python gen-lb-cert.py --primary-node --lb-fqdn=load_balanced_fqdn --password=<yourpassword>

    Note: The load_balanced_fqdn parameter is the FQDN of the PSC virtual IP on the load balancer. If you don’t specify the --password option, the default password will be “changeme”.
    For example:

    python gen-lb-cert.py --primary-node --lb-fqdn=psc-vip.sddc.lab --password=brucewayneisbatman
  3. On the PSC-01a node, copy the content of the directory /etc/vmware-sso/keys to /ha/keys (a new directory that needs to be created).
  4. Copy the content of the /ha folder from the PSC-01a node to the /ha folder on the additional PSC-02a node (including the keys copied in the step before).
  5. Run the following command on the PSC-02a node:
    python gen-lb-cert.py --secondary-node --lb-fqdn=load_balanced_fqdn --lb-cert-folder=/ha --sso-serversign-folder=/ha/keys

Note: The load_balanced_fqdn parameter is the FQDN of the load balancer address (or VIP).

For example:

python gen-lb-cert.py --secondary-node --lb-fqdn=psc-vip.sddc.lab --lb-cert-folder=/ha --sso-serversign-folder=/ha/keys

Note: If you’re following KB 2113315, don’t forget to stop the configuration here (end of section C in the KB).

Step 3: NSX configuration

An NSX edge device must be deployed and configured for networking in the same subnet as the PSC nodes, with at least one interface for configuring the virtual IP.

A. Importing certificates

Enter the configuration of the NSX edge services gateway on which to configure the load balancing service for the PSC, and add a new certificate in the Settings > Certificates menu (under the Manage tab). Use the content of the previously generated /ha/lb.crt file as the load balancer certificate and the content of the /ha/lb_rsa.key file as the private key.

RDecker PSC Certificate Setup

B. General configuration

Enable the load balancer service and logging under the global configuration menu of the load balancer tab.

RDecker PSC Web Client

C. Application profile creation

An application profile defines the behavior of a particular type of network traffic. Two application profiles have to be created: one for HTTPS protocol and one for other TCP protocols.

Parameter                       HTTPS application profile   TCP application profile
Name                            psc-https-profile           psc-tcp-profile
Enable Pool Side SSL            Yes                         N/A
Configure Service Certificate   Yes                         N/A

Note: The other parameters shall be left with their default values.

RDecker PSC Edge

D. Creating pools

Two pools have to be created, one per PSC node. While you could use a custom HTTPS health check, I selected the default TCP monitor in this example.

Parameter      Pool 1               Pool 2
Name           pool_psc-01a         pool_psc-02a
Monitors       default_tcp_monitor  default_tcp_monitor
Members        psc-01a              psc-02a
Monitor Port   443                  443

RDecker PSC Edge 2

E. Creating application rules

This application rule contains the logic that performs the failover between the two pools (each containing one PSC node), matching the active/passive behavior of the PSC high availability mode. The ACL checks whether the primary pool is up; if it is not, the rule switches traffic to the secondary pool.

# Detect if pool "pool_psc-01a" is still UP
acl pool_psc-01a_down nbsrv(pool_psc-01a) eq 0
# Use pool "pool_psc-02a" if "pool_psc-01a" is dead
use_backend pool_psc-02a if pool_psc-01a_down

RDecker PSC Edge 3

F. Configuring virtual servers

Two virtual servers have to be created: one for HTTPS protocol and one for the other TCP protocols.

Parameter            HTTPS Virtual Server   TCP Virtual Server
Application Profile  psc-https-profile      psc-tcp-profile
Name                 psc-https-vip          psc-tcp-vip
IP Address           IP address corresponding to the PSC virtual IP (both)
Protocol             HTTPS                  TCP
Port                 443                    389,636,2012,2014,2020*
Default Pool         pool_psc-01a (both)
Application Rules    psc-failover-apprule (both)

* Although this procedure is for a fresh install, you could target the same architecture with SSO 5.5 being upgraded to PSC. If you plan to upgrade from SSO 5.5 HA, you must add the legacy SSO port 7444 to the list of ports in the TCP virtual server.

RDecker PSC Edge 4

Step 4 (both scenarios)

Now it’s time to finish the PSC HA configuration (step E of KB 2113315). Update the endpoint URLs on PSC with the load_balanced_fqdn by running this command on the first PSC node.

python lstoolHA.py --hostname=psc_1_fqdn --lb-fqdn=load_balanced_fqdn --lb-cert-folder=/ha --user=Administrator@vsphere.local

Note: psc_1_fqdn is the FQDN of the first PSC-01a node and load_balanced_fqdn is the FQDN of the load balancer address (or VIP).

For example:

python lstoolHA.py --hostname=psc-01a.sddc.lab --lb-fqdn=psc-vip.sddc.lab --lb-cert-folder=/ha --user=Administrator@vsphere.local

Step 5

A. Scenario A: Deploy any new production vCenter Server or other components (such as vRA) against the PSC Virtual IP and enjoy!

B. Scenario B

The situation is the following: The vCenter is currently still pointing to the first external PSC instance (PSC-00a), and two other PSC instances are configured in HA mode, but are not used.

RDecker Common SSO Domain vSphere

Starting with vSphere 6.0 Update 1, it is possible to move a vCenter Server between SSO sites within a vSphere domain (see KB 2131191 for more information). In our situation, we have to re-point the existing vCenter, currently connected to the external PSC-00a, to the PSC virtual IP:

  1. Download and replace the cmsso-util file on your vCenter Server following the actions described in KB 2113911.
  2. Re-point the vCenter Server Appliance to the PSC virtual IP (in the final site) by running this command:
/bin/cmsso-util repoint --repoint-psc load_balanced_fqdn

Note: The load_balanced_fqdn parameter is the FQDN of the load balancer address (or VIP).

For example:

/bin/cmsso-util repoint --repoint-psc psc-vip.sddc.lab

Note: This command will also restart vCenter services.

  3. Move the vCenter services registration to the new SSO site. When a vCenter Server is installed, it creates service registrations that it issues to start the vCenter Server services. These service registrations are written to the specific site of the Platform Services Controller (PSC) that was used during the installation. Use the following command to update the vCenter Server services registrations (parameters will be asked for at the prompt):
/bin/cmsso-util move-services

After the command, you end up with the following.

RDecker PSC Common SSO Domain vSphere 2

  4. Log in to your vCenter Server instance by using the vSphere Web Client to verify that vCenter Server is up and running and can be managed.

RDecker PSC Web Client 2

In the context of scenario B, you can always re-point to the previous PSC-00a if you cannot log in, or if you get an error message. When you have confirmed that everything is working, you can delete the temporary PSC (PSC-00a).
However, before doing that, the PSC replication agreements between nodes have to be updated to remove PSC-00a from the PSC topology (following KB 2127057).

First, confirm the current partnerships from the PSC-01a node by running this command:

/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartners -h psc-01a.sddc.lab -u administrator -w VMware1!

The result should be similar to:


We can see that PSC-01A has two replication agreements, including one to the temporary instance (PSC-00a). Use the following command to remove the partnership between PSC-01a and PSC-00a:

/usr/lib/vmware-vmdir/bin/vdcrepadmin -f removeagreement -2 -h psc-01a.sddc.lab -H psc-00a.sddc.lab -u administrator -w VMware1!

The temporary PSC can now be safely removed from the SSO domain with this command (KB 2106736​):

cmsso-util unregister --node-pnid psc-00a.sddc.lab --username administrator@vsphere.local --passwd VMware1!

Finally, you can now safely decommission PSC-00a.

RDecker PSC Common SSO Domain vSphere 3

Note: If your NSX Manager was configured with Lookup Service, you can update it with the PSC virtual IP.


Romain Decker is a Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) portfolio – a part of the Global Technical & Professional Solutions (GTPS) team.

User Environment Manager 8.7 working with Horizon 6.2

By Dale Carter

With the release of VMware User Environment Manager 8.7, VMware added a number of new features, all of which you will find in the VMware User Environment Manager Release Notes.

However, in this blog I would like to call out two new features that help when deploying User Environment Manager alongside VMware Horizon 6.2. In my opinion, VMware’s EUC teams did a great job adding or enhancing these two features to work with Horizon 6.2 in the latest releases.

Terminal Server Client IP Address or Terminal Server Client Name

The first feature, which has been enhanced to work with Horizon 6.2, is one I think will have a number of benefits. It adds support for detecting client IP addresses and client names in Horizon View 6.2 and later. With this feature it is now possible to apply conditions based on the location of your physical device.

An example would be if a user connects to a virtual desktop or RDS host from their physical device in the corporate office, an application could be configured to map a drive to corporate data or configure a printer in the office. However, if the user connects to the same virtual desktop or RDS host from a physical device at home or on an untrusted network, and launches the same application, then the drive or printer may not be mapped to the application.

Another example would be to combine the Terminal Server Client IP Address or Terminal Server Client Name with a triggered task. This way you could connect/disconnect a different printer at login/logoff or disconnect/reconnect depending on where the user is connecting from.

To configure a mapped drive or printer that will be assigned when on a certain network, you would use the Terminal Server Client IP Address or Terminal Server Client Name condition as shown below.

DCarter Drive Mapping

If you choose to limit access via the physical client name, this can be done using a number of different options.

DCarter Terminal Server Client Name 1

On the other hand, if you choose to limit access via the IP address, you can use a range of addresses.

DCarter Terminal Server Client 2
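Under the hood, an IP-range condition is just a containment check. A minimal Python sketch using the standard ipaddress module (the corporate range below is made up for illustration):

```python
# Sketch: a client-IP condition is a containment check against a trusted
# range. The "corporate" network below is invented for illustration.
import ipaddress

CORPORATE_NET = ipaddress.ip_network("10.10.0.0/16")  # assumed office range

def on_corporate_network(client_ip):
    """True when the physical client's IP falls inside the trusted range."""
    return ipaddress.ip_address(client_ip) in CORPORATE_NET
```

A client at 10.10.5.20 would match and get the drive mapping; one at 203.0.113.7 would not.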

Detect PCoIP and Blast Connections

The second great new feature is the ability to detect if the user is connecting to the virtual desktop or RDS host via a PCoIP or Blast connection.

The Remote Display Protocol setting was already in the User Environment Manager, but as you can see below it now includes the Blast and PCoIP protocols.

DCarter Remote Display Protocol


This feature has many uses, one of which could be to limit what icons a user sees when using a specific protocol.

An example: perhaps you only allow users to connect to their virtual desktops or RDS hosts remotely using the Blast protocol, while on the corporate network they use PCoIP. You could then limit applications that have access to sensitive data so they only show in the Start menu or on the desktop when the user is connecting over PCoIP.
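Conceptually, this protocol-based filtering is a simple mapping from the session's display protocol to the set of visible applications. A hypothetical Python sketch (the app names are invented):

```python
# Sketch: protocol-conditioned visibility. Which shortcuts a user sees
# depends on the display protocol of the session. App names are invented.
ALL_APPS = {"Finance Console", "Browser", "Mail"}
SENSITIVE_APPS = {"Finance Console"}

def visible_apps(protocol):
    if protocol == "PCOIP":           # trusted, on-network protocol
        return set(ALL_APPS)
    return ALL_APPS - SENSITIVE_APPS  # e.g. Blast from outside
```

User Environment Manager evaluates the equivalent condition at login and applies the matching shortcut set.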

Of course, you could also use the Terminal Server Client IP Address or Terminal Server Client Name condition to prevent the user from seeing an application based on their physical IP address or physical device name.

The examples in this blog are just a small number of uses for these great new and enhanced features, and I would encourage everyone to download User Environment Manager 8.7 and Horizon 6.2 to see how they can help in your environment.

Dale Carter, a CTO Ambassador and VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He also holds VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA certifications.

Improving Internal Data Center Security with NSX-PANW Integration

By Dharma Rajan

Today’s data center (DC) typically has one or more firewalls at the perimeter providing a strong defense against external threats. Yet applications and their associated content can easily bypass a port-based firewall using a variety of techniques. If a threat gets inside, the attack surface area is large. Low-priority systems are often the target, as activity on them may not be monitored. Meanwhile, more and more workloads within the DC are being virtualized, so East-West traffic between virtual machines inside the DC has grown substantially compared to North-South traffic.

Threats such as data-stealing malware, web threats, spam, Trojans, worms, viruses, spyware, and bots can spread fast and cause serious damage once they enter. For example, dormant virtual machines can be a risk when they are powered back up, because they may not have been receiving patches or anti-malware updates, making them vulnerable to security threats. When an attack happens, it can move quickly and compromise critical systems, which must be prevented. In many cases the attack goes unnoticed until an event triggers an investigation, by which time valuable data may have been compromised or lost.

Thus it is critical that proper internal controls and security measures are applied at the virtual machine level to reduce the attack surface within the data center. So how do we do that, and evolve the traditional data center into a more secure environment that overcomes today’s challenges without additional costly hardware?

Traditional Model for Data Center Network Security

In the traditional model, we base the network architecture on perimeter-level security combined with Layer 2 VLAN segmentation. This model worked, but as we virtualize more and more workloads and the data center grows, we hit the boundaries of VLANs: VLAN sprawl, and an ever-increasing number of firewall rules to create and manage. Based on RFC 5517, the maximum number of VLANs that can be provisioned is 4,094. All this adds complexity to the traditional network architecture model of the data center. Other key challenges customers run into in production data centers are too many firewall (FW) rules to create, poor documentation, and the fear of deleting FW rules when a virtual machine is deleted. Thus flexibility is lost, and holes remain for attackers to use as entry points.
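The 4,094 figure can be derived directly: the VLAN ID field in an 802.1Q tag is 12 bits wide, and IDs 0 and 4095 are reserved:

```python
# VLAN IDs are a 12-bit field in the 802.1Q tag; IDs 0 and 4095 are reserved.
usable_vlans = 2 ** 12 - 2
print(usable_vlans)  # 4094
```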

Once security is compromised at one VLAN level, the spread across the network—be it Engineering VLAN, Finance VLAN, etc.—does not take very long. So the key is not just how to avoid attacks, but also—if one occurs—how to contain the spread of an attack.

DRajan Before and After Attack

Reducing Attack Surface Area

The first thing that might come to mind is, “How do we prevent an attack, and isolate its spread if one occurs?” We start by looking at certain characteristics of today’s data centers, which are becoming more and more virtualized. With a high degree of virtualization and increased East-West traffic, we need dynamic ways to identify, isolate, and prevent attacks, as well as automated ways to create FW rules and tighten security at the virtual machine level. This leads us to VMware NSX, VMware’s network virtualization platform, which through micro-segmentation provides the virtual infrastructure security today’s data center environments need.

Micro-Segmentation Principle

As a first step let’s take a brief look at the NSX platform and its components:

DRajan NSX Switch

In the data plane of the NSX vSwitch are vSphere Distributed Switches (vDS) and firewall hypervisor extension modules that run at the kernel level and provide Distributed Firewall (DFW) functionality at line-rate speed and performance.

The NSX Edge provides edge/perimeter firewalling functionality on the Internet-facing side. The NSX Controller is the control-plane component, deployed in a cluster for high availability. The NSX Manager is the management-plane component that communicates with the vCenter infrastructure.

By doing micro-segmentation and applying firewall rules at the virtual machine level, we control traffic flow on the egress side: rules are validated at the virtual machine itself, so traffic does not have to hairpin through a physical firewall to get validated. We also gain good visibility of traffic to monitor and secure the virtual machine.

Micro-segmentation is based on a starting principle: assume everything is a threat and act accordingly. This is the “zero trust” model. Put another way, entities that need access to resources must prove they are legitimate before gaining access to the identified resource.

With a zero-trust baseline assumption of “deny by default”, we then selectively relax access by applying certain design principles that enable us to build a cohesive yet scalable architecture that can be controlled and managed well. We define three key design principles.

1) Isolation and segmentation – Isolation is the foundation of most network security, whether for compliance, containment or simply keeping development, test and production environments from interacting. Segmentation from a firewalling point of view refers to micro-segmentation on a single Layer 2 segment using DFW rules.

2) Unit-level trust/least privilege – This means providing access at the most granular entity needed for that user, be it a virtual machine or something within the virtual machine.

3) Ubiquity and centralized control – This enables control and monitoring of activity through the NSX Controller, which provides a centralized control plane, the NSX Manager, and the cloud management platforms that provide integrated management.
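As a rough illustration of the “deny by default” baseline behind these principles, a flow is only allowed when some rule explicitly permits it (the rule fields below are hypothetical, not the DFW API):

```python
# "Deny by default" sketch: a flow is allowed only when a rule explicitly
# permits it. Field names and rules are hypothetical, not the DFW API.
def evaluate(flow, rules):
    for rule in rules:
        if (rule["src"] == flow["src"] and
                rule["dst"] == flow["dst"] and
                rule["port"] == flow["port"]):
            return rule["action"]
    return "deny"  # zero-trust baseline: no explicit match, no access

RULES = [{"src": "web", "dst": "app", "port": 8443, "action": "allow"}]
```

With only that one rule, web-to-app traffic on 8443 is allowed and everything else, such as web-to-db on 3306, is dropped.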

Using the above principles, we can lay out an architecture for any greenfield or brownfield data center environment that will help us micro-segment the network in a manner that is architecturally sound, flexible to adapt, and enables safe application enablement with the ability to integrate advanced services.

DRajan Micro Segmentation


Dynamic Traffic Steering

Network security teams are often challenged to coordinate network security services from multiple vendors. Another powerful benefit of the NSX approach is its ability to build security policies that leverage NSX service insertion, with dynamic service chaining and traffic steering to drive service execution in the logical services pipeline, based on the results of other services. This makes it possible to coordinate otherwise completely unrelated network security services from multiple vendors. For example, with advanced chaining services we can, at a specific layer, direct specific traffic to a Palo Alto Networks (PANW) VM-series firewall for scanning, threat identification, and, if required, quarantining an application.

Palo Alto Networks VM-series Firewalls Integration with NSX

The Palo Alto Networks next-generation firewall integrates with VMware NSX at the ESXi server level to provide comprehensive visibility and safe application enablement of all data center traffic including intra-host virtual machine communications. Panorama is the centralized management tool for the VM-series firewalls. Panorama works with the NSX Manager to deploy the license and centrally administer configuration and policies on the VM-series firewall.

The first step of integration is for Panorama to register the VM-series firewall on the NSX manager. This allows the NSX Manager to deploy the VM-series firewall on each ESXi host in the ESXi cluster. The integration with the NetX API makes it possible to automate the process of installing the VM-series firewall directly on the ESXi hypervisor, and allows the hypervisor to forward traffic to the VM-series firewall without using the vSwitch configuration. It therefore requires no change to the virtual network topology.

DRajan Panorama Registration with NSX

To redirect traffic, the NSX Service Composer is used to create security groups and define network introspection rules that specify which guest traffic is steered to the VM-series firewall. For traffic that needs to be inspected and secured by the VM-series firewall, the Service Composer policies redirect it to the Palo Alto Networks Next-Generation Firewall (NGFW) service. This traffic is then steered to the VM-series firewall and processed by it before it reaches the virtual switch.

Traffic that does not need to be inspected by the VM-series firewall, for example, network data backup or traffic to an internal domain controller, does not need to be redirected to the VM-series firewall and can be sent to the virtual switch for onward processing.

The NSX Manager sends real-time updates on the changes in the virtual environment to Panorama. The firewall rules are centrally defined and managed on Panorama and pushed to the VM-series firewalls. The VM-series firewall enforces security policies by matching source or destination IP addresses. The use of Dynamic Address Groups allows the firewall to populate members of the Dynamic Address Groups in real time, and forwards the traffic to the filters on the NSX firewall.
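Conceptually, a Dynamic Address Group behaves like a tag-driven membership set that policies reference indirectly. A simplified Python sketch (tags and IPs are invented; this is not the Panorama API):

```python
# Simplified model of a Dynamic Address Group: membership is re-resolved from
# a tag-based inventory, so policies written against the group stay current
# as VMs change. Tags and IPs are invented; this is not the Panorama API.
class DynamicAddressGroup:
    def __init__(self, tag):
        self.tag = tag
        self.members = set()

    def sync(self, inventory):
        """Refresh membership from an {ip: set_of_tags} inventory snapshot."""
        self.members = {ip for ip, tags in inventory.items()
                        if self.tag in tags}

    def matches(self, ip):
        return ip in self.members
```

A rule written against the group keeps matching the right VMs after every sync, with no rule edits needed when a VM is tagged or deleted.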

Integrated Solution Benefits

Better security – Micro-segmentation reduces the attack surface. It enables safe application enablement and protection against known and unknown threats in virtual and cloud environments. The integration also makes it easier and faster to identify and isolate compromised applications.

Simplified deployment and faster secure service enablement – When a new ESXi host is added to a cluster, a new VM-series firewall is automatically deployed, provisioned and available for immediate policy enforcement without any manual intervention.

Operational flexibility – The automated workflow allows you to keep pace with VM deployments in your data center. The hypervisor mode on the firewall removes the need to reconfigure the ports/vSwitches/network topology; because each ESXi host has an instance of the firewall, traffic does not need to traverse the network for inspection and consistent enforcement of policies.

Selective traffic redirection – Only traffic that needs inspection by VM-series firewall needs redirection.

Dynamic security enforcement – The Dynamic Address Groups maintain awareness of changes to virtual machines/applications and ensure that security policies keep pace with changes in the network.

Accelerated deployments of business-critical applications – Enterprises can provision security services faster and utilize capacity of cloud infrastructures, and this makes it more efficient to deploy, move and scale their applications without worrying about security.

For more information on NSX visit: http://www.vmware.com/products/nsx/

For more information on VMware Professional Services visit: http://www.vmware.com/consulting/

Dharma Rajan is a Solution Architect in the Professional Services Organization specializing in pre-sales for SDDC and driving NSX technology solutions to the field. His experience spans enterprise and carrier networks. He holds an MS degree in Computer Engineering from NCSU and an M.Tech degree in CAD from IIT.

How NSX Simplifies and Enables True Disaster Recovery with Site Recovery Manager

Dharma RajanBy Dharma Rajan

VMware Network Virtualization Platform (NSX) is the network virtualization platform for the software-defined datacenter (SDDC). Network virtualization using VMware NSX enables virtual networks to be created as software entities, saved and restored, and deleted on demand without requiring any reconfiguration of the physical network. Logical network entities like logical switch, logical routers, security objects, logical load balancers, distributed firewall rules and service composer rules are created as part of virtualizing the network.

To provide continuity of service in the face of a disaster, datacenters are built with disaster recovery (DR) capabilities for replicating and recovering workloads between protected and recovery sites. VMware Site Recovery Manager (SRM) helps to fully automate the recovery process.

From a DR point of view, the recovery site has to be in sync with the protected site at all times from a compute, storage and networking perspective, to enable seamless, fast recovery when the protected site fails due to a disaster. When using SRM today for DR, customers face a couple of challenges. From a compute perspective, one needs to prepare the hosts at the recovery site, pre-allocate compute capacity for placeholder virtual machines, and create the placeholder virtual machines themselves.

From a storage point of view, the storage for protected applications/virtual machines needs to be replicated and kept in sync. Both of these steps are straightforward and are handled by SRM with vSphere- and/or array-based replication. The challenge today is the networking piece of the puzzle. As illustrated below, depending on the type of networking established between the protected and recovery sites, various networking changes (carving out Layer-2, Layer-3, firewall and load-balancer policies at the recovery site, re-mapping networks if IP address spaces overlap, recreating policies, etc.) may have to be made manually to ensure smooth recovery. This adds significant time, is subject to human error, and can make it impossible to meet internal and external SLAs. The result is that the network becomes the bottleneck that prevents seamless disaster recovery. From a business perspective this can easily translate into millions of dollars in losses, depending on the criticality of the workloads/services impacted.

DRajan 1

Why Are We Running into the Networking Challenge?

The traditional DR solution is tied tightly to physical infrastructure (physical routers, switches, firewalls, load balancers). The security domains of the protected and recovery sites are completely separate. As networking changes (additions, deletions or updates to IP addresses, Layer-2 extensions, subnets, etc.) are made at the protected site, no corresponding automated synchronization happens at the recovery site. Thus one may have to perform Layer-2 extension to preserve the changes, create and maintain special scripts, manage the tools, and perform manual DR setup and recovery steps across different infrastructure layers and vendors (physical and virtual). From a process point of view it requires coordination across various teams within your company, good bookkeeping and periodic validation, so you are always ready to address a DR scenario as quickly as you can.

What is the Solution?

VMware NSX from release 6.2 offers a solution that enables customers to address the above-cited networking challenges. NSX is the network virtualization platform for the SDDC. NSX provides the basic foundation to virtualize networking components in the form of logical switching, distributed logical router, distributed logical firewall, logical load balancer, and logical edge gateways. For a deeper understanding of NSX see more at: http://www.vmware.com/products/nsx

NSX 6.2 release has been integrated with SRM 6.1 to enable automated replication of networking entities between protected and recovery sites.

DRajan 2

How Does the Solution Work?

NSX 6.2 introduces a couple of key concepts that enable SRM to recognize that a network is logically the same on both sites. These concepts include:

  a) “Universal Logical Switches” (ULS) – This allows for the creation of Layer-2 networks that span vCenter boundaries. This means that when utilizing ULS with NSX there will be a virtual port group at both the protected and recovery sites that connect to the same Layer-2 network. When virtual machines are connected to port groups that are backed by a ULS, SRM implicitly creates a network mapping, without requiring the admin to configure it. This provides seamless portability of network services: virtual machines connected to a ULS are automatically reconnected to the same logical switch on the other vCenter.

DRajan 3

NSX 6.2 ULS Integration with SRM 6.1 Automatic Network Mapping

  b) Cross-vCenter Networking and Security enables key use cases such as:
  • Resource pooling, virtual machine mobility, multi-site and disaster recovery
  • Elimination of the need for guest customization of IP addresses and management of port-group mappings, two large SRM pain points today
  • Centralized management of universal objects, reducing administration effort
  • Increased mobility of workloads; virtual machines can be “vMotioned” across vCenter Servers without having to reconfigure the virtual machine or make changes to firewall rules

The deployment process would ideally be to:

  • Configure Master NSX Manager at primary site and Secondary NSX Manager at recovery site
  • Configure Universal Distributed Logical Router between primary and secondary site
  • Deploy Universal Logical Switch between primary and recovery site and connect it to Universal Distributed Logical Router
  • Deploy the vRealize Orchestrator (vRO) plugin for automation and monitoring
  • Finally map SRM network resources between primary and recovery sites

Supported Use Cases and Deployment Architectures

The primary use case is full site disaster recovery: an unplanned outage where the primary site goes down due to a disaster and the secondary site takes immediate control, enabling business continuity. The other key use case is planned datacenter migration, where one could migrate workloads from one site to another while maintaining the underlying networking and security profiles. The main difference between the two use cases is the frequency of the synchronization runs. In a datacenter migration you can take one datacenter running NSX and reproduce the entire networking configuration on the DR side in a single run of the synchronization workflow, or run it once initially and then a second time to incrementally update the NSX objects before cutover.

DRajan 4

Other supported use cases include partial site outages, preventive failover, or when you anticipate a potential datacenter outage, for example, impending events like hurricanes, floods, forced evacuation, etc.

The standard 1:1 deployment model with one site as primary and another as secondary is the most common deployed model. In a shared recovery site configuration, like for branch offices, you install one SRM server instance and NSX on each protected site. On the recovery site, you install multiple SRM Server instances to pair with each SRM server instance on the protected sites. All of the SRM server instances on the shared recovery site connect to the same vCenter server and NSX instance. You can consider the owner of an SRM server pair to be a customer of the shared recovery site. You can use either array-based replication or vSphere replication or a combination of both when you configure an SRM server to use a shared recovery site.

DRajan 5

Logical Disaster Recovery Architecture Using NSX Universal Objects

What Deployment Architecture Will the Solution Support?

This solution applies to all Greenfield and Brownfield environments. The solution will need the infrastructure to be base-lined to vCenter 6.0 or later, ESXi 6.0 or later, vSphere Distributed switch, SRM 6.0 or later with NSX 6.2 or later.

SRM can be used for different failover scenarios. It could be Active-Active, Active-Passive, Bidirectional, and Shared Recovery.

Integrated Solution Advantages

Disaster recovery planning, maintenance and testing become much simpler, with automation enabling significant operational efficiencies.

  • The ability to create a network that spans vCenter boundaries creates a cross-site Layer-2 network, which means that after failover, it is no longer necessary to re-configure IP addresses. Not having to re-IP recovered virtual machines can further reduce recovery time by up to 40 percent.
  • There is more automation with networking and security objects. Logical switching, logical routing, security policies (such as security groups), firewall settings and edge configurations are also preserved on recovered virtual machines, further decreasing the need for manual configurations post-recovery.
  • Creating an isolated test network with capabilities identical to the production environment becomes much easier.

In conclusion, the integration of NSX and SRM greatly simplifies operations, lowers operational expenses, increases testing capabilities and reduces recovery times.

For more information on NSX visit: http://www.vmware.com/products/nsx/

For more information on SRM visit: http://www.vmware.com/products/site-recovery-manager/

For more information on VMware Professional Services visit: http://www.vmware.com/consulting/


About the Author:

Dharma Rajan is a Solution Architect in the Professional Services Organization specializing in pre-sales for SDDC and driving NSX technology solutions to the field. His experience spans Enterprise and Carrier Networks. He holds an MS degree in Computer Engineering from NCSU and an M.Tech degree in CAD from IIT.

VMware Horizon View Secret Weapon

Andreas LambrechtBy Andreas Lambrecht

Over the last couple of years, I have worked on many challenging Horizon View projects with different business, technical and security requirements. Finding the balance between these points is not always easy. During design workshops and the discussions with desktop management teams and security departments the following questions come up over and over again:

“How can we apply different settings (e.g., clipboard redirection, printing, etc.) to the user session or desktop based on the user’s location?”

“How can we apply PCoIP optimization to the user session or desktop based on the user’s location?”

Note that these can be internal (LAN or office) or external (Internet or home office) connections.

From the Horizon View architecture point of view we can create different desktop pools with different hardening policies and PCoIP settings, but this means the user will have two different virtual desktops: one for internal and one for external. This may not be optimal in terms of the end user experience because they expect the same virtual desktop behavior in both working environments; when they disconnect the session in the office they expect to continue working on the same document from home without encountering issues. And here is the challenge: ensuring a positive end user experience vs. security policies/PCoIP optimization.

After some research on this particular use case I found a way to manage this requirement without additional costs – while using out-of-the-box Horizon View features. This service comes with the Horizon View Agent as a standard feature and offers many capabilities. I call it the Horizon View Secret Weapon.

Let’s take a closer look at what this secret weapon looks like and what it offers. There are three main ingredients:

  1. VMware Horizon View Script Host Service
  2. System information sent to View Desktop upon user connect or reconnect.
  3. Start Session Script. But note, the intelligence of this script depends on the use case, the security requirements and the ingenuity of the script owner.

Official recommendation: Use start session scripts only if you have a particular need to configure desktop policies before a desktop session begins. As a best practice, use the View Agent’s CommandsToRunOnConnect and CommandsToRunOnReconnect group policy settings to run command scripts after a desktop session is connected or reconnected. Running scripts within a desktop session will satisfy most use cases. For details, see “Running Commands on View Desktops” in the View Administration document.

For some requirements you can use the View Agent’s CommandsToRunOnConnect and CommandsToRunOnReconnect group policy settings, as mentioned above. But what if a computer setting or View desktop setting needs to be configured before the desktop session starts, e.g., PCoIP optimization, clipboard redirection, etc.? This is where the secret weapon kicks in and can help fulfill this requirement.

Note: To apply PCoIP optimization there is no need to reconnect because these settings are set before the session or PCoIP protocol start.

In this example I would like to cover a use case with the following technical requirements.

Internal connect

Clipboard redirection:

  • Enabled in both directions

PCoIP settings:

  • BTL set to off
  • Maximum image quality 80
  • Minimum image quality 40
  • Maximum frames per seconds 20

PCoIP Audio limit:

  • 250 kbit/s

USB access:

  • Enabled


Printing:

  • Enabled

External connect

Clipboard redirection:

  • Disabled in both directions

PCoIP setting:

  • BTL set to off
  • Maximum image quality 70
  • Minimum image quality 30
  • Maximum frames per seconds 16

PCoIP Audio limit:

  • 50 kbit/s

USB access:

  • Disabled


Printing:

  • Disabled
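The two settings profiles above amount to a table of registry values. As an illustrative aid (not part of the original solution), here is a Python sketch that keeps the per-location values in one place; the key names mirror the Teradici PCoIP registry values the VBScript writes later in this post, and the numbers are the requirement values listed above:

```python
# Hypothetical helper: map each location profile to the PCoIP registry
# values it should apply (values taken from the requirements above).
PCOIP_BASE = r"HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin"

PROFILES = {
    "internal": {
        "pcoip.enable_build_to_lossless": 0,    # BTL off
        "pcoip.maximum_initial_image_quality": 80,
        "pcoip.minimum_image_quality": 40,
        "pcoip.maximum_frame_rate": 20,
        "pcoip.audio_bandwidth_limit": 250,     # kbit/s
    },
    "external": {
        "pcoip.enable_build_to_lossless": 0,
        "pcoip.maximum_initial_image_quality": 70,
        "pcoip.minimum_image_quality": 30,
        "pcoip.maximum_frame_rate": 16,
        "pcoip.audio_bandwidth_limit": 50,
    },
}

def registry_writes(location):
    """Return (full_registry_key, value) pairs for the given profile."""
    return [(f"{PCOIP_BASE}\\{name}", value)
            for name, value in PROFILES[location].items()]
```

Keeping the two profiles as data makes it easy to review the differences side by side before encoding them in the start session script.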

First, we must enable the VMware Horizon View Script Host Service on each View Desktop where we want View to run the start session script (e.g., on the base image for a Linked Clone Pool). The service is disabled by default.

To configure the VMware View Script Host Service:

  1. Start the Windows Services tool by entering services.msc at the command prompt.
  2. In the details pane, right-click on the VMware View Script Host service entry and select Properties.
  3. On the General tab, in Startup type, select Automatic.
  4. If you do not want the local system account to run the start session script, select This account, and enter the details of the account to run the start session script.
  5. Click OK and exit the Services tool.

ALambrecht 1
For more details see “Dynamically Setting Desktop Policies with Start Session Scripts.“

Now we need to find a way to differentiate between an internal and an external connection. Here we can draw on the information the Horizon View Client has gathered about the client system when a user connects or reconnects to the View desktop, or we can use the values sent directly from the View Connection Server. This can be any variable from the list (see link below), but I recommend using ViewClient_Broker_DNS_Name. The reason for this choice is simple: if the user connects from the outside (external connect), the authentication will be managed by the View Connection Server that is paired with the View Security Server. Keep an important View architecture rule in mind: a View Connection Server paired with a View Security Server should be used exclusively for external connections.

For more details see “Client System Information Sent to View Desktops.”

Important note: The start session variables have the prefix VDM_StartSession_ instead of ViewClient_. This is important for our scripts and is described below.
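To make the decision logic concrete, here is a Python sketch of the location test (illustrative only; the actual solution uses VBScript, and the server names are placeholders just as they are in the script below):

```python
import os

# Placeholder DNS names of the Connection Servers paired with the
# Security Server (external access); substitute your own server names.
EXTERNAL_BROKERS = {"NAMEOFYOURCONNECTIONSERVER1",
                    "NAMEOFYOURCONNECTIONSERVER2"}

def connection_location(environ=os.environ):
    """Classify the session by reading the VDM_StartSession_ variable
    (note the prefix differs from the ViewClient_ prefix)."""
    broker = environ.get("VDM_StartSession_Broker_DNS_Name", "")
    return "external" if broker in EXTERNAL_BROKERS else "internal"
```

Anything brokered by a server not in the external list is treated as an internal connection, which matches the architecture rule above.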

We are now at the point where we need to talk about the most important ingredient of the secret weapon. But before we start writing the script we need to set some registry values to make the start session script available for execution.

  1. Start the Windows Registry Editor by entering regedit at the command prompt.
  2. In the registry, navigate to HKLM\SOFTWARE\VMware, Inc.\VMware VDM\ScriptEvents.
  3. Select Edit > New > Key, and create a key named StartSession.
  4. In the navigation area, right-click StartSession, select New > String Value, and create a string value (REG_SZ) named Bullet1 with the following value data: wscript "C:\Program Files\VMware\VMware View\Agent\scripts\bullet1.vbs"
  5. This will invoke the start session script. Click OK.

Note: As a best practice, place the start session scripts in the following location: %ProgramFiles%\VMware\VMware View\Agent\scripts. By default, this folder is accessible only by the SYSTEM and administrator accounts.

ALambrecht 2

  1. Navigate to HKLM\SOFTWARE\VMware, Inc.\VMware VDM\Agent\Configuration.
  2. Select Edit > New > DWORD (32-bit) Value, name it RunScriptsOnStartSession, and set its value data to 1 to enable start session scripting.

ALambrecht 3

  1. Navigate to HKLM\SOFTWARE\VMware, Inc.\VMware VDM\ScriptEvents.
  2. Add a DWord value called TimeoutsInMinutes.
  3. Set a data value of 0.

ALambrecht 4

For more details see “Add Windows Registry Entries for a Start Session Script.”
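For illustration, the three registry changes above could also be captured in a single .reg file for the base image. The following Python sketch generates one; it is a convenience I am adding here, not part of the official procedure, and Bullet1 plus the script path are the example values from the steps above:

```python
def start_session_reg(script_path):
    r"""Build .reg file text that enables start session scripts and
    registers one script under ScriptEvents\StartSession."""
    vdm = r"HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM"
    # Backslashes in REG_SZ data must be doubled in .reg syntax.
    cmd = ("wscript " + script_path).replace("\\", "\\\\")
    return "\n".join([
        "Windows Registry Editor Version 5.00",
        "",
        f"[{vdm}\\ScriptEvents]",
        '"TimeoutsInMinutes"=dword:00000000',
        "",
        f"[{vdm}\\ScriptEvents\\StartSession]",
        f'"Bullet1"="{cmd}"',
        "",
        f"[{vdm}\\Agent\\Configuration]",
        '"RunScriptsOnStartSession"=dword:00000001',
        "",
    ])
```

Importing the generated file with regedit on the base image would apply all three changes in one step instead of editing each key by hand.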

Here is a simple script example which covers the technical requirements of this use case.


' This script dynamically applies specific session settings based on
' the user location.
' Author: Andreas Lambrecht VMware PSO CEMEA.
' Date: October 2015

Option Explicit
On Error Resume Next

Dim objShell
Dim WshShell
Dim objWMIService
Dim strComputer
Dim colServiceList
Dim objService

strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set objShell = CreateObject("WScript.Shell")

' Check whether the user was authenticated and assigned the session
' by one of the "external" View Connection Servers, which are paired
' with the View Security Server, or by the "internal" View Connection
' Server. Based on the result this script sets the appropriate settings.

If objShell.ExpandEnvironmentStrings("%VDM_StartSession_Broker_DNS_Name%") = "NAMEOFYOURCONNECTIONSERVER1" Or _
   objShell.ExpandEnvironmentStrings("%VDM_StartSession_Broker_DNS_Name%") = "NAMEOFYOURCONNECTIONSERVER2" Then

    ' Apply the settings for external connect
    ' - Stop and disable TP Auto Connect Service and TP VC Gateway Service
    ' - Disable enable_build_to_lossless
    ' - Set minimum_image_quality to 30
    ' - Set maximum_initial_image_quality to 70
    ' - Set maximum_frame_rate to 12
    ' - Disable "Use image settings from Zero client", if available
    ' - Disable server_clipboard_state in both directions
    ' - Set audio_bandwidth_limit to 80
    ' - Exclude all USB devices

    Set colServiceList = objWMIService.ExecQuery _
        ("Select * from Win32_Service where Name = 'TPAutoConnSvc' OR Name = 'TPVCGateway'")

    For Each objService In colServiceList
        If objService.State = "Running" Then
            ' Stop and disable the ThinPrint services
            objService.StopService()
            objService.ChangeStartMode("Disabled")
            Wscript.Sleep 5000
        End If
    Next

    Set WshShell = CreateObject("WScript.Shell")
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.enable_build_to_lossless", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.minimum_image_quality", 30, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_initial_image_quality", 70, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_frame_rate", 12, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.use_client_img_settings", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.server_clipboard_state", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.audio_bandwidth_limit", 80, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\VMware, Inc.\VMware VDM\Agent\USB\ExcludeAllDevices", "true", "REG_SZ"
    Set WshShell = Nothing

Else

    ' Apply the settings for internal connect
    ' - Start and enable TP Auto Connect Service and TP VC Gateway Service
    ' - Disable enable_build_to_lossless
    ' - Set minimum_image_quality to 40
    ' - Set maximum_initial_image_quality to 80
    ' - Set maximum_frame_rate to 20
    ' - Disable "Use image settings from Zero client", if available
    ' - Enable server_clipboard_state in both directions
    ' - Set audio_bandwidth_limit to 250
    ' - Disable "Exclude all USB devices"

    Set colServiceList = objWMIService.ExecQuery _
        ("Select * from Win32_Service where Name = 'TPAutoConnSvc' OR Name = 'TPVCGateway'")

    For Each objService In colServiceList
        If objService.State = "Stopped" Then
            ' Enable and start the ThinPrint services
            objService.ChangeStartMode("Automatic")
            objService.StartService()
            Wscript.Sleep 5000
        End If
    Next

    Set WshShell = CreateObject("WScript.Shell")
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.enable_build_to_lossless", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.minimum_image_quality", 40, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_initial_image_quality", 80, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.maximum_frame_rate", 20, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.use_client_img_settings", 0, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.server_clipboard_state", 1, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin\pcoip.audio_bandwidth_limit", 250, "REG_DWORD"
    WshShell.RegWrite "HKLM\SOFTWARE\Policies\VMware, Inc.\VMware VDM\Agent\USB\ExcludeAllDevices", "false", "REG_SZ"
    Set WshShell = Nothing

End If


Now the secret weapon is ready for use.

Once the secret weapon is implemented and is running, we need to validate whether the specified settings were applied accordingly.

There are four places where we can check the functionality of our solution:

  1. VDM Debug log for StartSessionScript

ALambrecht 5

In the red rectangle we can see that the start session script was successfully applied before the PCoIP protocol starts.

For more details see “Location of VMware View log files (1027744).“

  2. PCoIP Server log for PCoIP optimization

ALambrecht 6

In this red rectangle we can see the PCoIP optimization for the external connect, as specified in the script.

For more details see “Location of VMware View log files (1027744).“

  3. Management Tools > Services for ThinPrint settings

ALambrecht 7

Here we can see that the ThinPrint services have been stopped and disabled, and the user is no longer able to print.

  4. Regedit.exe for USB access, PCoIP optimization and clipboard redirection

ALambrecht 8


ALambrecht 9

Finally we can see that all settings were applied as specified by the secret weapon.


Andreas Lambrecht is an experienced senior consultant and architect for VMware’s Professional Services Organization specializing in the EUC space. He has worked at VMware for the past 4 years and has more than 15 years of experience in the IT industry. Andreas is certified VCP-DCV, VCP-DT, VCAP-DTA and VCAP-DTD, and also holds the ITIL v4 Foundation certification.

Common VCDX-DTM Questions Answered by a Double VCDX

Travis Wood


By Travis Wood

Last year the VCDX-DTM track was released, and Simon Long, Ray Heffer and I became the first to earn this new certification. Since then I have sat on several panels and fielded a lot of questions specific to the desktop certification track, so I wanted to answer some of those frequently asked questions here about how to prepare for it.

I have a View design, should I submit for VCDX-DCV or VCDX-DTM?

If your design is focused on View and EUC then you will likely be better prepared if you submit for VCDX-DTM. Whilst you can use a View design for DCV, as people have in the past, you still need to demonstrate mastery of the skills applicable to DCV designs. The criteria for VCDX-DTM are specifically designed to evaluate desktop designs, so this is likely the better option for a View design; and if this is where your core skills are, it will also give you better preparation for your defence.

What products are in-scope for VCDX-DTM?

As specified in the VCDX-DTM Blueprint, VCDX-DTM is focused on the VMware Horizon suite as the means to deliver end-user computing solutions. Within this product suite there are a number of products that may be utilized to meet your requirements, including Horizon View, Mirage, Identity Manager and vCenter Operations for Horizon. vSphere also makes up a key component of a desktop virtualization design.

Does the design have to be based on Horizon View?

Whilst the blueprint does not specifically say that Horizon View must be used, it would be extremely difficult if not impossible to cover the required solution areas without it.

Do I need to use ThinApp or AppVolumes?

Application integration is important to a VDI design but may be achieved in different ways depending on your requirements and constraints. These specific products are not required, but you might want to consider how you will demonstrate the application integration that is in your design, and be prepared for questions that may arise in the design or troubleshooting scenarios.

Do I have to use a VMware product for profile management?

Not necessarily – there are many ways to handle profile management and the best solution for your design should be used. But if you do include third-party profile management, do not simply mention it – ensure ALL documentation includes the detail required to design and implement the solution correctly.

Do I need to know about products other than Horizon View?

Even if your design does not use products such as Mirage or AppVolumes, prepare yourself for being presented with business requirements that could be solved using these products – or in other ways. Having a breadth of knowledge of the VMware EUC portfolio will give you greater capability to solve problems presented by the design and troubleshooting scenarios.

Is AirWatch in-scope of VCDX-DTM?

VCDX-DTM is focused on the Horizon Suite, and AirWatch is not a part of Horizon Suite.

Is my design large or complex enough?

There are no specific size or complexity requirements, but the design must be “enterprise-scale.” This is the same requirement that was specified by both DCV and DT; neither size nor complexity at either end of the scale will guarantee success. The panellists are looking for the candidate to demonstrate mastery of the solution areas defined in the blueprint. Choose a design that allows you to do this.

Can I modify my design?

Absolutely! The most valuable advice I got when preparing for my first VCDX is that you can modify your actual design to better demonstrate mastery. This may be adding or removing elements to achieve a better design that will demonstrate your ability to design a solution across all of the solution areas.

Hopefully these answers will help clear up some questions. If you have any further questions please tweet me at @vTravWood.

Travis Wood is a VMware Senior Solutions Architect

Manually Uploading Dedup Files on Mirage Branch Reflector

Eric MonjoinBy Eric Monjoin

Mirage is a great desktop administration tool: not only does it allow user data to be backed up and restored conveniently via a web interface, and all or part of the system to be backed up by the IT department, but it also ensures the compliance of the user’s workstation by sending and applying operating system or application updates through “Base Layers” and “Application Layers.”

One problem that arises for IT managers is how to update workstations located at remote locations and connected to the data center via a low-bandwidth network or saturated by other network services.

Mirage provides a first answer in the form of “Branch Reflectors”: either a PC dedicated to this task or any PC on the remote site, as long as it has sufficient disk space and stays on constantly to receive all Base Layers and Application Layers. The Branch Reflector then applies updates to workstations located on the same local network, which avoids every desktop pulling tedious updates from the central Mirage servers.

But sometimes, despite the use of a Branch Reflector, the available bandwidth is still too small for an update, even if it is for only one desktop. The solution developed below explains how to manually update a Branch Reflector from extracted Base Layers or Application Layers.

So, the first thing to do is to export the layers. This is achieved using a command line on the management server, but before exporting a layer you need to know its ID and version.

From the MMC or the Web Management Console, look at the Image Composer\Base Layer or Image Composer\App Layer and note the ID and Version of the layer you want to extract.

EMonjoin Dedup 1


Then, open a command line in your Mirage Management Server or Mirage Server and run the following command:

# “c:\Program Files\Wanova\Mirage Management Server\Wanova.Server.Tools.exe” LayerExtract \\MirageStorage ID Version Path Target_Path

EMonjoin Dedup 2

Once you export the layers, copy all files to an appropriate storage device, such as an external HDD or USB stick, send it or bring it to your remote location, and copy all files to a folder in your branch reflector.

Note: If the branch reflector is not dedicated but runs on a user desktop, I would recommend hiding the folder where you put the files.

In the meantime, we have to configure the factory policy to scan for this folder so Mirage will know that files are already on the Branch Reflector and will not try to push them again.

  1. On the Mirage Management Server, open a Command Prompt and type the following command:
# “c:\Program Files\Wanova\Mirage Management Server\Wanova.Server.Cli.exe”
  2. In the CLI type: GetFactoryPolicy c:\factory_policy.xml
  3. Open and edit the file: c:\factory_policy.xml
  4. Find the “ExtraDedupArea” area and modify the section to add the directory used to receive all dedup files on the Branch Reflector:
      <Directory path="%windows%.old" recursive="true" filter="*" />
       <Directory path="%systemvolume%\MirageDedup" recursive="true" filter="*" />     <<= Line added
    <ExcludeList />
  5. Import the new rules into the Factory Policy by typing in the CLI: setFactoryPolicy c:\factory_policy.xml
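For illustration only, the edit in step 4 could also be scripted. This Python sketch (standard library only) inserts the extra Directory entry into an exported factory policy; the element and attribute names are taken from the snippet above, and the overall document structure is assumed:

```python
import xml.etree.ElementTree as ET

def add_dedup_dir(policy_xml, dedup_path=r"%systemvolume%\MirageDedup"):
    """Insert a <Directory> entry into the ExtraDedupArea section of
    an exported factory policy and return the updated XML text."""
    root = ET.fromstring(policy_xml)
    area = root.find(".//ExtraDedupArea")
    # Mirror the hand-edited line: scan the folder recursively, all files.
    ET.SubElement(area, "Directory",
                  path=dedup_path, recursive="true", filter="*")
    return ET.tostring(root, encoding="unicode")
```

The updated text can then be written back to c:\factory_policy.xml before running setFactoryPolicy.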

This generally needs to be done only once, unless you have a really big update with new applications to push to desktops.

Eric Monjoin joined VMware France in 2009 as PSO Senior Consultant after spending 15 years at IBM as a Certified IT Specialist. Passionate for new challenges and technology, Eric has been a key leader in the VMware EUC practice in France. Recently, Eric has moved to the VMware Professional Services Engineering organization as Technical Solutions Architect. Eric is certified VCP6-DT, VCAP-DTA and VCAP-DTD and was awarded vExpert for the 4th consecutive year.

VMware Certificate Authority, Part 3: My Favorite New Feature of vSphere 6.0 – The New!

jonathanm-profileBy Jonathan McDonald

In the last blog, I left off right after the architecture discussion. To be honest, this was not because I wanted to, but because I couldn’t say anything more about it at the time. As of September 10, vSphere 6.0 Update 1 has been released with some fantastic new features in this area that make the configuration of customized certificates even easier. At this point what is shown is a tech preview; however, it shows the direction development is headed in the future. It is amazing when things just work out, and with a little bit of love an incredibly complex area becomes much easier.

In this release, there is a UI that has been released for configuration of the Platform Services Controller. This new interface can be accessed by navigating to:


When you first navigate here, a first time setup screen may be shown:

JMcDonald 1

To set up the configuration, log in with a Single Sign-On administrator account; the actual setup will run and complete in short order. Subsequently, when you log in, the screen is plain and similar to the vSphere Web Client login:

JMcDonald 2
After login, the interface appears as follows:

JMcDonald 3

As you can see, it provides a ton of great new functionality, including a GUI for installing certificates! I will not be talking about the other features, except to say there is some pretty fantastic content in there, including the single sign-on configuration as well as appliance-specific configurations. I only expect this to grow in the future, but it is definitely an amazing first start.

Let’s dig in to the certificate stuff.

Certificate Store

Navigating to the Certificate Store link shows all of the different certificate stores that exist on the VMware Certificate Authority system:

JMcDonald 4

This gives the option to view the details of each store on the system, as well as to add or remove individual entries:

JMcDonald 5
This is very useful when troubleshooting a configuration or for auditing/validating the different certificates that are trusted on the system.
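This kind of audit can also be scripted. The sketch below flags store entries whose certificates expire within a given window; the entry names and notAfter timestamps are made-up examples, and `ssl.cert_time_to_seconds` (Python standard library) parses the GMT timestamp format that certificate tools print:

```python
import ssl
import time

# Hypothetical entries as they might appear in a certificate store listing;
# each maps an entry alias to its certificate's notAfter timestamp.
entries = {
    "machine": "Oct 15 11:14:29 2025 GMT",
    "vsphere-webclient": "Jan 30 08:00:00 2030 GMT",
}

def expiring(entries, within_days=90, now=None):
    """Return aliases whose certificates expire within `within_days` days."""
    now = time.time() if now is None else now
    horizon = now + within_days * 24 * 3600
    return [alias for alias, not_after in entries.items()
            if ssl.cert_time_to_seconds(not_after) <= horizon]
```

Running `expiring(entries)` periodically would surface certificates needing renewal well before they cause login or service failures.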

Certificate Authority

Next up: the Certificate Authority option, which shows a view similar to the following:

JMcDonald 6

This area shows the Active, Revoked, Expired and Root Certificate for the VMware Certificate Authority. It also provides the option to be able to show details of each certificate for auditing or review purposes:

JMcDonald 7

In addition to providing a review, the Root Certificate Tab also allows the additional functionality of replacing the root certificate:

JMcDonald 8

When you go here to do just that, you are prompted to input the new Certificate and Private Key:

JMcDonald 9

Once processed, the new certificate will show up in the list.
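Before uploading a replacement root in this tab, it is worth a quick sanity check that the candidate really is a currently valid, self-signed (root) certificate, since an invalid root will cause later operations to fail. A minimal sketch, assuming `openssl` is on the PATH; the `root.pem` path is hypothetical:

```python
import subprocess

def _x509_field(pem_path: str, flag: str) -> str:
    """Return one field (e.g. -issuer or -subject) of a PEM certificate."""
    out = subprocess.run(["openssl", "x509", "-noout", flag, "-in", pem_path],
                         capture_output=True, text=True, check=True)
    return out.stdout.split("=", 1)[1].strip()

def root_cert_ok(pem_path: str) -> bool:
    """True if the certificate has not expired and is self-signed (a root)."""
    # -checkend 0 exits non-zero if the certificate is already expired.
    not_expired = subprocess.run(
        ["openssl", "x509", "-noout", "-checkend", "0", "-in", pem_path],
        capture_output=True).returncode == 0
    return not_expired and (_x509_field(pem_path, "-issuer")
                            == _x509_field(pem_path, "-subject"))
```

A call such as `root_cert_ok("root.pem")` returning False is a signal to fix the certificate before pasting it into the replacement dialog.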

Certificate Management

Finally, and by far the most complex, is the Certificate Management screen. When you first click this, you will need to enter the single sign-on credentials for the server you want to connect to. In this case, it is to the local Platform Services Controller:

JMcDonald 10

Once logged in, the interface looks as follows:

JMcDonald 11

Don’t worry, however: the user or server selection is not permanent, and can be changed by clicking the logout button. This interface allows the Machine Certificates and Solution User Certificates to be viewed, renewed and changed as appropriate.

If the Renew button is clicked, the certificate will be renewed from the VMware Certificate Authority.

JMcDonald 12

Once complete, the following message is presented:

JMcDonald Renewal

Replacing a certificate is similar to the process of replacing the root certificate:

JMcDonald Root

Remember that the root certificate must be valid (or replaced first), or the installation will fail. Finally, the last screenshot I will show is the Solution Users screen:

JMcDonald Solutions

The notable difference here is that there is a Renew All button, which will allow for all the solution user certificates to be changed.

This new interface for certificates is the start of something amazing, and I can’t wait to see the continued development in the future. Although it is still a tech preview, from my own testing it seems to work very well. Of course, my environment is a pretty clean one, with little of the environmental complexity that can sometimes produce unexpected results.

For further details on the exact steps you should take to replace the certificates (including all of the command-line steps, which are still available as per my last blog), see Replacing default certificates with CA signed SSL certificates in vSphere 6.0 (2111219).

I hope this blog series has been useful to you – it is definitely something I am passionate about, so I could write about it for hours! I will be writing next about my experiences at VMworld, and will hopefully address the most common concerns I heard from customers there.

Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.


New! Making Your Move to the Cloud – Part 2: Best Practices for Building a Migration Plan

In Part 2 of the “Making Your Move to the Cloud” series, Michael Francis and RadhaKrishna (RK) Dasari discuss best practices for building a Cloud migration plan for your organization.

“The primary technological goal of any migration project is to transfer an existing compute resource/application from one environment to another as quickly, efficiently, and cost-effectively as possible. This is especially critical when considering a migration to a public cloud; considerations must include security controls, latency and subsequent performance, operations practices for backup/recovery, and others. Though VMware now provides technology in Hybrid Cloud Manager to minimize or even eliminate downtime, these considerations persist. VMware Professional Services often assists customers with assessing their environment and generating an effective, actionable migration plan to achieve this goal. This approach adopts the philosophy of spending time up front on the plan to reduce the duration, and increase the likelihood of success, of the subsequent migration project.”

Read the full blog for more information: http://vmw.re/1LhuAQ8