
Tag Archives: IT Architecture

Cloud Pod Architecture and Cisco Nexus 1000V Bug

By Jeremy Wheeler

I once worked with a customer who owned two vBlocks across two data centers and ran the Nexus 1000V for the virtual networking component. They deployed VDI, and when we enabled Cloud Pod Architecture, global data replication worked great; however, all of the connection servers in the remote pod showed red or offline. I found that we could not telnet to the internal pod or remote pod connection servers over port 8472; all other ports were fine. VMware Support confirmed the issue was with the Nexus 1000V: a bug in the N1KV involving its TCP Checksum Offload function.

The specific ports in question are the following:

VMware View Port 8472 – The View Interpod API (VIPA) interpod communication channel runs on this port. View Connection Server instances use the VIPA interpod communication channel to launch new desktops, find existing desktops, and share health status data and other information.

Cisco Nexus 1000V Port 8472 – VXLAN; Cisco posted a bug report about 8472 being dropped at the VEM for N1KV: Cisco Bug: CSCup55389 – Traffic to TCP port 8472 dropped on the VEM

The bug report mentions TCP Checksum being the root cause and offloading only 8472 packets. If removing the N1KV isn’t an option, you can disable TCP Offloading.

To Disable TCP Offloading

  • In the Windows server, open the Control Panel and select Network Settings > Change Adapter Settings.
    JWheeler Ethernet Adapter Properties 1
  • Right-click each of the adapters (private and public), select Configure from the Networking menu, and then click the Advanced tab. The TCP Offload settings are listed for the adapter.
    JWheeler Ethernet Adapter Properties 2

I recommend disabling the following settings:

  • IPv4 Checksum Offload
  • Large Receive Offload (was not present for our vmxnet3 advanced configuration)
  • Large Send Offload
  • TCP Checksum Offload

You will need to do this on each of the VMXNET3 adapters on each connection server at both data centers. Once the settings were disabled (this did cause the NIC to blip briefly), we were able to telnet between the data centers on port 8472 again.
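To confirm the VIPA channel is reachable again without hunting for a telnet client, a quick TCP connect test works just as well. Below is a minimal Python sketch; the connection server hostnames in the commented usage are placeholders, not names from this environment:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage against each pod's connection servers:
# for server in ("cs-01.site-a.example", "cs-01.site-b.example"):
#     print(server, "VIPA port 8472 reachable:", port_open(server, 8472))
```

Run it from a connection server in each pod; every peer should report True once offloading is disabled.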

After making these adjustments you should be able to log in to the View Admin portal and see all green for the remote connection servers. I have tested and validated this, and it works as intended. For more information, I recommend reading Understanding TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) in a VMware environment (2055140).


Jeremy Wheeler is an experienced senior consultant and architect for VMware’s Professional Services Organization, End-user Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight Manager. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full-lifecycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, PERL, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware’s Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October of 2013.

Horizon View 6.2 and Blackscreens

By Jeremy Wheeler

With the release of Horizon View 6.2 and vSphere 6.0 Update 1a come new features – but also possible new issues. If you have an environment running Horizon 6.2 on anything below vSphere 6.0 Update 1, you might see issues with your VDI desktops. VMware introduced a new video driver (version 6.23) in View 6.2 that greatly improves speed and quality, but to utilize it fully you need to be on the latest vSphere bits. Customers who have not upgraded to the latest bits have reported VDI desktops black-screening and disconnecting. One fix for affected images is to upgrade or replace the video driver inside the Guest OS of the Gold Image.

To uninstall the old video driver inside your Gold Image Guest OS follow these steps:

  1. Uninstall the View Agent
  2. Delete Video Drivers from Windows Device Manager
    • Expand Device Manager and Display Adapters
    • Right-click on the VMware SVGA 3D driver and select Uninstall
      JWheeler Uninstall
    • Select the checkbox ‘Delete the driver software for this device.’
      JWheeler Confirm Device Uninstall
  3. Reboot and let Windows rescan
  4. Verify that Windows is using its bare-bones SVGA driver (if not, delete the driver again)
  5. Install View Agent 6.2

Note: Do NOT update VMware tools or you will have to repeat this sequence unless you upgraded the View Agent.

Optional Steps:

If you want to update the video driver without re-installing the View Agent, follow these steps:

  1. Launch View Agent 6.2 installer MSI (only launch the installer, do not proceed through the wizard!)
  2. Change to the %temp% folder and sort the contents by date/time
  3. Look for the most recent long folder name, for example:
    JWheeler Temp File Folder
  4. Change into the directory and look for the file ‘VmVideo.cab’
    JWheeler VmVideo
  5. Copy the ‘VmVideo.cab’ file to a temp folder (e.g., C:\Temp)
  6. Extract all files from the VmVideo.cab file. You should see something like this:
    JWheeler Local Temp File
  7. You can execute the following type of syntax for extraction:
    – extract /e /a /l <destination> <drive>:\<cabinetname>
    Reference Microsoft KB 132913 for additional information.
  8. You need to rename each file, so remove the prefix ‘_’ and anything after the extension of the filename. Example:
    JWheeler Local Disk Temp Folder 2
  9. Install View Agent 6.2 video drivers:
    1. Once rebooted in the device manager expand ‘Display Adapter’
    2. Right-click on the ‘Microsoft Basic Display Adapter’ and click ‘Update Driver Software’
    3. Select ‘Browse my computer for driver software’
    4. Select ‘Browse’ and point to the temp folder where you expanded and renamed all the View 6.2 drivers
    5. Select ‘Next’ and complete the video driver installation.
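The renaming in step 8 (strip the leading ‘_’ and everything after the real extension) is tedious by hand and easy to script. Here is a minimal Python sketch; the exact file names inside VmVideo.cab may differ, so the assumed extensions (.dll/.sys/.inf/.cat) are illustrative:

```python
import os
import re

def clean_extracted_names(folder: str) -> None:
    """Rename cab-extracted files like '_vm3d.dll_0001' to 'vm3d.dll':
    drop leading underscores and anything after the driver-file extension."""
    for name in os.listdir(folder):
        stripped = name.lstrip("_")
        match = re.match(r"^(.*?\.(dll|sys|inf|cat))", stripped, re.IGNORECASE)
        if match and match.group(1) != name:
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, match.group(1)))
```

Point it at the temp folder (e.g., C:\Temp) after extraction and verify the results in Explorer before installing the driver.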

After re-installing the View Agent and/or replacing the video drivers, you will need to do the following:

  1. Power-down the Gold Image (execute any power-down scripts or tasks as you normally do)
  2. Snapshot the VM
  3. Modify the View pool to point to the new snapshot
  4. Execute a recompose

Special thanks to Matt Mabis (@VDI_Tech_Guy) on discovering this fix.



Configuring NSX-v Load Balancer for use with vSphere Platform Services Controller (PSC) 6.0

By Romain Decker

VMware introduced a new component with vSphere 6, the Platform Services Controller (PSC). Coupled with vCenter, the PSC provides several core services, such as Certificate Authority, License service and Single Sign-On (SSO).

Multiple external PSCs can be deployed serving one (or more) service, such as vCenter Server, Site Recovery Manager or vRealize Automation. When deploying the Platform Services Controller for multiple services, availability of the Platform Services Controller must be considered. In some cases, having more than one PSC deployed in a highly available architecture is recommended. When configured in high availability (HA) mode, the PSC instances replicate state information between each other, and the external products (vCenter Server for example) interact with the PSCs through a load balancer.

This post covers the configuration of an HA PSC deployment with the benefits of using NSX-v 6.2 load balancing feature.

Due to the relationship between vCenter Server and NSX Manager, two different scenarios emerge:

  • Scenario A where both PSC nodes are deployed from an existing management vCenter. In this situation, the management vCenter is coupled with NSX which will configure the Edge load balancer. There are no dependencies between the vCenter Server(s) that will use the PSC in HA mode and NSX itself.
  • Scenario B where there is no existing vCenter infrastructure (and thus no existing NSX deployment) when the first PSC is deployed. This is a classic “chicken and egg” situation, as the NSX Manager that is actually responsible for load balancing the PSC in HA mode is also connected to the vCenter Server that uses the PSC virtual IP.

While scenario A is straightforward, you need to respect a specific order for scenario B to prevent any loss of connection to the Web client during the procedure. The solution is to deploy a temporary PSC in a temporary SSO site to do the load balancer configuration, and to repoint the vCenter Server to the PSC virtual IP at the end.

Please note that scenario B is only supported with vSphere 6.0 as repointing a vCenter between sites in a SSO domain is no longer supported in vSphere 6.5 (KB 2131191).

Both scenario steps are summarized in the workflow below.

RDecker PSC Map

Environment

NSX Edge supports two deployment modes: one-arm mode and inline mode (also referred to as transparent mode). While inline mode is also possible, NSX load balancer will be deployed in a one-arm mode in our situation, as this model is more flexible and because we don’t require full visibility into the original client IP address.

Description of the environment:

  • Software versions: VMware vCenter Server 6.0 U1 Appliance, ESXi 6.0 U1, NSX-v 6.2
  • NSX Edge Services Gateway in one-arm mode
  • Active/Passive configuration
  • VLAN-backed portgroup (distributed portgroup on DVS)
  • General PSC/vCenter and NSX prerequisites validated (NTP, DNS, resources, etc.)

To offer SSO in HA mode, two PSC servers have to be installed, with NSX load balancing them in active/standby mode. Active/Active mode is currently not supported by the PSC.

The way SSO operates, it is not possible to configure it as active/active. The workaround for the NSX configuration is to use an application rule and to configure two different pools (with one PSC instance in each pool). The application rule will send all traffic to the first pool as long as the pool is up, and will switch to the secondary pool if the first PSC is down.
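The failover behavior that application rule produces can be sketched in a few lines. The pool names below match the HTTP pools configured later in this post; this is an illustration of the logic, not NSX configuration:

```python
def select_backend(primary_members_up: int,
                   primary: str = "pool_psc-01a-http",
                   secondary: str = "pool_psc-02a-http") -> str:
    """Mimic the application rule: fall back to the secondary pool
    only when the primary pool has zero healthy members."""
    return secondary if primary_members_up == 0 else primary
```

While the primary pool reports at least one healthy member, all traffic stays on it; only when every member is down does the rule switch backends, matching the PSC's active/passive model.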

The following is a representation of the NSX-v and PSC logical design.

RDecker PSC NSX

Procedure

Each step number refers to the above workflow diagram. You can take snapshots at regular intervals to be able to rollback in case of a problem.

Step 1: Deploy infrastructure

This first step consists of deploying the required vCenter infrastructure before starting the configuration.

A. For scenario A: Deploy two PSC nodes in the same SSO site.

B. For scenario B:

  1. Deploy a first standalone Platform Services Controller (PSC-00a). This PSC will be used temporarily during the configuration.
  2. Deploy a vCenter instance against the PSC-00a just deployed.
  3. Deploy NSX Manager and connect it to the vCenter.
  4. Deploy two other Platform Services Controllers in the same SSO domain (PSC-01a and PSC-02a) but in a new site. Note: vCenter will still be pointing to PSC-00a at this stage. Use the following options:
    RDecker PSC NSX Setup 1RDecker PSC NSX Setup 2

Step 2 (both scenarios): Configure both PSCs as an HA pair (up to step D in KB 2113315).

Now that all required external Platform Services Controller appliances are deployed, it’s time to configure high availability.

A. PSC pairing

  1. Download the PSC high availability configuration scripts from the Download vSphere page and extract the content to /ha on both the PSC-01a and PSC-02a nodes. Note: Use KB 2107727 to enable the Bash shell so you can copy files into the appliances over SCP.
  2. Run the following command on the first PSC node:
    python gen-lb-cert.py --primary-node --lb-fqdn=load_balanced_fqdn --password=<yourpassword>

    Note: The load_balanced_fqdn parameter is the FQDN of the PSC virtual IP on the load balancer. If you don’t specify the --password option, the default password will be “changeme”.
    For example:

    python gen-lb-cert.py --primary-node --lb-fqdn=psc-vip.sddc.lab --password=brucewayneisbatman
  3. On the PSC-01a node, copy the content of the directory /etc/vmware-sso/keys to /ha/keys (a new directory that needs to be created).
  4. Copy the content of the /ha folder from the PSC-01a node to the /ha folder on the additional PSC-02a node (including the keys copied in the step before).
  5. Run the following command on the PSC-02a node:
python gen-lb-cert.py --secondary-node --lb-fqdn=load_balanced_fqdn --lb-cert-folder=/ha --sso-serversign-folder=/ha/keys

Note: The load_balanced_fqdn parameter is the FQDN of the load balancer address (or VIP).

For example:

python gen-lb-cert.py --secondary-node --lb-fqdn=psc-vip.sddc.lab --lb-cert-folder=/ha --sso-serversign-folder=/ha/keys

Note: If you’re following KB 2113315, don’t forget to stop the configuration here (end of section C in the KB).

Step 3: NSX configuration

An NSX edge device must be deployed and configured for networking in the same subnet as the PSC nodes, with at least one interface for configuring the virtual IP.

A. Importing certificates

Enter the configuration of the NSX edge services gateway on which to configure the load balancing service for the PSC, and add a new certificate in the Settings > Certificates menu (under the Manage tab). Use the content of the previously generated /ha/lb.crt file as the load balancer certificate and the content of the /ha/lb_rsa.key file as the private key.

RDecker PSC Certificate Setup

B. General configuration

Enable the load balancer service and logging under the global configuration menu of the load balancer tab.

RDecker PSC Web Client

C. Application profile creation

An application profile defines the behavior of a particular type of network traffic. Two application profiles have to be created: one for HTTPS protocol and one for other TCP protocols.

Parameters                      HTTPS application profile   TCP application profile
Name                            psc-https-profile           psc-tcp-profile
Type                            HTTPS                       TCP
Enable Pool Side SSL            Yes                         N/A
Configure Service Certificate   Yes                         N/A

Note: The other parameters shall be left with their default values.

RDecker PSC Edge

D. Creating pools

NSX load balancer virtual servers of type HTTP/HTTPS provide a web protocol sanity check for their backend server pools. However, we do not want that sanity check applied to the backend pool of the TCP virtual server. For that reason, different pools must be created for the PSC HTTPS virtual IP and the TCP virtual IP.

Four pools have to be created: two different pools for each virtual server (with one PSC instance per pool). An application rule will be defined to switch between them in case of a failure: traffic will be sent to the first pool as long as that pool is up, and will switch to the secondary pool if the first PSC is down.

Parameters     Pool 1                Pool 2                Pool 3                Pool 4
Name           pool_psc-01a-http     pool_psc-02a-http     pool_psc-01a-tcp      pool_psc-02a-tcp
Algorithm      ROUND-ROBIN           ROUND-ROBIN           ROUND-ROBIN           ROUND-ROBIN
Monitors       default_tcp_monitor   default_tcp_monitor   default_tcp_monitor   default_tcp_monitor
Members        psc-01a               psc-02a               psc-01a               psc-02a
Monitor Port   443                   443                   443                   443

Note: while you could use a custom HTTPS healthcheck, I selected the default TCP Monitor in this example.

RDecker PSC Edge 2 (Pools)

E. Creating application rules

This application rule will contain the logic that will perform the failover between the pools (for each virtual server) corresponding to the active/passive behavior of the PSC high availability mode. The ACL will check if the primary PSC is up; if the first pool is not up the rule will switch to the secondary pool.

The first application rule will be used by the HTTPS virtual server to switch between the corresponding pools for the HTTPS backend servers pool.

# Detect if pool "pool_psc-01a-http" is still UP
acl pool_psc-01a-http_down nbsrv(pool_psc-01a-http) eq 0
# Use pool "pool_psc-02a-http" if "pool_psc-01a-http" is dead
use_backend pool_psc-02a-http if pool_psc-01a-http_down

The second application rule will be used by the TCP virtual server to switch between the corresponding pools for the TCP backend servers pool.

# Detect if pool "pool_psc-01a-tcp" is still UP
acl pool_psc-01a-tcp_down nbsrv(pool_psc-01a-tcp) eq 0
# Use pool "pool_psc-02a-tcp" if "pool_psc-01a-tcp" is dead
use_backend pool_psc-02a-tcp if pool_psc-01a-tcp_down

RDecker PSC Edge 3 (app rules)

F. Configuring virtual servers

Two virtual servers have to be created: one for HTTPS protocol and one for the other TCP protocols.

Parameters            HTTPS Virtual Server            TCP Virtual Server
Application Profile   psc-https-profile               psc-tcp-profile
Name                  psc-https-vip                   psc-tcp-vip
IP Address            IP address corresponding to the PSC virtual IP (both)
Protocol              HTTPS                           TCP
Port                  443                             389,636,2012,2014,2020*
Default Pool          pool_psc-01a-http               pool_psc-01a-tcp
Application Rules     psc-failover-apprule-http       psc-failover-apprule-tcp

* Although this procedure is for a fresh install, you could target the same architecture with SSO 5.5 being upgraded to PSC. If you plan to upgrade from SSO 5.5 HA, you must add the legacy SSO port 7444 to the list of ports in the TCP virtual server.

RDecker PSC Edge 4 (VIP)

Step 4 (both scenarios)

Now it’s time to finish the PSC HA configuration (step E of KB 2113315). Update the endpoint URLs on PSC with the load_balanced_fqdn by running this command on the first PSC node.

python lstoolHA.py --hostname=psc_1_fqdn --lb-fqdn=load_balanced_fqdn --lb-cert-folder=/ha --user=Administrator@vsphere.local

Note: psc_1_fqdn is the FQDN of the first PSC-01a node and load_balanced_fqdn is the FQDN of the load balancer address (or VIP).

For example:

python lstoolHA.py --hostname=psc-01a.sddc.lab --lb-fqdn=psc-vip.sddc.lab --lb-cert-folder=/ha --user=Administrator@vsphere.local

Step 5

A. Scenario A: Deploy any new production vCenter Server or other components (such as vRA) against the PSC Virtual IP and enjoy!

B. Scenario B

Please note that scenario B is only supported with vSphere 6.0 as repointing a vCenter between sites in a SSO domain is no longer supported in vSphere 6.5 (KB 2131191).

The situation is the following: The vCenter is currently still pointing to the first external PSC instance (PSC-00a), and two other PSC instances are configured in HA mode, but are not used.

RDecker Common SSO Domain vSphere

vSphere 6.0 Update 1 introduced the ability to move a vCenter Server between SSO sites within a vSphere domain (see KB 2131191 for more information). In our situation, we have to re-point the existing vCenter, currently connected to the external PSC-00a, to the PSC virtual IP:

  1. Download and replace the cmsso-util file on your vCenter Server using the actions described in the KB 2113911.
  2. Re-point the vCenter Server Appliance to the PSC virtual IP in the final site by running this command:
/bin/cmsso-util repoint --repoint-psc load_balanced_fqdn

Note: The load_balanced_fqdn parameter is the FQDN of the load balancer address (or VIP).

For example:

/bin/cmsso-util repoint --repoint-psc psc-vip.sddc.lab

Note: This command will also restart vCenter services.

  3. Move the vCenter services registration to the new SSO site. When a vCenter Server is installed, it creates service registrations that it issues to start the vCenter Server services. These service registrations are written to the specific site of the Platform Services Controller (PSC) that was used during installation. Use the following command to update the vCenter Server services registrations (parameters will be requested at the prompt).
/bin/cmsso-util move-services

After running the command, you end up with the following.

RDecker PSC Common SSO Domain vSphere 2

  4. Log in to your vCenter Server instance by using the vSphere Web Client to verify that the vCenter Server is up and running and can be managed.

RDecker PSC Web Client 2

In the context of scenario B, you can always re-point to the previous PSC-00a if you cannot log in or if you get an error message. When you have confirmed that everything is working, you can remove the temporary PSC (PSC-00a) from the SSO domain with this command (KB 2106736):

cmsso-util unregister --node-pnid psc-00a.sddc.lab --username administrator@vsphere.local --passwd VMware1!

Finally, you can safely decommission PSC-00a.

RDecker PSC Common SSO Domain vSphere 3

Note: If your NSX Manager was configured with Lookup Service, you can update it with the PSC virtual IP.


Romain Decker is a Senior Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) portfolio – a part of the Global Technical & Professional Solutions (GTPS) team.

User Environment Manager 8.7 working with Horizon 6.2

By Dale Carter

With the release of VMware User Environment Manager 8.7, VMware added a number of new features, all of which you will find in the VMware User Environment Manager Release Notes.

However, in this blog I would like to call out two new features that help when deploying User Environment Manager alongside VMware Horizon 6.2. In my opinion, VMware’s EUC teams did a great job getting these two features added or enhanced to work with Horizon 6.2 in the latest releases.

Terminal Server Client IP Address or Terminal Server Client Name

The first feature, which has been enhanced to work with Horizon 6.2, is one I think will have a number of benefits. This feature gives support for detecting client IP and client names in Horizon View 6.2 and later. With this feature it is now possible to apply conditions based on the location of your physical device.

An example would be if a user connects to a virtual desktop or RDS host from their physical device in the corporate office, an application could be configured to map a drive to corporate data or configure a printer in the office. However, if the user connects to the same virtual desktop or RDS host from a physical device at home or on an untrusted network, and launches the same application, then the drive or printer may not be mapped to the application.

Another example would be to combine the Terminal Server Client IP Address or Terminal Server Client Name with a triggered task. This way you could connect/disconnect a different printer at login/logoff or disconnect/reconnect depending on where the user is connecting from.

To configure a mapped drive or printer that will be assigned when on a certain network, you would use the Terminal Server Client IP Address or Terminal Server Client Name condition as shown below.

DCarter Drive Mapping

If you choose to limit access via the physical client name, this can be done using a number of different options.

DCarter Terminal Server Client Name 1

On the other hand, if you choose to limit access via the IP address, you can use a range of addresses.

DCarter Terminal Server Client 2
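The kind of range check this condition performs can be illustrated with Python's ipaddress module. The corporate subnet below is a made-up example, and the function is a sketch of the decision UEM makes, not UEM's own code:

```python
import ipaddress

# Hypothetical on-site client range; substitute your corporate subnet.
CORPORATE_NET = ipaddress.ip_network("10.20.0.0/16")

def is_corporate_client(client_ip: str) -> bool:
    """True when the physical endpoint's IP falls inside the trusted range."""
    return ipaddress.ip_address(client_ip) in CORPORATE_NET

# Map the corporate drive/printer only when the check passes:
# if is_corporate_client(terminal_server_client_ip): map_corporate_drive()
```

A client at 10.20.5.9 would get the office drive mapping, while a home client at 192.168.1.10 would not.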

Detect PCoIP and Blast Connections

The second great new feature is the ability to detect if the user is connecting to the virtual desktop or RDS host via a PCoIP or Blast connection.

The Remote Display Protocol setting was already in the User Environment Manager, but as you can see below it now includes the Blast and PCoIP protocols.

DCarter Remote Display Protocol


This feature has many uses, one of which could be to limit what icons a user sees when using a specific protocol.

For example, perhaps you only allow users to connect to their virtual desktops or RDS hosts remotely using the Blast protocol, but when they are on the corporate network they use PCoIP. You could then limit applications that access sensitive data to appear in the Start menu or on the desktop only when the user is connecting over PCoIP.

Of course you could also use the Terminal Server Client IP Address or Terminal Server Client Name to limit the user from seeing an application based on their physical IP address or physical name.

The examples in this blog are just a small number of uses for these great new and enhanced features, and I would encourage everyone to download User Environment Manager 8.7 and Horizon 6.2 to see how they can help in your environment.


Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Compute space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years of experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on twitter @vDelboy

Improving Internal Data Center Security with NSX-PANW Integration

By Dharma Rajan

Today’s data center (DC) typically has one or more firewalls at the perimeter securing it with a strong defense, thus preventing threats to the DC. Today, applications and their associated content can easily bypass a port-based firewall using a variety of techniques. If a threat enters, the attack surface area is large. Typically the low-priority systems are often the target, as activity on those may not be monitored. Today within the DC more and more workloads are being virtualized. Thus the East-West traffic between virtual machines within the DC has increased substantially compared to the North-South traffic.

Threats such as data-stealing malware, web threats, spam, Trojans, worms, viruses, spyware, and bots can spread fast and cause serious damage once they enter. For example, dormant virtual machines can be a risk when they are powered back up, because they may not have been receiving patches or anti-malware updates, making them vulnerable to security threats. When an attack happens it can move quickly and compromise critical systems, which needs to be prevented. In many cases the attack goes unnoticed until an event triggers investigation, by which time valuable data may have been compromised or lost.

Thus it is critical that proper internal controls and security measures are applied at the virtual machine level to reduce the attack surface within the data center. So how do we do that, and evolve the traditional data center into a more secure environment that overcomes today’s challenges, without additional costly hardware?

Traditional Model for Data Center Network Security

In the traditional model, we base the network architecture on a combination of perimeter-level security and Layer 2 VLAN segmentation. This model worked, but as we virtualize more and more workloads and the data center grows, we hit the boundaries of VLANs: VLAN sprawl, and an ever-increasing number of firewall rules to create and manage. Based on RFC 5517, the maximum number of VLANs that can be provisioned is 4,094. All this adds complexity to the traditional network architecture model of the data center. Other key challenges customers run into in production data centers are too many firewall (FW) rules to create, poor documentation, and the fear of deleting FW rules when a virtual machine is deleted. Thus flexibility is lost, and holes remain for attackers to use as entry points.

Once security is compromised at one VLAN level, the spread across the network—be it Engineering VLAN, Finance VLAN, etc.—does not take very long. So the key is not just how to avoid attacks, but also—if one occurs—how to contain the spread of an attack.

DRajan Before and After Attack

Reducing Attack Surface Area

The first thing that might come to one’s mind is, “How do we prevent and isolate the spread of an attack if one occurs?” We start to look at this by keeping an eye on certain characteristics that make up today’s data centers – which are becoming more and more virtualized. With a high degree of virtualization and increased East-West data center traffic, we need certain dynamic ways to identify, isolate, and prevent attacks, and also automated ways to create FW rules and tighten security at the virtual machine level. This is what leads us to VMware NSX—VMware’s network virtualization platform—which provides the virtual infrastructure security, by way of micro-segmenting, today’s data center environments need.

Micro-Segmentation Principle

As a first step let’s take a brief look at the NSX platform and its components:

DRajan NSX Switch

The data plane consists of the NSX vSwitch: vSphere Distributed Switches (vDS) plus firewall hypervisor extension modules that run at the kernel level and provide Distributed Firewall (DFW) functionality at line-rate speed and performance.

The NSX Edge can provide edge/perimeter firewalling on the Internet-facing side. The NSX Controller is the control-plane component, deployed for high availability. The NSX Manager is the management-plane component that communicates with the vCenter infrastructure.

By micro-segmenting and applying firewall rules at the virtual machine level, we control traffic flow on the egress side: rules are validated at the virtual machine itself, so traffic avoids the extra hops and hairpinning of being sent to a physical firewall for validation. We also gain good visibility of the traffic needed to monitor and secure the virtual machine.

Micro-segmentation is based on a simple starting principle: assume everything is a threat and act accordingly. This is the “zero trust” model. It implicitly requires entities that need access to resources to prove they are legitimate before gaining access to the identified resource.

With a zero-trust baseline of “deny by default”, we then selectively relax it by applying design principles that enable us to build a cohesive yet scalable architecture that can be controlled and managed well. We define three key design principles.
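To make the “deny by default” baseline concrete, here is a toy first-match rule evaluator. The tier names and rule format are invented for illustration; this is not NSX DFW syntax, just the zero-trust decision logic:

```python
# Illustrative only: explicit allow rules with a default-deny fallthrough.
RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "app-tier", "dst": "db-tier",  "port": 3306, "action": "allow"},
]

def evaluate(src: str, dst: str, port: int) -> str:
    """Return the action of the first matching rule; deny anything unmatched."""
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src, dst, port):
            return rule["action"]
    return "deny"  # zero trust: anything not explicitly allowed is denied
```

Web servers reaching the app tier on 8443 are allowed, but the same web tier talking straight to the database is dropped because no rule permits it.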

1) Isolation and segmentation – Isolation is the foundation of most network security, whether for compliance, containment or simply keeping development, test and production environments from interacting. Segmentation from a firewalling point of view refers to micro-segmentation on a single Layer 2 segment using DFW rules.

2) Unit-level trust/least privilege – provide access at the most granular entity needed for that user, be it a virtual machine or something within the virtual machine.

3) Ubiquity and centralized control – this enables control and monitoring of activity by using the NSX Controller (a centralized control plane), the NSX Manager, and cloud management platforms that provide integrated management.

Using the above principle, we can lay out an architecture for any greenfield or brownfield data center environment that will help us micro-segment the network in a manner that is architecturally sound, flexible to adapt, and enables safe application enablement with the ability to integrate advanced services.
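The zero-trust, deny-by-default evaluation that underpins these principles can be sketched in a few lines. This is purely illustrative (it is not NSX's actual DFW rule engine); the group names, ports, and rule shape are made up for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    source: str   # security-group name, or "any"
    dest: str
    port: int
    action: str   # "allow" or "deny"

def evaluate(rules, source_group, dest_group, port):
    """First matching rule wins; anything unmatched is denied (zero trust)."""
    for rule in rules:
        if (rule.source in (source_group, "any")
                and rule.dest in (dest_group, "any")
                and rule.port == port):
            return rule.action
    return "deny"  # default posture: assume everything is a threat

rules = [
    Rule("web-tier", "app-tier", 8443, "allow"),
    Rule("app-tier", "db-tier", 3306, "allow"),
]

print(evaluate(rules, "web-tier", "app-tier", 8443))  # allow
print(evaluate(rules, "web-tier", "db-tier", 3306))   # deny: no rule permits it
```

The key design point is the final `return "deny"`: access exists only where a rule explicitly grants it, which is the "prove you are legitimate" posture described above.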

DRajan Micro Segmentation

 

Dynamic Traffic Steering

Network security teams are often challenged to coordinate network security services from multiple vendors in relation to one another. Another powerful benefit of the NSX approach is its ability to build security policies that leverage NSX service insertion, dynamic service chaining, and traffic steering to drive service execution in the logical services pipeline, based on the results of other services. This makes it possible to coordinate otherwise completely unrelated network security services from multiple vendors. For example, with advanced service chaining we can, at a specific layer, direct specific traffic to a Palo Alto Networks (PANW) VM-Series virtual firewall for scanning and threat identification, and take the necessary action—quarantining an application if required.

Palo Alto Networks VM-series Firewalls Integration with NSX

The Palo Alto Networks next-generation firewall integrates with VMware NSX at the ESXi server level to provide comprehensive visibility and safe application enablement of all data center traffic including intra-host virtual machine communications. Panorama is the centralized management tool for the VM-series firewalls. Panorama works with the NSX Manager to deploy the license and centrally administer configuration and policies on the VM-series firewall.

The first step of integration is for Panorama to register the VM-series firewall on the NSX manager. This allows the NSX Manager to deploy the VM-series firewall on each ESXi host in the ESXi cluster. The integration with the NetX API makes it possible to automate the process of installing the VM-series firewall directly on the ESXi hypervisor, and allows the hypervisor to forward traffic to the VM-series firewall without using the vSwitch configuration. It therefore requires no change to the virtual network topology.

DRajan Panorama Registration with NSX

To redirect traffic, the NSX Service Composer is used to create security groups and define network introspection rules that specify which guest traffic is steered to the VM-series firewall. For traffic that needs to be inspected and secured by the VM-series firewall, the Service Composer policies redirect it to the Palo Alto Networks next-generation firewall (NGFW) service; that traffic is then processed by the VM-series firewall before it reaches the virtual switch.

Traffic that does not need to be inspected by the VM-series firewall, for example, network data backup or traffic to an internal domain controller, does not need to be redirected to the VM-series firewall and can be sent to the virtual switch for onward processing.

The NSX Manager sends real-time updates on changes in the virtual environment to Panorama. Firewall rules are centrally defined and managed in Panorama and pushed to the VM-series firewalls, which enforce security policies by matching source or destination IP addresses. Dynamic Address Groups allow the firewall to populate group membership in real time, keeping policy current for the traffic the NSX filters forward to it.
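The idea behind Dynamic Address Groups can be sketched conceptually: membership is computed from workload metadata (a tag here) rather than maintained as a static IP list, so policy follows the VM. This is an illustration only; the VM names, IPs, and tag values are invented, and real membership criteria are configured in Panorama, not in code like this.

```python
def dynamic_address_group(vms, match_tag):
    """Return the current set of IPs for VMs carrying match_tag."""
    return {vm["ip"] for vm in vms if match_tag in vm["tags"]}

inventory = [
    {"name": "web-01", "ip": "10.0.1.11", "tags": {"quarantine"}},
    {"name": "web-02", "ip": "10.0.1.12", "tags": {"web"}},
]

quarantined = dynamic_address_group(inventory, "quarantine")
print(sorted(quarantined))  # ['10.0.1.11']

# A newly compromised VM is re-tagged; the group picks it up on the next
# evaluation with no change to the firewall rule that references the group.
inventory[1]["tags"].add("quarantine")
quarantined = dynamic_address_group(inventory, "quarantine")
print(sorted(quarantined))  # ['10.0.1.11', '10.0.1.12']
```

The rule itself never changes—only the group's resolved membership does—which is why policies "stay in tandem" with changes in the virtual environment.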

Integrated Solution Benefits

Better security – Micro-segmentation reduces the attack surface. It enables safe application enablement and protection against known and unknown threats in virtual and cloud environments, and the integration makes it faster to identify and isolate compromised applications.

Simplified deployment and faster secure service enablement – When a new ESXi host is added to a cluster, a new VM-series firewall is automatically deployed, provisioned and available for immediate policy enforcement without any manual intervention.

Operational flexibility – The automated workflow allows you to keep pace with VM deployments in your data center. The hypervisor mode on the firewall removes the need to reconfigure the ports/vSwitches/network topology; because each ESXi host has an instance of the firewall, traffic does not need to traverse the network for inspection and consistent enforcement of policies.

Selective traffic redirection – Only traffic that needs inspection by the VM-series firewall is redirected to it.

Dynamic security enforcement – The Dynamic Address Groups maintain awareness of changes in the virtual machines/applications and ensure that security policies stay in tandem with changes in the network.

Accelerated deployments of business-critical applications – Enterprises can provision security services faster and utilize capacity of cloud infrastructures, and this makes it more efficient to deploy, move and scale their applications without worrying about security.

For more information on NSX visit: http://www.vmware.com/products/nsx/

For more information on VMware Professional Services visit: http://www.vmware.com/consulting/


Dharma Rajan is a Solution Architect in the Professional Services Organization specializing in pre-sales for SDDC and driving NSX technology solutions to the field. His experience spans enterprise and carrier networks. He holds an MS degree in Computer Engineering from NCSU and an M.Tech degree in CAD from IIT.

VMware Certificate Authority, Part 3: My Favorite New Feature of vSphere 6.0 – The New!

jonathanm-profileBy Jonathan McDonald

In the last blog, I left off right after the architecture discussion. To be honest, this was not because I wanted to, but because I couldn’t say anything more about it at the time. As of September 10, vSphere 6.0 Update 1 has been released with some fantastic new features in this area that make the configuration of customized certificates even easier. What is shown here is a tech preview; however, it shows the direction development is headed. It is amazing when things just work out, and with a little bit of love an incredibly complex area becomes much easier.

This release includes a new UI for configuring the Platform Services Controller. It can be accessed by navigating to:

https://psc.domain.com/psc

When you first navigate here, a first time setup screen may be shown:

JMcDonald 1

To set up the configuration, log in with a Single Sign-On administrator account; the setup will run and complete in short order. On subsequent logins, the screen is plain and similar to the vSphere Web Client login:

JMcDonald 2
After login, the interface appears as follows:

JMcDonald 3

As you can see, it provides a ton of new and great functionality, including a GUI for installation of certificates! I will not be talking about the other features except to say there is some pretty fantastic content in there, including the single sign-on configuration, as well as appliance-specific configurations. I only expect this to grow in the future, but it is definitely amazing for a first start.

Let’s dig in to the certificate stuff.

Certificate Store

Navigating to the Certificate Store link lets you see all of the different certificate stores that exist on the VMware Certificate Authority system:

JMcDonald 4

This gives the option to view the details of all the different stores on the system, and to view, add, or remove the individual entries in each store:

JMcDonald 5
This is very useful when troubleshooting a configuration or for auditing/validating the different certificates that are trusted on the system.

Certificate Authority

Next up: the Certificate Authority option, which shows a view similar to the following:

JMcDonald 6

This area shows the Active, Revoked, and Expired certificates, as well as the Root Certificate, for the VMware Certificate Authority. It also provides the option to show the details of each certificate for auditing or review purposes:

JMcDonald 7

In addition to this review, the Root Certificate tab allows you to replace the root certificate:

JMcDonald 8

When you do, you are prompted to input the new certificate and private key:

JMcDonald 9

Once processed, the new certificate will show up in the list.

Certificate Management

Finally, and by far the most complex, is the Certificate Management screen. When you first click this, you will need to enter the single sign-on credentials for the server you want to connect to. In this case, it is to the local Platform Services Controller:

JMcDonald 10

Once logged in the interface looks as follows:

JMcDonald 11

Don’t worry, however—the user or server selection is not a one-time choice, and can be changed by clicking the logout button. This interface allows the Machine Certificates and Solution User Certificates to be viewed, renewed, and replaced as appropriate.

If the Renew button is clicked, the certificate will be renewed from the VMware Certificate Authority.

JMcDonald 12

Once complete, the following message is presented:

JMcDonald Renewal

If the certificate is to be replaced it is similar to the process of replacing the root certificate:

JMcDonald Root

Remember that the root certificate must be valid (or replaced first), or the installation will fail. Finally, the last screenshot I will show is the Solution Users screen:

JMcDonald Solutions

The notable difference here is that there is a Renew All button, which will allow for all the solution user certificates to be changed.

This new interface for certificates is the start of something amazing, and I can’t wait to see the continued development in the future. Although it is still a tech preview, from my own testing it seems to work very well. Of course, my environment is a pretty clean one, with little of the environmental complexity that can sometimes surface unexpected results.

For further details on the exact steps you should take to replace the certificates (including all of the command line steps, which are still available as per my last blog) see, Replacing default certificates with CA signed SSL certificates in vSphere 6.0 (2111219).

I hope this blog series has been useful to you—it is definitely something I am passionate about, so I can write about it for hours! I will be writing next about my experiences at VMworld, and will hopefully address the most common concerns I heard from customers while there.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

 

New! Making Your Move to the Cloud – Part 2: Best Practices for Building a Migration Plan

In Part 2 of the “Making Your Move to the Cloud” series, Michael Francis and RadhaKrishna (RK) Dasari discuss best practices for building a cloud migration plan for your organization.

“The primary technological goal of any migration project is to transfer an existing compute resource/application from one environment to another as quickly, efficiently, and cost-effectively as possible. This is especially critical when considering a migration to a public cloud; considerations must include security controls, latency and subsequent performance, operations practices for backup/recovery, and others. Though VMware now provides technology in Hybrid Cloud Manager to minimize or eliminate downtime, these considerations persist. VMware Professional Services often assist customers with assessing their environment and focusing on generating an effective, actionable migration plan to achieve this goal. It adopts the philosophy of spending time up front on the plan to reduce the duration, and increase the likelihood of success, of the subsequent migration project.”

Read the full blog for more information: http://vmw.re/1LhuAQ8 

VMware Certificate Authority, Part 2: My Favorite New Feature of vSphere 6.0 – The Architecture

jonathanm-profileBy Jonathan McDonald

Picking up where Part 1 left off, I will now discuss the architecture decisions I have seen commonly used for the VMware Certificate Authority. This comes from many conversations with customers, VMware Professional Services, VMware Engineering and even VMware Support. In addition to these sources I recently participated in many conversations at VMworld. I spoke at several sessions as well while I was manning the VMworld booth. I ended up with the opportunity to better appreciate the complexities, and also got to talk with some fantastic people about their environments.

Architecture of the Environment

Getting back to the conversation: the architecture of the environment is incredibly important to design up front. This lets an administrator avoid much of the complexity while keeping the environment secure. That being said, from my experience, environments are most frequently configured in one of three ways:

  • VMware Certificate Authority as the root CA in the default configuration
  • VMware Certificate Authority used but operating as a subordinate CA
  • Hybrid model using custom Machine SSL certificates, but using VMware Certificate Authority in its default configuration

Before we get into them, however, keep in mind that as I mentioned in my previous blog series regarding the architecture changes for vSphere 6.0, there are two basic platform service controller architectures that should be considered when designing your certificate infrastructure.

Be sure to note up front whether an external or embedded Platform Services Controller is to be used, as this is quite important: a separate Machine SSL endpoint certificate is required for each system.

This means that on an embedded system a single certificate is required as shown below:

JMcDonald 1

Or, for an external environment, two or more will be needed depending on the size of the infrastructure, as can be seen in the following figure.

JMcDonald 2

For further details on the different platform services controller architectures, and to become familiar with them before proceeding, see http://blogs.vmware.com/consulting/2015/03/vsphere-datacenter-design-vcenter-architecture-changes-vsphere-6-0-part-1.html.

Using VMware Certificate Authority as the Root CA in the Default Configuration

This is by far the most common configuration I have seen deployed. Of course this is also the default, which explains why this is the case. Personally, I fully support and recommend actually using this configuration in almost all circumstances. The beauty of it is that it takes very little configuration to be fully secured. Why change something if you do not need to, right? By default, after installation everything is already configured to use VMware Certificate Authority and has already had certificates granted, which are deployed to all solutions and ESXi hosts, and are then added to vCenter Server.

In this situation the only thing required to secure the environment is to download and install the root certificate (or chain) from VMware Certificate Authority.

JMcDonald 3

Note: When you download this file in vSphere 6.0, it is simply called ‘download.’ This is a known issue in some browsers. It is actually a ZIP file, which contains the root certificate chain.

Once downloaded and extracted, the certificate(s) are the file(s) ending in .0. To install, rename the .0 extension to .cer and double-click the file to import it into the Windows certificate store. Repeat the procedure for all certificates in the chain. The certificates should be installed into the local machine’s Trusted Root Certificate Authority or Intermediate Certificate Authority store, respectively. If using Firefox, import them into its own store to ensure the chain is trusted.
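If many certificate files need renaming, the extract-and-rename step above can be scripted. This is a small helper sketch; the paths and ZIP layout are illustrative, and importing the resulting .cer files into the Windows store is still done as described above.

```python
import os
import zipfile

def extract_and_rename(zip_path, dest_dir):
    """Extract the downloaded root-chain ZIP, then rename *.0 files to *.cer
    so they can be imported into the Windows certificate store."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    renamed = []
    for root, _dirs, files in os.walk(dest_dir):
        for name in files:
            if name.endswith(".0"):
                src = os.path.join(root, name)
                dst = src[:-2] + ".cer"  # cacert.0 -> cacert.cer
                os.rename(src, dst)
                renamed.append(dst)
    return renamed

# Example (hypothetical path): extract_and_rename("download.zip", "certs_out")
```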

The next time you go to the web page, it will show as trusted as in the following screenshot.

JMcDonald 4

This can potentially take some time if there are many clients that need the certificates imported; however, this is the easiest (and default) deployment model.

Using VMware Certificate Authority as a Subordinate CA

This mode is less commonly used, but it is the second most common deployment type I have seen. It takes a bit of work, but essentially it allows you to integrate the VMware Certificate Authority into an existing certificate infrastructure. The big benefit is that you issue certificates fully signed within the existing hierarchy, and in many cases no installation of the certificate chain is required. The downside is that you need a subordinate CA certificate to implement this configuration, and in some cases I have seen that this is simply not allowed by policy. This is where the hybrid configuration, discussed next, comes into play.

To configure this, use the command-line utility certificate-manager.

JMcDonald 5

Once launched, Option 2 is used to Replace VMCA Root Certificate with Custom Signing Certificate and to replace all certificates. The first part of the process is to generate the private key, and the certificate request that will be submitted to the Certificate Authority. To do this, select Option 1:
JMcDonald 6

Once generated, submit the request to the CA for issuance of the certificate, as well as collection of the root certificate chain. For more details, see KB article Configuring VMware vSphere 6.0 VMware Certificate Authority as a subordinate Certificate Authority (2112016).

When the new certificate has been collected, provide it, along with the key, to the manager utility. If the certificate-manager screen is still open, select Option 1 to continue; otherwise select Option 2, and then Option 2 again. You will be prompted for all the details of the certificate being issued. Once complete, services are stopped and restarted:

JMcDonald 7

After this the Machine Certificate (aka the Reverse Proxy certificate) can be regenerated from the VMware Certificate Authority for the vCenter Server(s) that are in the environment. This is done by selecting Option 3 from the menu:

JMcDonald 8

This will prompt for a Single Sign-On administrator password—as well as the Platform Services Controller IP—if the system is a distributed installation. It will then prompt you to enter the information for the certificate and restart the vCenter services. The server has its reverse proxy certificate replaced at this point.

The next step is to replace the solution user certificates by running Option 6 from certificate-manager:

JMcDonald 9

This completes the configuration of the custom certificate authority certificates for the vCenter components, but it is not quite done yet.

The final step is to replace the ESXi host certificates. This is done directly from each host’s configuration in the Web Client. The first step here is to set the details for the certificate in the advanced settings for the vCenter Server:

JMcDonald 10

When complete, navigate to an ESXi host and select Manage > Certificates. On this screen, click Renew. This will regenerate the certificate for this host.

JMcDonald 11

This is the most complex of the architectures shown, but it is also the most integrated with the environment. It provides the additional level of security required to satisfy regulatory compliance requirements.

Using Hybrid Configurations for the Reverse Proxy – but VMware Certificate Authority for All Other Certificates

Hybrid configurations are the middle ground in this discussion: a balance between security and complexity that, in many cases, also satisfies the security team. The biggest issues I have seen that require such a configuration are:

  • Granting or purchasing a Subordinate CA certificate is not possible due to policy or cost.
  • Issuing multiple certificates to the same server, one for each service, is not something that can be done due to policy or regulatory requirements.
  • Controls to approve or deny certificate requests are required.

In these cases, although it may not be possible to fully integrate the VMware Certificate Authority into the existing CA hierarchy, you can still add a level of security by leaving the VMware Certificate Authority in the default configuration and replacing only the Machine certificate. You can do this by using Option 1 from the Certificate Manager tool.

JMcDonald 12

When configured, the environment uses the corporate CA certificate for external, user-facing communication, because everything now goes through the reverse proxy, while the components behind it are still secured by certificates from the VMware Certificate Authority. The configuration should look like this:

JMcDonald 13

As you can see, the architectures are varied and also provide quite a bit of flexibility for the configuration.

Which Configuration Method Should You Use?

The only remaining question is: which method is best for you? As stated earlier, my personal preference is to “keep it simple” and use the default configuration. It is the simplest option—the only requirement is installing the root certificate on clients—rather than regenerating custom certificates and then modifying the configuration.

Obviously where policy or regulatory compliance is concerned there may be a need to integrate it. This, although more complex than doing nothing, is also something that is much easier than it was in prior versions.

Hopefully all of this information has been of use to you for consideration when designing or configuring your different vSphere environments. As can be seen, the complexity has been dramatically reduced, and only promises to get better. I can’t wait till I complete the next blog in this series—part 3—which will provide even more detail that will make all of this even simpler.


 

Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

App Volumes AD Objects Move Issue

JeffSmallBy Jeffrey Davidson

In this blog entry I will talk about App Volumes and things to check when you are having trouble logging in.

There are times when a user may not be able to log in to the App Volumes Manager server, receiving a message similar to “You must be in the Administrators group to login,” as shown below. There can be a few reasons for this issue.

App Volumes AD Objects Move Issue JDavidson 1

The first is that an administrators group must be defined in the App Volumes configuration, which is done during the initial setup of the App Volumes instance. Users must be members of this group in order to log in to the App Volumes Manager server.

Second, validate that the App Volumes Manager server can communicate with both the SQL instance configured during installation and Active Directory, and verify the credentials used to connect to each.

If you have validated that all of the above configurations are accurate and the services are running, there is one other thing you should investigate; this issue can occur if the user object has been moved in Active Directory. For example, a user was in the “Chicago” organizational unit (OU) and has been moved to the “Cleveland” OU. When an object is moved in this way, App Volumes can have trouble finding the user because its distinguished name (DN) value in Active Directory has changed. App Volumes stores the user DN value in the “Users” App Volumes SQL database table.

App Volumes AD Objects Move Issue JDavidson 2

To restore App Volumes functionality to the object, the App Volumes SQL database needs to be updated with the new Active Directory DN value. This is done by retrieving the correct DN value from Active Directory, then updating the database record for that user.

App Volumes AD Objects Move Issue JDavidson 3

Updating the SQL record can be done directly through SQL Server Management Studio.
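The fix can be sketched as a simple parameterized UPDATE. The real database is SQL Server (updated through SQL Server Management Studio); sqlite3 stands in here so the example is self-contained, and the column names (`username`, `distinguished_name`) are assumptions—check the actual schema of your App Volumes database before running anything against it.

```python
import sqlite3

# Stand-in for the App Volumes database, with a stale DN for a moved user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (username TEXT, distinguished_name TEXT)")
conn.execute(
    "INSERT INTO Users VALUES (?, ?)",
    ("jsmith", "CN=jsmith,OU=Chicago,DC=corp,DC=local"),
)

# The user was moved from the Chicago OU to the Cleveland OU in Active
# Directory; update the stored record with the new DN retrieved from AD.
new_dn = "CN=jsmith,OU=Cleveland,DC=corp,DC=local"
conn.execute(
    "UPDATE Users SET distinguished_name = ? WHERE username = ?",
    (new_dn, "jsmith"),
)

row = conn.execute(
    "SELECT distinguished_name FROM Users WHERE username = ?", ("jsmith",)
).fetchone()
print(row[0])  # CN=jsmith,OU=Cleveland,DC=corp,DC=local
```

The same pattern applies to the group case described below, against the “group_permissions” table instead of “Users.”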

If this issue occurs for all users, validate that the Active Directory group defined as the App Volumes administrators group has not been moved. In this circumstance, compare the DN of the group stored in the App Volumes SQL database against its location in Active Directory. The administrators group DN is stored in the “group_permissions” App Volumes SQL database table.

App Volumes AD Objects Move Issue JDavidson 4

If the group has been moved you will need to update the App Volumes SQL Database with the DN value of the new group.

App Volumes AD Objects Move Issue JDavidson 5

This record can also be updated directly through SQL Server Management Studio.


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.

 

EUC Design Series: Application Rationalization and Workspace Management

TJBy TJ Vatsa

Introduction

Over the last few years, End User Computing (EUC) and the associated workspace mobility space have emerged to be transformational enterprise initiatives. Today’s workforce expects anytime and anywhere access to their applications, be it enterprise applications or user-installed applications (UIA), and everything in between. These expectations create newer opportunities, as well as newer challenges for the existing processes that are followed by enterprise and application architects. So what are the different facets of these challenges that the architects need to be aware of while analyzing and defining an enterprise application strategy? Let’s dive right in.

The What

Application rationalization is the process of strategizing an available set of corporate applications along the key perspectives of business priority, packaging, delivery, security, management and consumption to achieve a defined business outcome. The tangible artifact(s) of an application rationalization process is a leaner collection of one or more application catalogs. An application catalog is a logical grouping of application taxonomies based on a user’s roles and responsibilities within an organization, as well as within the enterprise. For instance, a user belonging to the finance department will have access to a department-specific catalog housing financial applications, as well as to a corporate catalog housing all corporate-issued applications. A user from the IT department, by contrast, will not need access to the finance department’s key financial applications, but will have access to an IT-specific application catalog that may include applications such as infrastructure monitoring. With end-user mobility/computing pervading every aspect of workforce productivity within the enterprise, organizations intend to leverage their existing investments in various application delivery platforms, including those from VMware, Citrix, Microsoft and other vendors. The application rationalization process is an enabler of application governance, management and operations, minimizing application sprawl within the enterprise.
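The catalog model described above can be sketched as a simple role-based lookup: every user sees the corporate catalog, plus the catalog(s) for their department. The department and application names below are invented for illustration.

```python
# Hypothetical application catalogs, keyed by organizational grouping.
CATALOGS = {
    "corporate": ["email", "intranet", "vpn-client"],
    "finance": ["general-ledger", "payroll"],
    "it": ["infra-monitoring", "ticketing"],
}

def catalogs_for(department):
    """Every user gets the corporate catalog; departments add their own."""
    apps = list(CATALOGS["corporate"])
    apps += CATALOGS.get(department, [])
    return apps

print(catalogs_for("finance"))
# ['email', 'intranet', 'vpn-client', 'general-ledger', 'payroll']
```

The point of rationalization is to keep these catalogs lean: the smaller and better-scoped each grouping is, the easier governance and lifecycle management become.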

The Why

Traditionally, managing legacy applications has been a time-consuming and complex process from the perspective of application packaging, provisioning and monitoring. Delivery of such applications has been equally complex, if not more so. Add to that the constraints of application conflicts when it comes to supporting different devices and integrating with other applications—for instance, the security directive from the Chief Information Security Officer’s (CISO) office requiring all mission-critical applications to integrate with the authentication process of an Identity Management (IDM) platform.

So, first things first, we need to ask ourselves some of these key questions:

  • What are these applications, and what are the business priorities of these applications?
  • Do all these applications need to adhere to security directives and regulatory compliance directives such as HIPAA, PCI, etc., and if so, how soon?
  • Have the non-adherence risks been assessed, and what are the exceptions?
  • How do we package, provision, deliver, access, maintain, monitor and finally retire these applications?

What this means is that it is very important to make the available application catalog(s) lean in case they have become bulky over a given period of time due to inefficient Application Lifecycle Management (ALM) processes, mergers and acquisitions, emerging business priorities and other factors outside the control of enterprise, application and IT architects/leaders. Furthermore, the application portfolio(s) reflected in these collective catalogs need to be agile to support the ever-changing innovations in the areas of end-user mobility/computing, hybrid cloud, and the emerging Internet of Things-aware applications.

The How

A pragmatic approach to application rationalization relies on a strong foundation of people, processes and technology platforms. It is recommended to start by identifying some of the key application classifications along the lines of Mission Critical (MC), Business Critical (BC) and User Critical (UC) applications, and map these classifications to your user segmentation along the lines of key roles and responsibilities within and across the organizations. An existing organizational level RACI (Responsible, Accountable, Consulted, and Informed) matrix may come in very handy as part of this process. The information in the table below reflects a sample of how this could be accomplished.

The How

While the people and the processes parts may take multiple iterations, once these applications have been rationalized and the key stakeholders have been identified, we need to define an enterprise Application Management Architecture (AMA) to mature the EUC initiatives within an enterprise. The schematic below lists key components that help develop a mature Application Management Architecture.

App Management 1

What this means is that the AMA needs to address the following capabilities as illustrated in the schematic above:

  • Application packaging and isolation. For instance, whether the applications are natively installed in the base image or whether they are virtualized.
  • A unified application provisioning launch-pad for virtual, Web, Citrix XenApp, RDSH and SaaS applications.
  • Real-time application delivery for just-in-time desktops that would abstract the desktop guest operating system (GOS) from the end-user applications.
  • Unified authentication and application entitlement policy platform that supports Single Sign-on (SSO) and acts as a policy enforcement point (PEP) and a policy decision point (PDP).
  • Application maintenance capability that enables flexible patch management.
  • Application monitoring functionality that provides in-guest metrics for application performance monitoring.
  • Most importantly, supporting EUC mobility by interoperating with virtual, hybrid cloud and mobile platforms.

Conclusion

Now let’s tie it all together. VMware’s End User Computing (EUC) Workspace Environment Management (WEM) solution combines VMware’s EUC product portfolio with VMware’s experienced Professional Services Organization (PSO). The platform accelerates application rationalization initiatives by additionally providing application isolation, real-time application delivery, and monitoring for Citrix and VMware environments. It facilitates comprehensive governance of end-user management with dynamic policy configuration, so you can deliver a personalized environment to virtual, physical and cloud-hosted environments across devices. It is a fast-track approach to success for the application rationalization initiatives in your enterprise—one in which not only the technology, but also the people and processes, are given high priority. For additional information please visit VMware.

 

Find out more about Application Rationalization from the perspectives of an Enterprise EUC strategy and BCDR (Business Continuity and Disaster Recovery) by attending the following sessions at VMworld 2015, San Francisco.


TJ Vatsa is a Principal Architect and member of CTO Ambassadors at VMware representing the Professional Services organization. He has worked at VMware for the past 5+ years with more than 20 years of experience in the IT industry. During this time he has focused on enterprise architecture and applied his extensive experience in professional services and R&D to Cloud Computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, enterprise data services and technical project management.

TJ holds a Bachelor of Engineering (BE) degree in Electronics and Communications from Delhi University, India, and has attained industry and professional certifications in enterprise architecture and technology platforms. He has also been a speaker and a panelist at industry conferences such as VMworld, VMware’s PEX (Partner Exchange), Briforum and BEAworld. He is an avid blogger who likes to write on real-life application of technology that drives successful business outcomes.