
Use Horizon View to Access Virtual Desktops Remotely – Without a VPN

 

By Eric Monjoin and Xavier Montaron

VMware Horizon View enables you to access a virtual desktop from anywhere, anytime. You can work remotely from your office, from a cybercafé, or from anywhere else with a network connection that can reach your Horizon View infrastructure. It’s an ideal solution – but external connections can be risky.

So, how do you protect and secure your data? How do you authorize only some users—or groups of users—to connect from an external network without establishing a VPN connection?

You can achieve this by relying on an external solution like F5 Networks’ BIG-IP Access Policy Manager (APM), which can perform pre-authentication checks on endpoints based on criteria like user rights, desktop compliance, up-to-date antivirus, and more. Or you can simply use the built-in capabilities of Horizon View, which is perfect if you are a small or medium company with a limited budget.

There are two ways to achieve this with Horizon View:

  •  Pool tagging
  •  Two-factor authentication

Pool Tagging

Pool tagging consists of setting one or more tags on each View Connection Server (see Figure 1) and using those tags to restrict desktop pools to specific brokers (see Figure 2).


Figure 1. View Connection Server tagging

In the following example, the tag “EXTERNAL” has been created for brokers that are paired with a View Security Server and dedicated to external connections, while the tag “INTERNAL” has been created for brokers dedicated to internal connections only. Only desktop pools assigned the “EXTERNAL” tag will be available, and appear in the desktop pool list, when connecting through a broker used for external connections.


Figure 2. Desktop pools tagging

As shown in Table 1, if you fail to restrict a pool with a tag, that pool will be available on all View Connection Servers. So, as soon as you start using tags, you have to use tags for all of your desktop pools.

Connection through broker with tag(s)   Pool restricted to tag(s)   Pool appears in list
EXTERNAL                                EXTERNAL                    YES
EXTERNAL                                INTERNAL                    NO
INTERNAL                                EXTERNAL                    NO
INTERNAL                                INTERNAL                    YES
INTERNAL or EXTERNAL                    INTERNAL and EXTERNAL       YES
INTERNAL or EXTERNAL                    (none)                      YES

Table 1. Tag relationships between View Connection Servers and desktop pools

Keep in mind that when using tags, it is implied that the administrator has created specific pools for external connections, and specific pools for internal connections.
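The tag-matching rules in Table 1 boil down to a simple predicate: a pool with no restricted tags is visible everywhere; otherwise at least one of its tags must match a tag on the broker handling the connection. Here is a minimal Python sketch of that logic (the function and data shapes are illustrative, not part of any View API):

    def pool_visible(broker_tags, pool_tags):
        """Mirror Table 1: should this pool appear in the desktop pool list
        for a session brokered by a server carrying broker_tags?"""
        if not pool_tags:                    # "(none)" row: visible on all brokers
            return True
        return bool(set(broker_tags) & set(pool_tags))

    # Quick checks against Table 1
    assert pool_visible({"EXTERNAL"}, {"EXTERNAL"})                # YES
    assert not pool_visible({"EXTERNAL"}, {"INTERNAL"})            # NO
    assert pool_visible({"INTERNAL"}, {"INTERNAL", "EXTERNAL"})    # YES
    assert pool_visible({"EXTERNAL"}, set())                       # (none): YES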

 

Two-Factor Authentication

The other method when using Horizon View is two-factor authentication. This requires two separate methods of authentication to increase security.

The mechanism is simple: you first authenticate yourself using a one-time password (OTP) passcode, as seen in Figure 3. These passcodes are typically regenerated every 30 or 60 seconds, depending on the solution provider. If the provided credentials are authorized, a second login screen appears (see Figure 4) where you enter the Active Directory login and password used for single sign-on to the hosted virtual desktop.


Figure 3. OTP login screen


Figure 4. Domain login screen

 

The advantages with this solution are:

  • Enhanced security – You need both the OTP passcode (generated by the user’s token) and the user’s Active Directory login and password.
  • Simplicity – There is no need to create two separate desktop pools, one for external connections and another for internal connections.
  • You can be selective – Distribute tokens only to employees who require external access.

The most widely implemented solution is RSA SecurID from EMC (see the sources below), but you can also use any solution that is RADIUS-compliant.

For more detailed information you can read the white paper “How to Set Up 2-Factor Authentication in Horizon View with Google Authenticator.” It describes how to set up FreeRADIUS and Google Authenticator to secure external connections and authorize only specific users or groups of users to connect to Horizon View. This solution was implemented successfully, and at no cost, at the City Hall in Drancy, France, by its chief information officer, Xavier Montaron.
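For readers curious what an OTP generator like Google Authenticator actually computes, the sketch below implements a standard RFC 6238 time-based one-time password (Google Authenticator defaults to 30-second steps and 6 digits). This is purely illustrative; it is not the FreeRADIUS configuration from the white paper:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        """Compute the current RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period             # current time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # sample base32 secret; prints the current 6-digit code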

 

Sources:

F5 BIG-IP Access Policy Manager 

http://www.f5.com/pdf/white-papers/f5-vmware-view-wp.pdf

https://support.f5.com/content/kb/en-us/products/big-ip_apm/manuals/product/apm-vmware-integration-implementations-11-4-0/_jcr_content/pdfAttach/download/file.res/BIG-IP_Access_Policy_Manager__VMware_Horizon_View_Integration_Implementations.pdf

RSA SecurID

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003455

https://gallery.emc.com/servlet/JiveServlet/download/1971-24-4990/VMware_Horizon_View_52_AM8.0.pdf

 

 


Eric Monjoin joined VMware France in 2009 as a PSO Senior Consultant after spending 15 years at IBM as a Certified IT Specialist. Passionate about new challenges and technology, Eric has been a key leader in the VMware EUC practice in France. Recently, Eric moved to the VMware Professional Services Engineering organization as a Technical Solutions Architect. Eric is certified VCP6-DT, VCAP-DTA and VCAP-DTD, and was awarded vExpert for the fourth consecutive year.


Xavier Montaron holds a Master’s degree in Computer Science from the EPITECH school and has a strong developer background. He joined the Town Hall of Drancy in December 2007 in the CIO organization and has been its CIO since 2010. The Town Hall of Drancy has been a long-time IT innovator and user of VMware technology, both for infrastructure servers and for VDI, where all desktops have been fully virtualized since 2011 with Horizon View. The Town Hall of Drancy recently decided to externalize all server and VDI infrastructure, which is now hosted by OVH, a global leader in internet hosting based in France.

VMware App Volumes™ with F5’s Local Traffic Manager

By Dale Carter, Senior Solutions Architect, End User Computing & Justin Venezia, Senior Solutions Architect, F5 Networks

App Volumes™—a result of VMware’s recent acquisition of Cloud Volumes—provides an alternative, just-in-time method for integrating and delivering applications to virtualized desktop- and Remote Desktop Services (RDS)-based computing environments. With this real-time application delivery system, applications are delivered by attaching virtual disks (VMDKs) to the virtual machine (VM) without modifying the VM – or the applications themselves. Applications can be scaled out with superior performance, at lower costs, and without compromising the end-user experience.

For this blog post, I have collaborated with Justin Venezia – one of my good friends and a former colleague now working at F5 Networks. Justin and I will discuss ways to build resiliency and scalability within the App Volumes architecture using F5’s Local Traffic Manager (LTM).

App Volumes Nitty-Gritty

Let’s start out with the basics. Harry Labana’s blog post gives a great overview of how App Volumes works and what it does. The following picture depicts a common App Volumes conceptual architecture:

[Diagram: common App Volumes conceptual architecture]

 

Basically, App Volumes does a “real time” attachment of applications (read-only and writable) to virtual desktops and RDS hosts using VMDKs. When the App Volumes Agent checks in with the manager, the App Volumes Manager (the brains of App Volumes) will attach the necessary VMDKs to the virtual machines through a connection with a paired vCenter. The App Volumes Agent manages the redirection of file system calls to AppStacks (read-only VMDK of applications) or Writeable Volumes (a user-specific writeable VMDK). Through the Web-based App Volumes Manager console, IT administrators can dynamically provision, manage, or revoke applications access. Applications can even be dynamically delivered while users are logged into the RDS session or virtual desktop.

The App Volumes Manager is a critical component for administration and Agent communications. By using F5’s LTM capabilities, we can intelligently monitor the health of each App Volumes Manager server, balance and optimize communications for the App Volumes Agents, and build in a level of resiliency for maximum system uptime.

Who is Talking with What?

As with any application, there’s always some back-and-forth chatter on the network. Besides administrator-initiated actions against the App Volumes Manager using a web browser, there are four other events that will generate traffic through the F5 BIG-IP module; these events are very short, quick communications. There aren’t any persistent or long-term connections kept between the App Volumes Agent and Manager.

When an IT administrator assigns an application to a desktop/user that is already powered on and logged in, the App Volumes Manager talks directly with vCenter and attaches the VMDK. The Agent then handles the rest of the integration of the VMDK into the virtual machine; the Agent never communicates with the App Volumes Manager during this process.

Configuring Load Balancing with App Volume Managers

Setting up the load balancing for App Volumes Manager servers is pretty straightforward. Before we walk through the load-balancing configuration, we’ll assume your F5 is already set up on your internal network and has the proper licensing for LTM.

Also, it’s important to ensure the App Volume agents will be able to communicate with the BIG-IP’s virtual IP address/FQDN assigned to App Volumes Manager; take the time to check routing and access to/from the agents and BIG-IP.

Since the App Volumes Manager works with both HTTP and HTTPS, we’ll show you how to load balance App Volumes using SSL termination. We’ll be doing SSL bridging: SSL from the client is terminated and decrypted on the F5, then re-encrypted and sent on to the App Volumes Manager server. This method allows the F5 to use advanced features—such as iRules and OneConnect—while maintaining a secure, end-to-end connection.

Click here to get a step-by-step guide on integrating App Volumes Manager servers with F5’s LTM. Here are some prerequisites you’ll need to consider before you start:

  • Determine what the FQDN will be and what virtual IP address will be used.
  • Add the FQDN and virtual IP into your company’s DNS.
  • Create and/or import the certificate that will be used; this blog post does not cover creating, importing, and chaining certificates.

The certificate should contain the FQDN that will be used for load balancing. We can actually leave the default certificates on the App Volumes Manager servers; BIG-IP will handle all the SSL translations, even with the self-signed certificates created on the App Volumes servers. A standard 2,048-bit web server certificate (with private key) will work well with the BIG-IP – just make sure you import and chain the Root and Intermediate Certificates with the Web Server Certificate.
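Before repointing agents at the new FQDN, it is worth confirming that the virtual server presents a certificate whose chain validates and whose subject matches the FQDN. A quick check you can run from any client with Python installed (the hostname is a placeholder; a broken chain or name mismatch raises an SSL error):

    import socket, ssl

    VIP_FQDN = "appvolmgr.example.com"   # placeholder: your load-balanced FQDN

    ctx = ssl.create_default_context()   # validates the chain against system CAs
    with socket.create_connection((VIP_FQDN, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=VIP_FQDN) as tls:
            cert = tls.getpeercert()     # raises SSLCertVerificationError on a bad chain
            print("Subject:", dict(item[0] for item in cert["subject"]))
            print("Issuer :", dict(item[0] for item in cert["issuer"]))
            print("Expires:", cert["notAfter"])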

Once you’re done running through the instructions, you’ll have some load-balanced App Volumes Manager servers!

Again, BIG thanks to Justin Venezia from the F5 team – you can read more about Justin Venezia and his work here.


Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

Justin Venezia is a Senior Solutions Architect for F5 Networks

Upgrading VMware Horizon View with Zero Downtime

By Dale Carter, Senior Solutions Architect, End-User Computing

Over the last few years working with VMware Horizon View and doing many upgrades, the two questions I heard most often from customers planning an upgrade were: “Why do we have to have so much downtime?” and “With seven connection brokers, why do we have to take them all down at once?”

These questions came up when I spoke with Engineering about the upgrade process and how to make it smoother for customers.

I was told that, in fact, this was not the case: you do not have to take all connection brokers down during the upgrade process; you can upgrade one connection broker at a time while the other servers keep happily running.

This is reflected in the View 6 upgrade documentation. You can find the document here.

In this blog I will show you how to upgrade a cluster of connection servers with zero downtime. For this post I will be upgrading my View 5.3 servers to View 6.0.1.

Here are the steps needed to upgrade a View pod with zero downtime:

  1. Follow all prerequisites in the upgrade document referenced above, including completing all backups and snapshots.
  2. In the load balancer managing the View servers, disable the server that is going to be upgraded from the load balanced pool.
  3. Log in to the admin console.
  4. Disable the connection server you are going to upgrade. From the View Configuration menu select Server, then select Connection Servers and highlight the correct server. Finally, click Disable.
  5. Click OK. The View server will now be disabled.
  6. Log in to the View connection server and launch the executable. For this example I will launch VMware-viewconnectionserver-x86_64-6.0.1-2088845.exe. NOTE: We did not disable any services at this point.
  7. Click Next.
  8. Accept the license agreement, and click Next.
  9. Click Install.
  10. Once the process is done, click Finish.
  11. Back in the Admin Console, enable the connection server by clicking Enable. Notice that the new version is now shown as installed.
  12. In the load balancer managing the View servers, enable the server that has been upgraded in the load balanced pool.
  13. Follow steps 2–12 to upgrade all of your View servers. (The sketch after these steps shows the same rolling pattern in script form.)
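Because the same per-server actions repeat for every broker, the rolling pattern is easy to script. The sketch below is only an outline in Python form; every helper is a hypothetical stub standing in for your load balancer’s API and the admin-console steps above, so adapt it to your environment:

    SERVERS = ["view-cs-01", "view-cs-02", "view-cs-03"]   # hypothetical broker names

    # Stubs: replace each with your LB API call or View admin action.
    def lb_disable(s):    print(f"[LB]   remove {s} from pool")
    def cs_disable(s):    print(f"[View] disable broker {s}")
    def run_installer(s): print(f"[View] run View 6.0.1 installer on {s}")
    def cs_enable(s):     print(f"[View] enable broker {s}")
    def lb_enable(s):     print(f"[LB]   return {s} to pool")

    def upgrade_pod(servers):
        """Upgrade one broker at a time so the pod stays online throughout."""
        for s in servers:
            lb_disable(s)
            cs_disable(s)
            run_installer(s)
            cs_enable(s)
            lb_enable(s)

    upgrade_pod(SERVERS)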

Security Servers

If one of the connection servers is paired with a security server, then there are a couple of additional steps to cover.

The following steps will need to be done to upgrade a connection server that is paired with a security server.

  1. In the load balancer managing the View Security servers, disable the server that is going to be upgraded from the load balanced pool.
  2. Follow all prerequisites in the upgrade document referenced above, including disabling IPsec rules for the security server and taking snapshots.
  3. Prepare the security server to be upgraded. From the View Configuration menu select Server, then select Security Servers. Highlight the correct server, click More Commands, and then click Prepare for Upgrade or Reinstall.
  4. Click OK.
  5. Upgrade the paired Connection Server as outlined in steps 2–12.
  6. Log in to the View Security server and launch the executable. For this example I will launch VMware-viewconnectionserver-x86_64-6.0.1-2088845.exe.
  7. Click Next.
  8. Accept the License agreement and click Next.
  9. Confirm the paired Connection server and click Next.
  10. Enter the pairing password and click Next.
  11. Confirm the configuration and click Next.
  12. Click Install.
  13. In the load balancer managing the View Security servers, enable the server that has been upgraded in the load balanced pool.

Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

Understanding View Disposable Disks

By Travis Wood, VCDX-97

When VMware introduced Linked Clones in View 4.5, it included a new type of disk called the Disposable Disk. The purpose of this disk was to redirect certain volatile files away from the OS Disk to help reduce linked-clone growth. I have read a lot of designs that utilize disposable disks, but it has become clear that there is a lot of confusion and misunderstanding about what they do and exactly how they function. This confusion is highlighted in a View whitepaper called View Storage Considerations, which describes disposable disks as:

Utilizing the disposable disk allows you to redirect transient paging and temporary file operations to a VMDK hosted on an alternate datastore. When the virtual machine is powered off, these disposable disks are deleted.

The three elements from this paragraph I want to demystify are:

  1. What is redirected to the disposable disk?
  2. Where are disposable disks hosted?
  3. When are disposable disks deleted/refreshed?

What is redirected?

By default there are three elements that are redirected to the disposable disk. The first is the Windows swap file: View Composer redirects the swap file from C: to the disposable disk. It is recommended to set the swap file to a specific size to make capacity planning easier.

 


 

The other elements that are redirected are the system environment variables TMP and TEMP. By default, the user TEMP and TMP environment variables are NOT redirected. However, it is highly recommended to remove the user TEMP and TMP variables; if this is done, Windows uses the system variables instead, and the user’s temporary files are then redirected to the disposable disk as well.
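A quick way to confirm, from inside a Windows guest, where temporary files will actually land after these changes is to compare the resolved temp path with the disposable disk’s drive letter. A minimal sketch (the drive letter is whatever you assigned in the pool settings; it uses the winreg module, so it only runs on Windows):

    import tempfile, winreg

    DISPOSABLE_DRIVE = "D:"          # placeholder: your disposable disk drive letter

    print("Temp path resolves to:", tempfile.gettempdir())
    if not tempfile.gettempdir().upper().startswith(DISPOSABLE_DRIVE):
        print("Warning: temporary files are NOT on the disposable disk")

    # Per-user TMP/TEMP overrides live under HKCU\Environment
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment") as key:
        for name in ("TMP", "TEMP"):
            try:
                value, _ = winreg.QueryValueEx(key, name)
                print(f"User {name} is still set to {value}; consider removing it")
            except FileNotFoundError:
                print(f"User {name} not set; the system variable will be used")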


 

 

Where is the disposable disk stored?

There is a common misconception that, like the User Data Disk, the Disposable Disk can be redirected to a different storage tier. This is not the case: the Disposable Disk is always stored with the OS Disk. In later versions of View you can choose the Disposable Disk’s drive letter within the GUI to avoid conflicts with mapped drives, but this setting and the size are the only customizations you can make to the disposable disk.

When is the disposable disk refreshed?

This is the question that tends to cause the most confusion. Many people I have spoken to have said that it is refreshed when the user logs off, while others say it happens on reboot. The Disposable Disk is actually only refreshed when View powers off the VM. User-initiated shutdowns and reboots, as well as power actions within vCenter, do not affect the disposable disk. The following actions will cause the disposable disk to be refreshed:

  • Rebalance
  • Refresh
  • Recompose
  • VM powered off due to the Pool Power Policy set to “Always Powered Off”

This is quite important to understand, because if the Pool Power Policy is set to any of the other settings (Powered On, Do Nothing, or Suspend), your disposable disks are never refreshed automatically.

What does all this mean?

Understanding Disposable Disks and their functionality will enable you to design your environment appropriately. The View Storage Reclamation feature introduced in View 5.2 uses an SE Sparse disk for the OS Disk, which allows View to shrink OS disks when files are deleted from within the OS. However, only the OS Disk is created as an SE Sparse disk; User Data Disks and Disposable Disks are created as standard VMDKs. The key difference between this feature and Disposable Disks is that it relies on files being deleted from within the guest operating system, whereas the Disposable Disk is deleted, along with all the files it contains, when View powers off the VM. It is also important to note that SE Sparse disks are currently not supported on VSAN.

If you choose to use Disposable Disks in your design, then depending on your power cycle you may want to add an operational task for administrators to periodically change the pool’s power policy within a maintenance window to refresh the Disposable Disks. This is particularly important for the use case of persistent desktops, which have long refresh/recompose cycles.


Travis Wood is a VMware Senior Solutions Architect

MomentumSI Brings New DevOps and Cloud Professional Services to VMware

By now, it is common knowledge that VMware has evolved beyond server virtualization and is a leading Private Cloud, Hybrid Cloud, and End-User Computing provider. To enable the transformational business outcomes that these technologies support, we have continued to invest in building the best Professional Services team in the industry.

I am excited to share that in Q4 2014, VMware acquired MomentumSI, a leading IT consultancy that expands our capabilities to help our customers transform their IT processes and infrastructures into strategic advantage.

MomentumSI is a pure-play Professional Services business that served many of the same Fortune 500 companies that VMware does today. The company focused on four key solution areas:

  • Building DevOps capabilities for customers, leveraging technologies such as Docker, Puppet, Chef, Jenkins, Salt and Ansible
  • Architecting and implementing OpenStack Private Clouds
  • Enabling Hybrid Cloud solutions, with an emphasis on AWS and vCloud Air
  • Modernizing applications for cloud environments

The MomentumSI team has joined the Americas Professional Services Organization (PSO).  Together, the combined practice will assist our clients in achieving business results through IT transformation.

So with that, we welcome the MomentumSI team to the VMware family and look forward to expanding the value that we can deliver to our customers.

For more information on the services MomentumSI is bringing to VMware, please visit http://page.momentumsi.com/vmware.

Bret

Begin Your Journey to vRealize Operations Manager

By Brent Douglas

In early December, VMware launched an exciting new array of updates to its products. For some products, this update was a refinement of already widely used functionality and capabilities. For other products, the December release marked a new direction and new path forward. One such product is vRealize Operations Manager.

With VMware’s acquisition of Integrien’s patented real-time performance analytics solution in August 2010, VMware added a powerful tool to its arsenal of virtualization management solutions. This tool, vCenter Operations Manager, enabled customers to begin managing beyond “what my environment is doing now” and into “what my environment will be doing in 30 minutes—and beyond?” In essence, with vCenter Operations Manager, customers gained a tool that could predict―and ultimately prevent―the phone from ringing.

Since August 2010, vCenter Operations Manager received bug fixes, regular updates, and new features and capabilities. Even with those, the VMware product designers and engineers knew they could produce a new version of the product that captured and extended the capabilities of vCenter Operations Manager. On December 9, VMware released that tool—vRealize Operations Manager.

In many respects, vRealize Operations Manager is a new product from the ground up. Due to the differences between vCenter Operations Manager v5.x and vRealize Operations Manager v6.x, current users of vCenter Operations Manager cannot simply apply a v6.x update to existing environments. For customers with little historical data or default policies, the best course forward may be to simply install and begin using vRealize Operations Manager. For other customers with deep historical data and advanced configurations/policies, the best path forward is likely a migration of existing data and configuration information from their vCenter Operations Manager v5.x instance.

A full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide. This guide also outlines many common vCenter Operations Manager scenarios and suggests migration paths to vRealize Operations Manager.

Important note: In order to migrate data and/or configuration information from an existing vCenter Operations Manager instance, the instance must be running at least v5.8.1, and preferably v5.8.3 or higher.

Question 1: Should any portion of my existing vCenter Operations Manager instance(s) be migrated?

VMware believes you are a candidate for a full migration (data and configuration information) if you can answer “yes” to any one of the following:

  • Have you operationalized capacity planning in vCenter Operations Manager 5.8.x?
    • Actively reclaiming waste
    • Reallocating resources
  • Have you operationalized vCenter Operations Manager to be performance- and health monitoring-based?
  • Do you act upon the performance alerts that are generated by vCenter Operations Manager?
  • Is any aspect of data in vCenter Operations Manager feeding another production system?
    • Raw metrics, alerts, reports, emails, etc.
  • Do you have a company policy to retain monitoring data?
    • Does your current vCenter Operations Manager instance fall into this category (e.g., it’s running in TEST)?

VMware believes you are a candidate for a configuration-only migration if you answer “yes” to any one of the following:

  • Are you happy with your current configuration?
    • Dashboards
    • Policies
    • Users
    • Super Metrics

— AND —

  • You do not need to save the data you have collected
    • Running in a test environment or proof-of-concept you have refined and find useful
    • Not really using the data yet

If you answered “no” to these questions, you should install and try vRealize Operations Manager today. You are ready to go with a fresh install without migrating any existing data or configuration information.
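Put together, the two questionnaires reduce to a small decision rule, sketched below (the inputs simply summarize your answers; this is an aid to reading the guidance above, not an official sizing tool):

    def migration_path(any_full_signal, config_worth_keeping, must_keep_data):
        """any_full_signal: a 'yes' anywhere in the first questionnaire.
        config_worth_keeping: dashboards/policies/users/super metrics to retain.
        must_keep_data: collected data must be preserved."""
        if any_full_signal or must_keep_data:
            return "full migration (data + configuration)"
        if config_worth_keeping:
            return "configuration-only migration"
        return "fresh install of vRealize Operations Manager"

    print(migration_path(False, True, False))   # -> configuration-only migration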

Question 2: If some portion of an existing vCenter Operations Manager instance is to be migrated, who should perform the migration?

vRealize Operations Manager is capable of migrating existing data and configuration information from an existing vCenter Operations Manager instance. However, complicating factors may require an in-depth look by a VMware services professional to ensure a successful migration. The following table outlines some of the complicating factors and suggests paths forward.

[Table: complicating factors and suggested migration paths]

 

That’s it! With a bit of upfront planning you can be well on your journey to vRealize Operations Manager! The information above addresses the “big hitters” for planning a migration to vRealize Operations Manager from vCenter Operations Manager. As mentioned, a full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide.

On a personal note, I am excited about vRealize Operations Manager. Although vCenter Operations Manager served VMware and its customers well for many years, it is time for something new and exciting. I encourage you to try vRealize Operations Manager today. This post represents information produced in collaboration with David Moore, VMware Professional Services, and Dave Overbeek, VMware Technical Marketing team. I thank them for their contributions and continued focus on VMware and its customers.


Brent Douglas is a VMware Cloud Technical Solutions Architect

DevOps and Performance Management


By Michael Francis

Continuing on from Ahmed’s recent blog on DevOps, I thought I would share an experience I had with a customer regarding performance management for development teams.

Background

I was working with an organization that is essentially an independent software vendor (ISV) in a specific vertical; their business is writing software for the gambling sector and, in some cases, hosting that software to deliver services to their partners. It is a very large revenue stream for them, and their development expertise and software functionality are their differentiation.

Due to historical stability issues and a lack of trust between the application development teams and the infrastructure team, the organization had brought in a new VP of Infrastructure and an Infrastructure Chief Architect a number of years earlier. They focused on changing the process and culture, and on aligning the people. They took our technology and implemented an architecture that aligned with our best practices, with the primary aim of delivering a stable, predictable platform.

This transformation of people, process and technology provided a stable infrastructure platform that soon improved the trust and credibility of the infrastructure team with the application development teams for their test and development requirements.

Challenges

The applications team in this organization, as you would expect, carries significant influence. Even though the applications team had come to trust virtual infrastructure for test and development, they still had reservations about a private cloud model for production. Their applications had significant demands on infrastructure and needed guaranteed transactions-per-second rates committed across multiple databases; any latency could cause significant processing issues and, therefore, loss of revenue. Visibility across the stack was a concern.

The applications team responsible for this critical in-house developed application had designed it to instrument its own performance by writing out flat files on each server, with application-specific information about transaction commit times and other application-specific performance data.

Beyond complete stack visibility, the applications team was challenged with how to monitor this custom distributed application’s performance data from a central point. The applications team also desired some means of understanding normal performance data levels, as well as a way to gain insight into the stack to see where any abnormality originated.

Due to the trust that had developed, the applications team engaged the infrastructure team to determine whether it had any capability to support their performance monitoring needs.

Solution

The infrastructure team was just beginning to review their needs for performance and capacity management tools for their Private Cloud. The team had implemented a proof-of-concept of vCenter Operations Manager and found its visualizations useful; so they asked us to work with the applications team to determine whether we could digest this custom performance information.

We started by educating them on the concept of a dynamic learning monitoring system. It had to allow hard thresholds to be set but also, more importantly, determine the spectrum of normal behavior based upon data-pattern-prediction algorithms for an application, both as a whole and for each of its individual components.

We discussed the benefits of a data analytics system that could take a stream of data and, irrespective of the data source, create a monitored object from it. The data analytics system had to be able to assign the data elements in the stream to metrics, start determining normality, provide a comparison to any hard thresholds, and provide the visualization.

The applications team was keen to investigate and so our proof-of-concept expanded to include the custom performance data from this in-house developed application.

The Outcome

The screenshot below shows VMware vCenter Operations Manager. It shows the Resource Type screen, which allows us to define a custom Resource Type to represent the application-specific metrics and the application itself.

[Screenshot: defining a custom Resource Type in vCenter Operations Manager]

To get the data into vCenter Operations Manager we simply wrote a script that opened the flat file on each of the servers participating in the application, read the file, and then posted the information into vCenter Operations Manager using its HTTP Post adapter. This adapter provides the ability to post data from any endpoint that needs to be monitored, which makes vCenter Operations Manager a very flexible tool.
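Judging by the vbs_ prefix in the resource kind below, our script was VBScript, but the pattern is easy to sketch in Python. Note that the servlet path and the line-oriented payload here are assumptions for illustration; check the HTTP Post adapter documentation for the exact URL and format your version expects:

    import time, urllib.request

    VCOPS_URL = "https://vcops.example.com/HttpPostAdapter/OpenAPIServlet"  # assumed endpoint
    RESOURCE  = "app-server-01|vbs_vcops_httpost"                          # illustrative name|kind

    def post_metrics(metrics):
        """Send {metric_name: value} pairs as one HTTP POST."""
        now_ms = int(time.time() * 1000)
        lines = [RESOURCE] + [f"{name},{now_ms},{value}" for name, value in metrics.items()]
        req = urllib.request.Request(VCOPS_URL, data="\n".join(lines).encode(), method="POST")
        urllib.request.urlopen(req)   # lab setups may need an SSL context for self-signed certs

    # Parse the application's flat file (hypothetical path/format) and forward the counters
    with open(r"C:\perf\app_metrics.txt") as f:
        metrics = dict(line.strip().split("=", 1) for line in f if "=" in line)
    post_metrics(metrics)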

In this instance we posted into vCenter Operations Manager a combination of application-specific counters and Windows Management Instrumentation (WMI) counters from the Windows operating system platform the apps run on. This is shown in the following screenshot.

[Screenshot: application-specific and WMI counters posted into vCenter Operations Manager]

You can see the Resource Kind is something I called vbs_vcops_httpost, which is not a ‘standard’ monitored object in vCenter Operations Manager; the product has created this based on the data stream I was pumping into it. I just needed to tell vCenter Operations Manager what metrics it should monitor from the data stream – which you can see in the following screenshot.

[Screenshot: metrics monitored for the vbs_vcops_httpost resource kind]

For each attribute (metric) we can configure whether hard thresholds are used and whether vCenter Operations Manager should use that metric as an indicator of normality. We refer to the normality as dynamic thresholds.

Once we have identified which metrics we want to track, we can create spectrums of normality for them and have them affect the health of the application, which allows us to create visualizations. The screenshot below shows an example of a simple visualization: it shows the applications team a round-trip time metric plotted over time, alongside a standard Windows WMI performance counter for CPU.

[Screenshot: round-trip time plotted over time alongside a WMI CPU counter]

By introducing the capability to monitor custom in-house developed applications using a combination of application-specific custom metrics and standard guest operating system and platform metrics, the DevOps team now has visibility into the health of the whole stack. This enables them to see the impact of code changes against different layers of the stack, so they can compare the before and after from the perspective of the spectrum of normality for varying key metrics.

From a cultural perspective, this capability brought the application development team and the infrastructure team onto the same page; both teams gain an appreciation of any performance issues through a common view.

In my team we have developed services that enable our customers to adopt and mature a performance and capacity management capability for the hybrid cloud, which, in my view, is one of the most challenging considerations for hybrid cloud adoption.

 


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

Automating Security Policy Enforcement with NSX Service Composer

By Romain Decker

Over the past decade, IT organizations have gained significant benefits as a direct result of compute virtualization, permitting a reduction in physical complexity and an increase in operational efficiency. It also allowed for dynamic re-purposing of underlying resources to quickly and optimally meet the needs of an increasingly dynamic business.

In dynamic cloud data centers, application workloads are provisioned, moved and decommissioned on demand. In legacy network operating models, network provisioning is slow and workload mobility is limited. While compute virtualization has become the new norm, network and security models remained unchanged in data centers.

NSX is VMware’s solution to virtualize network and security for your software-defined data center. NSX network virtualization decouples the network from hardware and places it into a software abstraction layer, thus delivering for networking what VMware has already delivered for compute and storage.

Inside NSX, the Service Composer is a built-in tool that defines a new model for consuming network and security services; it allows you to provision and assign firewall policies and security services to applications in real time in a virtual infrastructure. Security policies are assigned to groups of virtual machines, and the policy is automatically applied to new virtual machines as they are added to the group.


From a practical point of view, NSX Service Composer is a configuration interface that gives administrators a consistent and centralized way to provision, apply and automate network security services like anti-virus/malware protection, IPS, DLP, firewall rules, etc. Those services can be available natively in NSX or enhanced by third-party solutions.

With NSX Service Composer, security services can be consumed more efficiently in the software-defined data center. Security can be easily organized by dissociating the assets you want to protect from the policies that define how you want to protect them.


Security Groups

A security group is a powerful construct that allows static or dynamic grouping based on inclusion and exclusion of objects such as virtual machines, vNICs, vSphere clusters, logical switches, and so on.

If a security group is static, the protected assets are a limited set of specific objects, whereas dynamic membership of a security group can be defined by one or multiple criteria, like vCenter containers (data centers, port groups and clusters), security tags, Active Directory groups, regular expressions on virtual machine names, and so on. When all criteria are met, virtual machines are immediately moved to the security group automatically.

In the example below, any virtual machine with a name containing “web”―AND running in “Capacity Cluster A”―will belong to this security group.

[Screenshot: dynamic security group membership criteria]
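Conceptually, dynamic membership is just a predicate evaluated against each virtual machine’s attributes whenever they change. A minimal Python sketch of the rule above (plain dictionaries stand in for the NSX object model; this is not the NSX API):

    import re

    def in_security_group(vm):
        """Example criteria: name contains 'web' AND runs in 'Capacity Cluster A'."""
        return (re.search(r"web", vm["name"], re.IGNORECASE) is not None
                and vm["cluster"] == "Capacity Cluster A")

    vms = [
        {"name": "web-01", "cluster": "Capacity Cluster A"},
        {"name": "db-01",  "cluster": "Capacity Cluster A"},
        {"name": "web-02", "cluster": "Capacity Cluster B"},
    ]
    print([vm["name"] for vm in vms if in_security_group(vm)])   # -> ['web-01']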

 

Security group considerations:

  • Security groups can have multiple security policies assigned to them.
  • A virtual machine can live in multiple security groups at the same time.
  • Security groups can be nested inside other security groups.
  • You can include AND exclude objects from security groups.
  • Security group membership can change constantly.
  • If a virtual machine belongs to multiple security groups, the services applied to it depend on the precedence of the security policy mapped to the security groups.

Security Policies

A security policy is a collection of security services and/or firewall rules. It can contain the following:

  • Guest Introspection services (applies to virtual machines) – Data Security or third-party solution provider services such as anti-virus or vulnerability management services.
  • Distributed Firewall rules (applies to vNIC) – Rules that define the traffic to be allowed to/from/within the security group.
  • Network introspection services (applies to virtual machines) – Services that monitor your network such as IPS and network forensics.

Security services such as vulnerability management, IDS/IPS or next-generation firewalling can be inserted into the traffic flow and chained together.

Security policies are applied according to their respective weight: a security policy with a higher weight has a higher priority. By default, a new policy is assigned the highest weight so it is at the top of the table (but you can manually modify the default suggested weight to change the order).

Multiple security policies may be applied to a virtual machine because either (1) the security group that contains the virtual machine is associated with multiple policies, or (2) the virtual machine is part of multiple security groups associated with different policies. If there is a conflict between services grouped with each policy, the weight of the policies determines which services will be applied to the virtual machine.

For example: If policy A blocks incoming HTTP and has a weight value of 1,000, while policy B allows incoming HTTP with a weight value of 2,000, incoming HTTP traffic will be allowed because policy B has a higher weight.
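The precedence rule is easy to model: among all policies that speak to a given service, the one with the greatest weight wins. A sketch of the HTTP example above (plain data structures, not the NSX object model):

    def effective_action(policies, service):
        """Return (policy name, action) for the highest-weight policy defining the service."""
        applicable = [p for p in policies if service in p["rules"]]
        if not applicable:
            return None
        winner = max(applicable, key=lambda p: p["weight"])
        return winner["name"], winner["rules"][service]

    policies = [
        {"name": "Policy A", "weight": 1000, "rules": {"incoming HTTP": "block"}},
        {"name": "Policy B", "weight": 2000, "rules": {"incoming HTTP": "allow"}},
    ]
    print(effective_action(policies, "incoming HTTP"))   # -> ('Policy B', 'allow')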

The mapping between security groups and security policies results in a running configuration that is immediately enforced. The relationships between all objects can be observed in the Service Composer Canvas.

[Screenshot: the Service Composer Canvas]

 

Each block represents a security group with its associated security policies, Guest Introspection services, firewall rules, network introspection services, and the virtual machines belonging to the group or included security groups.

NSX Service Composer offers a way to automate the consumption of security services and their mapping to virtual machines using a logical policy, and it makes your life easier because you can rely on it to manage your firewall policies; security groups allow you to statically or dynamically include or exclude objects into a container, which can be used as a source or destination in a firewall rule.

Firewall rules defined in security policies are automatically adapted (based on the association between security groups and policies) and integrated into NSX Distributed Firewall (or any third-party firewall). As virtual machines are automatically added and removed from security groups during their lifecycle, the corresponding firewall rules are enforced when needed. With this association, your imagination is your only limit!

In the screenshot below, firewall rules are applied via security policies to a three-tier application; since the security group membership is dynamic, there is no need to modify firewall rules when virtual machines are added to the application (in order to scale-out, for example).

[Screenshot: firewall rules applied via security policies to a three-tier application]

 

Provision, Apply, Automate

Service Composer is one of the most powerful features of NSX: it simplifies the application of security services to virtual machines within the software-defined data center, and allows administrators to have more control over―and visibility into―security.

Service Composer accomplishes this by providing a three-step workflow:

  • Provision the services to be applied:
    • Registering the third-party service with NSX Manager (if you are not using the out-of-the-box security services available)
    • Deploying the service by installing, if necessary, the components required for that service to operate into each ESXi host (“Networking & Security > Installation > Service Deployments” tab)
  • Apply and visualize the security services by applying the security policies to security groups.
  • Automate the application of these services by defining rules and criteria that specify the circumstances under which each service will be applied to a given virtual machine.

Possibilities around the NSX Service Composer are tremendous; you can create an almost infinite number of associations between security groups and security policies to efficiently automate how security services are consumed in the software-defined data center.

You can, for example, combine Service Composer capabilities and VMware vRealize Automation Center to achieve secure, automated, on-demand micro-segmentation. Another example is a quarantine workflow, where, after a virus detection, a virtual machine is automatically and immediately moved to a quarantine security group, whose security policies can take action, like remediation, strengthened firewall rules and traffic steering.


Romain Decker is a Technical Solutions Architect in the Professional Services Engineering team and is based in France.

Application Delivery Strategy: A Key Piece in the VDI Design Puzzle

By Michael Bradley and Hans Bader

Let’s face it: applications are the bane of a desktop administrator’s existence. It seems there is always something that makes the installation and management of an application difficult and challenging. Whether it’s a long list of confusing and conflicting requirements or a series of software and hardware incompatibilities, management of applications is one of the more difficult aspects of an administrator’s job.

It’s not surprising that application delivery and management is one of the key areas that often gets overlooked when planning and deploying a virtual desktop infrastructure (VDI), such as VMware’s Horizon View 6. This often-overlooked aspect is a common pitfall hindering many VDI implementations. A great deal of work and effort goes into ensuring that desktop images are optimized, the correct corporate security settings are applied to the operating system, the underlying architecture is built to scale appropriately, and end-user performance is acceptable. These are all important goals that require attention, but the application delivery strategy is frequently missed, forgotten, or even ignored.

Before we go further, let’s take a moment to define application delivery. A long time ago in a cube farm far, far away, application delivery was all about getting the applications installed on the desktop. But with the emergence of new technologies the definition has evolved. Software application delivery is no longer solely about the installation; it has taken on a broader meaning. In today’s end-user environment, application delivery is more about providing the end-user with access to the applications they need. In today’s modern enterprise, end-user access can come in many different forms. Some of the most common examples are:

  • Installing applications directly on the virtual desktop, either manually or by using software such as Microsoft SCCM.
  • Application virtualization using VMware ThinApp or Microsoft’s App-V.
  • Delivering the applications to the desktop using technologies such as VMware App Volumes or Liquidware Labs’ FlexApp.
  • Application presentation using RDS Hosted Applications in VMware Horizon 6.

All these examples are application delivery mechanisms. Each one can solve a different application deployment problem, and each can be used alone or in conjunction with a complementary one – for example, using App Volumes to deliver ThinApps.

An application delivery strategy should be an integral part of your VDI design; it is just as crucial as the physical infrastructure, like storage, networking and processing, and the virtual infrastructure. It is perfectly all right to have a top-notch VDI, but if you can’t deliver new and existing applications to your end-users in a fast and efficient manner, you might just be spinning your bits and bytes. Your end-users need applications delivered efficiently and quickly, or the VDI project becomes a bottleneck. The prime factor to remember about VDI is that it forces you to change the way you operate. Features―such as VMware’s Linked Clone technology―can change the application delivery paradigm that many desktop administrators have grown accustomed to in a physical PC world. Let’s face it: how effective is it to push and install applications to linked-clone desktops every time a desktop refreshes or recomposes?

To this end, if an application delivery strategy is so important, why is it often missed or ignored? There are three primary reasons:

  • First, it is simply forgotten, or the VDI designers don’t realize they need to consider it as part of the design.
  • Second, application delivery is often considered too big a challenge, and no one wants to tackle it when they’re already facing tight deadlines on a VDI project.
  • Third, and probably most commonly heard in enterprise environments, is that there is already a mechanism in place for application delivery for physical PCs, so it is assumed that what exists will suffice.

Once the need for an application delivery strategy is established, you need to determine what goes into one. First, consider all tiers of your applications: tier one, tier two, tier-n. Be sure to identify which are most common, and determine which applications need to be provided to all end-users versus which ones go to just a small subset. That will help determine what could be installed in the base image, as opposed to being delivered by some other mechanism. For instance, Microsoft Office may be an application that would be included in the base image for all users, but a limited-use accounting package may only be required for the accounting team, and therefore delivered another way.

Next, consider the delivery mechanism for your virtual desktops. Are they all full virtual machine desktops – or linked clone desktops? Determining which type you are using will play a major part in what your application delivery strategy looks like. If you are using all full virtual machine desktops―which deserves serious consideration―then you could effectively continue to use the existing application delivery strategy you use for physical PCs. But using linked clones could cause your existing application delivery strategy to become a bottleneck.

Then, you need to consider what technology will work best for you and your applications. Will application virtualization such as ThinApp be a suitable mechanism? Or is RDS Hosted Applications in Horizon 6 a more viable option for application delivery? You may even find the best option is a combination of technologies. Take time to evaluate the pros and cons of each option to ensure the needs of your end-users are met, and met efficiently. One question you should ask is, “Do my end-users have the ability to install their own applications?” If the answer is “yes,” you need to either change corporate policy or select a technology that supports user-installed applications. Keep in mind that an application delivery strategy can vary for different types of users.

Finally, you should consider how to handle one-off situations. There will always be one user, or a small group of users, who require a specialized application that falls outside the realm of your standard application delivery mechanisms. Those instances are rare but inevitable; determining in advance how to address them will help you, as a desktop administrator, respond quickly to the needs of your end-users.

A good VDI implementation is only successful if the end-users can perform their assigned tasks. Nine times out of ten, that requires access to applications. Ensuring you have a strategy in place to ensure delivery of the right applications to the right end-users is vital to the success of any VDI implementation.



Michael Bradley, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for almost 20 years. He is also a VCP5-DCV, VCAP4-DCD, VCP4-DT, VCP5-DT, and VCAP-DTD, as well as an Airwatch Enterprise Mobility Associate.

 


Hans Bader is a Consulting Architect, VMware EUC. Hans has over 20 years of IT experience and joined VMware in 2009. With a focus on helping organizations be operationally ready, he works with customers to avoid common mistakes. He is a strong advocate for proactive load testing of environments before allowing users access. Hans has won numerous consulting awards within VMware.

vCloud Automation Center Disaster Recovery

By Gary Blake

Prior to the release of vCloud Automation Center (vCAC) v5.2, vCAC had no awareness or understanding of virtual machines protected by vCenter Site Recovery Manager. With the introduction of vCAC v5.2, however, VMware now provides enhanced integration so vCAC can correctly discover the relationship between the primary and recovery virtual machines.

These enhancements consist of what may be considered minor modifications, but they are fundamental enough to ensure vCenter Site Recovery Manager (SRM) can be successfully implemented to deliver disaster recovery of virtual machines managed by vCAC.


 

So What’s Changed?

When a virtual machine is protected by SRM a Managed Object Reference ID (or MoRefID) is created against the virtual machine record in the vCenter Server database.

Prior to SRM v5.5, a single virtual machine property called “ManagedBy:SRM,placeholderVM” was created on the placeholder virtual machine object in the recovery site vCenter Server database, but vCAC did not inspect this value, so it would attempt to add a second, duplicate entry into its database. With the introduction of vCAC 5.2, when a data collection is run, vCAC now ignores virtual machines with this value set, thus avoiding the duplicate entry.

In addition, SRM v5.5 introduced a second managed-by property value, “ManagedBy:SRM,testVM,” that is placed on the virtual machine’s vCenter Server database record. When a test recovery is performed and data collection is run at the recovery site, vCAC inspects this value and ignores virtual machines that have it set. This, too, avoids creating a duplicate entry in the vCAC database.

With the changes highlighted above, SRM v5.5 and later—and vCAC 5.2 and later—can now be implemented in tandem with full awareness of each other. However, one limitation still remains when moving a virtual machine into recovery or re-protect mode: vCAC does not properly recognize the move. To successfully perform these machine operations and continue managing the machine lifecycle, you must use the Change Reservation operation – which is still a manual task.
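In effect, the vCAC data collector now applies a filter like the one sketched below during inventory sync. The dictionaries and field names only illustrate the managed-by values described above; they are not an actual vCAC or vSphere API:

    SRM_MARKERS = {"placeholderVM", "testVM"}    # the ManagedBy:SRM,<type> values

    def vms_to_import(vcenter_vms):
        """Skip SRM placeholder and test-recovery VMs so data collection
        does not create duplicate records in the vCAC database."""
        return [vm for vm in vcenter_vms
                if not (vm.get("managed_by") == "SRM"
                        and vm.get("managed_type") in SRM_MARKERS)]

    inventory = [
        {"name": "app-01"},
        {"name": "app-01 (placeholder)", "managed_by": "SRM", "managed_type": "placeholderVM"},
    ]
    print([vm["name"] for vm in vms_to_import(inventory)])   # -> ['app-01']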

Introducing the CloudClient

In performing the investigation around the enhancements between SRM and vCAC just described, and on uncovering the need for the manual change of reservation, I spent some time with our Cloud Solution Engineering team discussing ways to automate this step. They were already developing a tool called CloudClient, essentially a wrapper for our application programming interfaces that allows simple command-line-driven steps to be performed, and suggested it could be extended to support this use case.

Conclusion

In order to achieve fully functioning integration between vCloud Automation Center (5.2 or later) and vCenter Site Recovery Manager, adhere to the following design decisions:

  • Configure vCloud Automation Center with endpoints for both the protected and recovery sites.
  • Perform a manual or automated change of reservation following a vCenter Site Recovery Manager planned migration or disaster recovery.


 

Frequently Asked Questions

Q. When I fail over my virtual machines from the protected site to the recovery site, what happens if I request the built-in vCAC machine operations?

A. Once you have performed a Planned Migration or a Disaster Recovery process, as long as you have changed the reservation within the vCAC Admin UI for the virtual machine, machine operations will be performed in the normal way on the recovered virtual machine.

Q. What happens if I do not perform the Change Reservation step for a virtual machine once I’ve completed a Planned Migration or Disaster Recovery process, and I then attempt to perform the built-in vCAC machine operations on the virtual machine?

A. Depending on which tasks you perform, some things are blocked by vCAC, and you see an error message in the log such as “The method is disabled by ‘com.vmware.vcDR’” and some actions look like they are being processed, but nothing happens. There are also a few actions that are processed regardless of the virtual machine failure scenario; these are Change Lease and Expiration Reminder.

Q. What happens if I perform a re-provision action on a virtual machine that is currently in a Planned Migration or Disaster Recovery state?

A. vCAC will re-provision the virtual machine in the normal manner, where the hostname and IP address (if assigned through vCAC) will be maintained. However, the SRM recovery plan will now fail if you attempt to re-protect the virtual machine back to the protected site as the original object being managed is replaced. It’s recommended that—for blueprints where SRM protection is a requirement—you disable the ‘Re-provision’ machine operation.


Gary Blake is a VMware Staff Solutions Architect & CTO Ambassador