
VMware App Volumes Backup Utility Fling: Introduction

First published on VMware’s End-User Computing blog

By Dale Carter, Chris Halstead and Stéphane Asselin

In December 2014, VMware released VMware App Volumes, and since then the product has gained many new features and an enthusiastic user base. Organizations use App Volumes not only in VMware environments, but also in many Citrix environments.

However, there has been one big request from our App Volumes users: every time I talk to people about App Volumes, they ask how to back up their AppStacks and writable volumes. Standard virtual-machine backup tools cannot back up AppStacks and writable volumes because those VMDKs are not part of the vCenter inventory unless they are connected to a user’s virtual machine (VM). As I talked to other people within VMware, I found this question coming up more and more, so I started to think about how we could help.

Last summer during an internal conference, Travis Wood, Senior Solutions Architect at VMware, and I were throwing around a few ideas of how to address this request, and we came up with the idea of an App Volumes backup tool.

Because I do not have any programming skills, I started talking with Chris Halstead, End-User-Computing Architect at VMware, about the idea for this tool. Chris was instantly excited and agreed that this would be a great solution. Chris and I also enlisted Stéphane Asselin, Senior End-User-Computing Architect, to help with creating and testing the tool.

Over the last couple of months, Chris, Stéphane, and I have been working on the tool, and today we are happy to announce that the App Volumes Backup Utility has been released as a VMware Fling for everyone to download.

Use Case and Benefits

The issue with backing up App Volumes AppStacks and writable volumes is that these VMDK files do not show up in the vCenter inventory unless they are currently in use and connected to a user’s virtual desktop. The standard backup tools do not see the VMDKs on the datastore if they are not in the vCenter inventory, and you do not want to back up these files while users are connected to their desktops.

The use case for this tool was to provide a way to make your backup tools see the AppStack and writable-volume VMDKs when they are not connected to a user’s virtual desktop. We also did not want to create other virtual machines that would require an OS; we wanted to keep the footprint and resources to a minimum, and the cost down.

The benefits of using the App Volumes Backup Utility are

  • It connects AppStacks and writable volumes to a VM that is never in use and that also does not have an OS installed.
  • The solution is quick and uses very few resources. The only resource that the tool does use is a 1 MB storage footprint for each temporary backup VM you create.
  • The tool can be used in conjunction with any standard software that backs up your current virtual infrastructure.

How Does the Tool Work?

DCarter_app-volumes-backup-utility-19

With the App Volumes Backup Utility, we made it easy for your existing backup solution to see and back up all of your AppStacks and writable volumes. This is accomplished in a fairly straightforward way: using the tool, you connect to both your App Volumes Manager and vCenter, and then you create a backup VM. This VM is only a shell, has no OS installed, and has a very small footprint of just 1 MB.

Note: This VM will never be powered on.

After the backup VM is created, you select which AppStacks and writable volumes you want to back up, and you attach them to the backup VM using the App Volumes Backup Utility.

After the AppStacks and writable volumes are attached, you can use your standard backup solution to back up the backup VM, including the attached VMDK files. After the backup is complete, open the tool and detach the AppStacks and writable volumes from the backup VM, and delete the backup VM.
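The attach-backup-detach cycle described above can be sketched schematically. The snippet below is a toy model in plain Python, not the real vSphere APIs, and all names are illustrative; its only point is to show why attaching the VMDKs to the shell VM makes them visible to an inventory-based backup tool.

```python
# Toy model of the App Volumes backup workflow (illustrative only;
# a real implementation would use vSphere APIs, not these classes).

class Datastore:
    """Holds VMDK files; inventory-based backup tools cannot see these."""
    def __init__(self, vmdks):
        self.vmdks = set(vmdks)

class Inventory:
    """vCenter inventory: backup tools see only disks attached to VMs."""
    def __init__(self):
        self.vms = {}           # VM name -> set of attached VMDKs

    def create_shell_vm(self, name):
        self.vms[name] = set()  # no OS, ~1 MB footprint, never powered on

    def attach(self, vm, vmdk):
        self.vms[vm].add(vmdk)

    def detach_all(self, vm):
        self.vms[vm].clear()

    def delete_vm(self, vm):
        del self.vms[vm]

    def visible_vmdks(self):
        return set().union(*self.vms.values())

# 1. AppStacks/writable volumes exist only on the datastore at first.
ds = Datastore(["apps_office.vmdk", "writable_alice.vmdk"])
inv = Inventory()
assert inv.visible_vmdks() == set()      # backup tool sees nothing

# 2. Create the shell backup VM and attach the volumes to it.
inv.create_shell_vm("backup-vm")
for vmdk in ds.vmdks:
    inv.attach("backup-vm", vmdk)
assert inv.visible_vmdks() == ds.vmdks   # now the backup tool sees them

# 3. After the backup completes, detach everything and delete the shell VM.
inv.detach_all("backup-vm")
inv.delete_vm("backup-vm")
```

The invariant the tool relies on is the one asserted in step 2: once the VMDKs hang off a VM object, any backup product that walks the vCenter inventory will pick them up.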

For more details on how to use the tool, see the VMware App Volumes Backup Utility Fling: Instructions.

Download the App Volumes Backup Utility Fling, and feel free to give Chris Halstead, Stéphane Asselin, and me your feedback. You can comment on the Fling site or below this blog post, or find our details on this blog site and connect with us.


Dale Carter

Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End-User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com

Chris Halstead

Chris Halstead is an EUC Architect on the End User Computing Technical Marketing & Enablement team. He has over 20 years’ experience in the End User Computing space. Chris’ experience ranges from managing a global desktop environment for a Fortune 500 company, to managing and providing EUC professional services at a VMware partner, and most recently serving as an End User Computing SE for VMware. Chris has written four other VMware Flings and many detailed blog articles (http://chrisdhalstead.net), has been a VMware vExpert since 2012, and is active on Twitter at @chrisdhalstead

Stéphane Asselin

Stéphane Asselin, with his twenty years of experience in IT, is a Senior Consultant for the Global Center of Excellence (CoE) for the End-User Computing business unit at VMware. In his recent role, he had national responsibility for Canada for EUC planning, designing and implementing virtual infrastructure solutions and all processes involved. At VMware, Stéphane has worked on EUC pre-sales activities, internal IP, product development, and as technical specialist lead on beta programs. He has also been a subject matter expert for Project Octopus, Horizon, View, vCOps and ThinApp. Previously, he was with CA as a Senior Systems Engineer, where he worked on enterprise monitoring pre-sales activities as a technical specialist.

In his current role in the Global Center of Excellence at VMware, he’s one of the resources developing presentation materials and technical documentation for training and knowledge transfer to customers and peer systems engineers. Visit myeuc.net for more information.

Composite USB Devices Step by Step

By Jeremy Wheeler

Users have a love/hate relationship with VDI: they love the ability to access apps and information from any device, at any time, but they hate the usual trade-offs in performance and convenience. If you’re using VMware Horizon View, you’ve already overcome a huge acceptance hurdle, by providing a consistently great experience for knowledge workers, mobile workers and even 3D developers across devices, locations, media and connections.

But sometimes, peripherals don’t behave as expected in a VDI environment, which can lead to user frustration. For example, when someone wants to use a Microsoft LifeCam Cinema camera, they naturally expect to just plug it into a USB port and have it auto-connect to their VDI session. But if anyone in your organization has tried to do this, you already know that’s not the case. Fortunately, there is an easy workaround to fix the problem.

Download the white paper for the VMware-tested fix to this common problem.

 


Jeremy Wheeler is an experienced Consulting Architect for VMware’s Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight Manager. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full-lifecycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, PERL, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware’s Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

How to configure HA LDAP Server with the vRO Active Directory Plug-in Using F5 BIG-IP

By Spas Kaloferov

In this post we will demonstrate how to configure a highly available (HA) LDAP server to use with the VMware vRealize Orchestrator (vRO) Active Directory Plug-in. We will accomplish this task using F5 BIG-IP, which can also be used to achieve LDAP load balancing.

The Problem

The Configure Active Directory Server workflow, part of the vRO Active Directory Plug-in, allows you to configure a single Active Directory (AD) host via IP or URL. For example:

SKaloferov_Configure Active Directory

Q: What if we want to connect to multiple AD domain controller (DC) servers to achieve high availability?
A: One way is to create additional DNS records for those servers with the same name, and use that name when running the workflow to add the AD server. DNS will then return any of the given AD servers in round-robin fashion.

Q: Will this prevent me from hitting a DC server that is down or unreachable?
A: No, health checks are not performed to determine if a server is down.

Q: How can I implement a health checking mechanism to determine if a given active directory domain controller server is down, so that this is not returned to vRO?
A: By using F5 BIG-IP Virtual Server configured for LDAP request.

Q: How can I configure that in F5?
A: This is covered in the next chapter.
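Before turning to the F5 configuration, the difference the Q&A above describes can be sketched in a few lines. This is a hypothetical simulation, not F5 configuration: plain DNS round robin keeps returning a failed domain controller, while a health-checked pool (what the BIG-IP virtual server provides) removes it from rotation first.

```python
from itertools import cycle

dcs = ["dc01", "dc02", "dc03"]
down = {"dc02"}                      # simulate a failed domain controller

# Plain DNS round robin: no health checks, so the down server is
# still handed out to vRO.
rr = cycle(dcs)
answers = [next(rr) for _ in range(6)]
assert "dc02" in answers             # vRO can still hit the dead DC

# Health-checked pool: down members are filtered out of rotation
# before any answer is returned.
healthy = cycle([dc for dc in dcs if dc not in down])
answers = [next(healthy) for _ in range(6)]
assert "dc02" not in answers         # the dead DC is never returned
```

The second half is the behavior you get by fronting the domain controllers with the F5 virtual server and an LDAP health monitor, as covered in the next chapter.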

The Solution

We can configure an F5 BIG-IP device to listen for and satisfy LDAP requests in the same way we configured it for vIDM in an earlier post.

To learn more about how to configure an F5 BIG-IP Virtual Server to listen for and satisfy LDAP requests, visit the “How to set vIDM (SSO) LDAP Site-Affinity for vRA” blog, and read the Method 2: Using F5 BIG-IP chapter.

In this case we will use the same F5 BIG-IP Virtual Server (VS) we created for the vIDM server:

  1. Log in to vRO and navigate to the Workflows tab.
  2. Navigate to Library > Microsoft > Active Directory > Configuration and start the Configure Active Directory Server workflow.
  3. In the Active Directory Host IP/URL field provide the FQDN of the VS you created.
  4. Fill in the rest of the input parameters as per your AD requirements.
  5. Click Submit.

SKaloferov_Active Directory Server

Go to the Inventory tab; you should see that the LDAP server has been added, and you should be able to expand and explore the inventory objects coming from that plug-in.

SKaloferov_LDAP

Now, in my case, I have two LDAP servers behind the virtual server.

SKaloferov_F5 Standalone

I will shut the first one down and see if vRO will continue to work as expected.

SKaloferov_F5 Standalone Network Map

Right-click the LDAP server and select Reload.

SKaloferov_LDAP Reload

Expand again and explore the LDAP server inventory. Since there is still one LDAP server that can satisfy requests, it should work.

Now let’s check to see what happens if we simulate a failure of all the LDAP servers.

SKaloferov_LDAP Pool

Right-click the LDAP server and select Reload.

You should see an error because there are no LDAP servers available to satisfy queries.

SKaloferov_Plugin Error

Additional resources

My dear friend Oliver Leach wrote a blog post on a similar/related topic. Make sure to check it out at: “vRealize Orchestrator – connecting to more than one domain using the Active Directory plugin.”


Spas Kaloferov is an acting Solutions Architect and member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

EUC Design Series: Horizon 7 Strategy for Desktop Evolution to IoT Revolution

By TJ Vatsa

Introduction

Mobility and end-user computing (EUC) are evolving at a very rapid pace. With the recent announcements made by VMware around Horizon 7, it becomes all the more important to recalibrate and remap the emerging innovation trends to your existing enterprise EUC and application-rationalization strategies. For business and IT leaders, burning questions emerge:

  • “What are these EUC innovations leading to, and why should it matter to my organization?”
  • “What is the end-user desktop in the EUC realm evolving into, and are these innovations a precursor to an IoT (Internet of Things) revolution?”
  • “What outcomes might we expect if we were to adopt these innovations in our organizations?”
  • “How do we need to restructure our existing EUC/mobility team to fully leverage the mobility evolution?”

Now there are enough questions to get your creative juices flowing! Let’s dive right in.

The What

Desktop virtualization revolutionized how end-user desktops, with their applications and data, were securely managed within the guard rails of a secure data center. These were essentially Generation1 (Gen1) desktops: persistent (AKA full-clone) desktops within a virtual machine (VM) container. While the benefit was mainly secure encapsulation within a data center, the downside was cumbersome provisioning with a bloated storage footprint. For instance, if you had 100 users, each with a persistent desktop built from a 50 GB base image, you would be looking at 5,000 GB (5 TB) of storage. In an enterprise with thousands of users with unique operating system and application requirements, the infrastructure capital expenditures (CAPEX) and the associated operational expenditures (OPEX) would be through the roof.

The preceding scenario was solved by Generation2 (Gen2) virtual desktops, classified as non-persistent (AKA linked-clone) desktops. Gen2 desktops relied on a parent base image (AKA a replica); the resulting linked clones referenced this replica for all read operations and had delta disks to store any individual writes. These desktops benefited from faster provisioning automation using a Composer server, which generated linked clones referencing a base replica image. This resulted in a significant reduction in the storage footprint and faster desktop provisioning times, which in turn reduced the CAPEX and OPEX incurred with Gen1 desktops. However, the downside of desktop boot-up times was still not fully resolved, because boot-up time depends on the storage media being used: faster with flash storage, comparatively slower with spinning media. The OPEX associated with application management was also not fully resolved, despite application virtualization technologies offered by various vendors; it still required management of multiple patches for desktop images and applications.
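The storage arithmetic behind the Gen1-to-Gen2 shift is easy to check. The numbers below reuse the 50 GB base image and 100 users from the Gen1 example; the 5 GB delta disk per linked clone is an assumed, illustrative figure, since real delta sizes vary with workload.

```python
users = 100
base_image_gb = 50        # persistent (full-clone) image size from the text
delta_gb = 5              # assumed per-clone delta disk size (illustrative)

# Gen1: every user gets a full copy of the base image.
gen1_gb = users * base_image_gb
assert gen1_gb == 5_000   # 5,000 GB, i.e. 5 TB, as in the text

# Gen2: one shared replica plus a small delta disk per linked clone.
gen2_gb = base_image_gb + users * delta_gb
assert gen2_gb == 550     # roughly a 9x reduction under these assumptions

print(f"Gen1: {gen1_gb} GB, Gen2: {gen2_gb} GB")
```

Under these assumptions the footprint drops from 5 TB to about 550 GB, which is the "significant reduction" the paragraph describes.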

The panacea offered by the new Horizon 7 has accelerated the virtual desktop evolution to Generation3 (Gen3) desktops. Evolution to Gen3 results in just-in-time desktops and application stack delivery. This means you only have to patch the desktop once, clone it with its running state, and dynamically attach the application stack using VMware’s App Volumes. Gen3 virtual desktops from VMware have the benefits of Gen2 desktops, but without the operational overhead, resulting in reduced CAPEX and OPEX. Here is an infographic detailing the evolution:

TVatsa_Clone Desktop VM

Gen3 desktops pave the way for a Generation4+ (Gen4+) mobility platform that combines VMware’s Enterprise Mobility Management (EMM) platform and the EUC platform into Workspace ONE, capable of tapping into all of the possibilities of mobility-enabled IoT solutions. The potential of these solutions can be tapped across various vertical industries—healthcare, financial, retail, education, manufacturing, government and consumer packaged goods—creating an IoT revolution in days to come.

The Why

The innovations listed in the preceding section have the potential to transform an enterprise’s business, IT and financial outcomes. The metrics to quantify these outcomes are best measured in the resulting CAPEX and OPEX reductions. The reduction in these expenditures not only fosters business agility, such as accelerated mergers and acquisitions (M&A), but also enhances an organization’s workforce efficiency. The proof is in the pudding. Here is a sample snapshot of the outcomes from a healthcare customer:

TVatsa_Healthcare Customer Diagram

The How

While the mobility evolution and its leap to an IoT revolution are imminent, with the promise of the outcomes mentioned earlier, the question still lingers: how do you align the roles within your organization to ride the wave of mobility transformation?

Here is a sample representation of the recommended roles for an enterprise mobility center of excellence (COE):

TVatsa_COE

Here is the description of field recommendations in terms of mandatory and recommended roles for an enterprise EUC/mobility transformation:

TVatsa_Proposed Org Roles

Conclusion

Given the rate at which enterprise mobility is evolving towards IoT, it is only a matter of time before every facet of our lives, from our work to our home environments, is fully transformed by this tectonic, mobility-driven IoT transformation. VMware’s mobility product portfolio, in combination with VMware’s experienced Professional Services Organization (PSO), can help you carry your enterprise onward in this revolutionary journey. VMware is ever-ready to be your trusted partner in this “DARE” endeavor. Until next time, go VMware!


TJ Vatsa is a principal architect and member of CTO Ambassadors at VMware representing the Professional Services organization. He has worked at VMware for more than five years and has more than 20 years of experience in the IT industry. During this time he has focused on enterprise architecture and applied his extensive experience in professional services and R&D to cloud computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, enterprise data services and technical project management.

VMware User Environment Manager 9.0 – What’s New

By Dale Carter

Earlier this month VMware released a new version of User Environment Manager that brings some new and exciting features, not only to User Environment Manager, but also to the Horizon Suite. To learn about the new features in Horizon 7 you can see my blog here.

Here, I would like to highlight the main new features of VMware User Environment Manager 9.0.

Smart Policies

The new Smart Policies offer more granular control of what users can do when they connect to their virtual desktop or applications. With the first release of Smart Policies you will be able to manage these capabilities based on the following conditions:

  • Horizon Conditions
    • View Client Info (IP and name)
    • Endpoint location (Internal/External)
    • Tags
    • Desktop Pool name
  • Horizon Capabilities
    • Clipboard
    • Client drive
    • USB
    • Printing
    • PCoIP bandwidth profiles

For more information on these capabilities, see my more detailed blog here.

It should be noted that to use Smart Policies you will need Horizon 7 and User Environment Manager 9. You will also need the latest View Agent and clients installed to take advantage of these new features. Also note that these policies only work with the PCoIP and Blast Extreme protocols, not RDP.

Application Authorization (Application Blocking)

This feature gives administrators the ability to white- or black-list applications or folders. In the example below you can see that some applications are allowed and some will be blocked.

Application Blocking

Using this feature with User Environment Manager’s conditions gives administrators great control not only over which applications users can use, but also over how they can be used. For example, if a user is on the internal network, they have access to company-specific applications; if they access their desktop from an external network, those applications are not available.

With a simple check of a box, administrators have a very simple model for enforcing which applications users are authorized to use, and using conditions in this way can result in a different set of applications depending on where the user connects from.
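Conceptually, pairing application blocking with conditions is just a predicate over an (application, location) pair. The sketch below is a hypothetical model of that logic, not User Environment Manager’s actual policy engine; the application names and location labels are made up for illustration.

```python
# Hypothetical model of condition-based application authorization.
# Internal sessions may run company-specific apps; external sessions
# see a reduced allow-list.
ALLOWED = {
    "internal": {"finance_app.exe", "hr_portal.exe", "notepad.exe"},
    "external": {"notepad.exe"},   # company-specific apps blocked outside
}

def is_authorized(executable: str, location: str) -> bool:
    """Return True if the app may run for a session from this location."""
    return executable in ALLOWED.get(location, set())

assert is_authorized("finance_app.exe", "internal")
assert not is_authorized("finance_app.exe", "external")   # blocked outside
assert is_authorized("notepad.exe", "external")
```

The same shape extends naturally to the other conditions listed earlier (client info, tags, pool name): each one just narrows the allow-list for the session.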

Enable Application Blocking

ThinApp Support

When clicking on the DirectFlex tab of an application you will now see the new check box to Enable ThinApp Support for that application.

Enable ThinApp Support

When this is selected, you will be able to manage what happens within the ThinApp “bubble” from within User Environment Manager, rather than by setting specific values during the ThinApp capture process or afterward via a script. This integration generalizes the approach that packagers can take when choosing isolation or encapsulation: they no longer have to bake every configuration decision into the capture process by setting isolation modes or creating separate packages for different application configurations.

You should also note that you do not need to configure a separate application within User Environment Manager to take advantage of this. If the box is checked, the Flex agent will detect whether the application is natively installed or accessible via ThinApp, and automatically apply the correct settings.

Manage Personal Data

User Environment Manager now has the ability to easily manage personal data. This would include things like My Documents, My Music, My Pictures, etc.

The example below shows how easy this is to configure.

Personal Data Folder Redirection

Office 2016 Support

User Environment Manager 9.0 now supports Office 2016. As you can see from the example below, this also includes Skype for Business and OneDrive. Just like with earlier versions, these can all be added with the Easy Start button.

File Structure

New User Environment Manager Conditions

As part of the new deep integration with Horizon 7, User Environment Manager has added a number of new conditions that can be pulled from Horizon 7. These include Pool-Name, Tags, and client location – such as internal or external.

Horizon Client Property


Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End-User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on twitter @vDelboy

VMware Horizon 7 New Features

By Dale Carter

With the release of VMware Horizon 7, I thought I would highlight some of the new features that have been added with this release.

Blast Extreme Protocol

With Horizon 7, VMware has brought the Blast Extreme protocol up to the same level as PCoIP and RDP. You can now use Blast Extreme when connecting via HTML5, and also when connecting to a virtual desktop or RDSH application using the VMware Horizon client on any device.

DCarter_Edit LocalA

Just as with PCoIP and RDP, VMware Horizon Administrators will be able to configure the Blast Extreme protocol as the default protocol for both desktop and application pools.

DCarter_Edit Global Entitlement

Blast Extreme will be available not only for standard desktop and application pools, but also for global pools configured with Cloud Pod Architecture.

VMware Instant Clone Technology

VMware Instant Clone is the long-awaited technology, built on the VMware Fork technology previewed at VMworld, that VMware has been working on for some time. Instant Clone helps create the just-in-time desktop: a new virtual desktop can be created in seconds, and thousands of virtual desktops can be created in a very short time. This is one of the best features of the VMware Horizon 7 release, and I believe VMware Horizon administrators are going to love creating desktop pools with this new Instant Clone technology.

For information on configuring the new VMware Horizon Instant Clone technology, see my blog here.

Cloud Pod Architecture

The two main updates to Cloud Pod Architecture are scale and home site improvements. I have written two new blogs to cover these new updates:

Cloud Pod Architecture New Features

Update to How CPA Home Sites Work with VMware Horizon 7

Smart Policies

The new Smart Policies are a way to have more granular control of what users can access when they connect to their virtual desktop or applications. With the first release of Smart Policies, you will be able to set the following policies based on certain conditions:

  • VMware Horizon Conditions
    • View client info (IP and name)
    • Endpoint location (Internal/external)
    • Tags
    • Desktop pool name
  • VMware Horizon Capabilities
    • Clipboard
    • Client drive
    • USB
    • Printing
    • PCoIP bandwidth profiles

For more information on these capabilities, see my more detailed blog here.

To use Smart Policies, you will need VMware Horizon 7 and User Environment Manager 9. You will also need the latest View agent and clients installed to take advantage of these new features. The other thing to note is that these policies only work with the PCoIP and Blast Extreme protocols, not RDP.

Desktop Pool Deletion

The Desktop Pool Deletion feature answers a frequent request from customers who want to stop administrators from deleting a desktop pool that currently has active desktops within it. With VMware Horizon 6.x and earlier versions, it was possible for an administrator to accidentally delete a desktop pool and all the VMs within that pool. This new feature, when enabled, will stop that from happening. To enable this feature, follow the instructions in my blog here.

These are just some of the new features that have been released with VMware Horizon 7. For a full list of the new features, check out the release notes.


Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End-User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. He currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on twitter @vDelboy

3 Reasons VMware Horizon 7 Will Make You Smile

By Michael Bradley

The June 2014 release of VMware Horizon® 6 brought with it a long list of exciting new features. Cloud Pod Architecture (CPA), RDS-hosted desktops and applications, and integration with VMware vSAN were just a few of the headlines that sent desktop administrators rushing to upgrade.

Although the new features marked huge advances in availability and scalability, they came with certain, shall we say, nuisances. These nuisances had a way of popping up at the most inopportune times, and although not showstoppers by any stretch of the imagination, could become very irritating very quickly. Now, I’m the kind of guy who is easily irritated by nuisances, so, seeing the list of features coming with Horizon 7 made me smile. With this upcoming release, VMware is introducing enhancements that fix three of the items on my personal list of nuisances in VMware Horizon 6. Let’s take a look.

Cloud Pod Architecture Home Sites

The introduction of Cloud Pod Architecture was a huge step forward in providing true high availability and scalability for a VMware Horizon 6 virtual desktop infrastructure. The ability to easily span pools across multiple data centers had been something that VMware customers had been requesting for some time. For the most part, Cloud Pod Architecture did exactly what it was designed to do. However, there was one small thing about it that really irritated me: home sites.

A home site is the affinity between a user and a Cloud Pod Architecture site. Home sites ensure that users always receive desktops from a particular data center, even when they are traveling. Home sites were a nice idea, and worked wonderfully, in most circumstances.

What I found to be irritating was the fact that if resources were unavailable in the user’s assigned home site, Cloud Pod Architecture would stop searching for available desktop/app sessions and deny access to the user, even if there were resources available in an alternate site.

HomeSites

The good news is that, with the release of VMware Horizon 7, this behavior has changed. When a user who is assigned a home site logs in to VMware Horizon, Cloud Pod Architecture will search for available resources in that user’s home site. However, if no available resources can be found, Horizon will search other eligible sites and, if found, assign an available desktop/app session to the user.
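The changed behavior amounts to a fallback search: try the home site first, then any other eligible site. Here is a minimal sketch of that logic with hypothetical names; it is a conceptual model, not the Cloud Pod Architecture implementation.

```python
def find_session(home_site, sites):
    """Pick a site to serve a desktop/app session.

    `sites` maps site name -> number of available sessions.
    Horizon 6 behavior: look only at the home site.
    Horizon 7 behavior: fall back to other eligible sites when the
    home site has no capacity.
    """
    if sites.get(home_site, 0) > 0:
        return home_site
    for site, free in sites.items():       # fallback added in Horizon 7
        if site != home_site and free > 0:
            return site
    return None                            # no resources anywhere

sites = {"london": 0, "newyork": 12}       # home site exhausted
assert find_session("london", sites) == "newyork"   # access granted
assert find_session("london", {"london": 0, "newyork": 0}) is None
```

Under Horizon 6 the first lookup would have failed outright once the home site was exhausted; the loop is the new part.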

Certificate Single Sign-On

This problem is not uncommon for users logging in to a VMware Horizon® View™ environment using RADIUS, RSA SecurID, or VMware Identity Manager™. In each of these situations, the user may not have entered their Active Directory (AD) credentials, and although VMware Horizon “trusts” that user, they may still be forced to enter their AD credentials to access their Windows desktop, depending on the two-factor authentication requirements and implementation.

This changes with the introduction of certificate SSO. In VMware Horizon 7, certificate SSO allows users to authenticate to a Windows desktop without requiring AD credentials or a smart card. Authentication is based on a patented process whereby a short-lived certificate is created specifically for the user, allowing authentication to a single Windows session, which then logs the user in. In all cases, the user will have previously been authenticated through another service using other non-AD mechanisms, such as biometrics, SecurID, RADIUS, or VMware Identity Manager. The VMware Horizon 7 session is launched using Security Assertion Markup Language (SAML), and the SAML assertion includes a reference to the user’s UPN, which is then used to generate a custom certificate for the logon process.

Desktop Pool Deletion

It’s the stuff of nightmares. A VDI administrator working in the VMware Horizon administrator console accidentally clicks “Delete” on the desktop pool that contains the desktops for every executive in the company. As the administrator watches each desktop delete, all he can do is update his resume and wait for the hammer to fall. If you’ve woken up in a cold sweat from this recurring nightmare, then you are in luck.

With the release of VMware Horizon 7, administrators can only delete desktop pools that are empty. If you try to delete a pool that contains desktops, a message is displayed informing the administrator that the pool contains desktops. To delete a desktop pool, you must disable provisioning and then delete all of the desktops from inventory first. This makes it virtually impossible to accidentally delete a desktop pool, allowing desktop administrators everywhere to sleep a little easier.
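The new safeguard is easy to express as a rule: deletion is refused unless the pool is empty. The sketch below is a hypothetical model of that rule, not Horizon’s actual code; pool and VM names are invented for illustration.

```python
class PoolNotEmptyError(Exception):
    """Raised when a delete is attempted on a pool that still has desktops."""

def delete_pool(pools, name):
    """Delete a desktop pool, but only if it contains no desktops.

    Mirrors the Horizon 7 rule: disable provisioning and remove all
    desktops from inventory first, then the pool itself can be deleted.
    """
    if pools[name]:                       # still contains desktops
        raise PoolNotEmptyError(f"pool {name!r} still contains desktops")
    del pools[name]

pools = {"executives": ["exec-vm-01", "exec-vm-02"], "empty-pool": []}

try:
    delete_pool(pools, "executives")      # the nightmare scenario...
except PoolNotEmptyError:
    pass                                  # ...is now blocked
assert "executives" in pools              # pool and its desktops survived

delete_pool(pools, "empty-pool")          # empty pools delete normally
assert "empty-pool" not in pools
```

The guard clause is the whole feature: the accidental click now fails loudly instead of silently destroying the pool.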

DeletePool

So, VMware Horizon 7 doesn’t fix nuisances like traffic jams, global warming, or nuclear proliferation, but I’m excited to see its new features and enhancements, and I’m pleased to say that there are plenty more where they came from.


Michael Bradley, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for almost 20 years. He is also a VCP5-DCV, VCAP4-DCD, VCP4-DT, VCP5-DT, and VCAP-DTD, as well as an Airwatch Enterprise Mobility Associate.

Hybrid Cloud and Hybrid Cloud Manager

By Michael Francis

Disclaimer: This blog is not a technical deep dive on Hybrid Cloud Manager; it discusses the components of the product and the design decisions around it. It assumes the reader has knowledge of the product and its architecture.

Recently, I have been involved with the design and deployment of Hybrid Cloud Manager for some customers. It has been a very interesting exercise to work through the design and the broader implications.

Let’s start with a brief overview of Hybrid Cloud Manager. Hybrid Cloud Manager comprises a set of virtual appliances that reside both on-premises and in vCloud Air. The product is divided into a management plane, a control plane, and a data plane.

  • The management plane is instantiated by a plugin in the vSphere Web Client.
  • The control plane is instantiated by the Hybrid Cloud Manager virtual appliance.
  • The data plane is instantiated by a number of virtual appliances – the Cloud Gateway, the Layer 2 Concentrator, and the WAN Optimization appliance.

The diagram below illustrates these components and their relationships to each other on-premises and the components in vCloud Air.


Figure 1 – Logical Architecture Hybrid Cloud Manager

The Hybrid Cloud Manager provides virtual machine migration capability, which is built on two functions: virtual machine replication[1] and Layer 2 network extension. The combination of these functions provides an organization with the ability to migrate workloads without the logistical and technical issues traditionally associated with migrations to a public cloud; specifically, the outage time to copy on-premises virtual machines to a public cloud, and virtual machine re-addressing.

During a recent engagement that involved the use of Hybrid Cloud Manager, it became very obvious that even though this functionality simplifies the migration, it does not diminish the importance of the planning and design effort prior to any migration exercises. Let me explain.

Importance of Plan and Design

When I discuss a plan, I am really discussing the importance of a discovery exercise that deeply analyses the on-premises virtual workloads. This is critical: because Hybrid Cloud Manager creates such a seamless extension of the on-premises environment, we need to understand:

  • Which workloads will be migrated
  • Which networks the workloads reside on
  • What compute isolation requirements exist
  • How and where network access control is instantiated on-premises

Modification of a virtual network topology in the public cloud can be a disruptive operation, just as it is in the data center. Introducing the ability to stretch Layer 2 network segments into the public cloud, and migrating workloads out of a data center into the public cloud, increases the number of networks and the complexity of their topology in the public cloud. So the more planning that can be done early, the less likely it is that disruptions to services will need to occur later.

One of the constraints in the solution revolves around stretching Layer 2 network segments. A Layer 2 network segment located on-premises can be ‘stretched’ to only one virtual data center in vCloud Air. So there are implications for which workloads exist on a network segment, and which vCloud Air virtual data center will host the workloads on that on-premises segment. This obviously influences the creation of virtual data centers in vCloud Air, and the principles defined in the design that determine when additional virtual data centers are stood up rather than growing an existing virtual data center.

Ideally, an assessment of on-premises workloads would be performed prior to any hybrid cloud design effort. This assessment would be used to size subsequent vCloud Air virtual data centers; plus, it would discover information about the workload resource isolation that drives the need for workload separation into multiple virtual data centers. For instance, the requirement to separate test/development workloads from production workloads with a ‘hard’ partition would be one example of a requirement that would drive a virtual data center design.

During this discovery we would also identify which workloads reside on which networks, and which networks require ‘stretching’ into vCloud Air. This would surface any issues we may face due to the constraint that we can only stretch a Layer 2 segment into one virtual data center.[2] This assessment really forms the ‘planning’ effort in this discussion.
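As a planning aid, the one-vDC-per-stretched-segment check described above can be sketched in a few lines of Python. This is a hypothetical helper, not part of the product; the segment and vDC names are placeholders:

```python
from collections import defaultdict

def stretch_conflicts(stretch_requests):
    """Given (segment, vdc) stretch requests, return the segments that would
    be stretched to more than one vCloud Air virtual data center, which the
    Layer 2 extension constraint does not allow."""
    targets = defaultdict(set)
    for segment, vdc in stretch_requests:
        targets[segment].add(vdc)
    return {seg: sorted(vdcs) for seg, vdcs in targets.items() if len(vdcs) > 1}

plan = [
    ("vlan-110", "vdc-prod"),
    ("vlan-120", "vdc-prod"),
    ("vlan-120", "vdc-test"),  # conflict: same segment requested in a second vDC
]
print(stretch_conflicts(plan))  # {'vlan-120': ['vdc-prod', 'vdc-test']}
```

Running a check like this over the discovery data surfaces, before any migration starts, the segments whose workloads would need to be split or re-homed.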

Design Effort

The design effort involves designs for vCloud Air and Hybrid Cloud Manager. I believe the network design of vCloud Air is a critical element. We need to determine:

  • Whether to use dynamic or static routing
  • The subnet design and its relationship to route summarization
  • Routing paths to the Internet
  • The estimated throughput required for any virtual routing devices
  • Any other virtual network services required
  • Whether to use the egress optimization functionality of Hybrid Cloud Manager
  • Where network security access points are required

The other aspect is the design of the virtual compute containers – the virtual data centers in vCloud Air. The design for vCloud Air should define the expected virtual data center layout over the lifecycle of the solution: the compute resources assigned to each virtual data center initially, and over the lifecycle as anticipated growth is factored in. As usage grows, the throughput requirements on the networking components in vCloud Air will increase, so the design should articulate guidance on when the virtual routing devices will need to be increased in size.

The vCloud Air platform is an extension of the on-premises infrastructure. It is a fundamental expectation that operations teams have visibility into the health of the infrastructure, and that capacity planning of infrastructure is taking place. Similarly, there is a requirement to ensure that the vCloud Air platform and associated services are healthy and capacity managed. We should be able to answer the question, “Are my virtual data center routing devices of the right size, and is their throughput sufficient for the needs of the workloads hosted in vCloud Air?” Ideally we should have a management platform that treats vCloud Air as an extension to our on-premises infrastructure.

This topic could go much deeper, and there are many other considerations as well, such as, “Should I place some management components in vCloud Air,” or, “Should I have a virtual data center in vCloud Air specifically assigned to host these management components?”

I believe many people today take an agile approach to their deployment of public cloud services, such as networking and virtual compute containers. But if you are implementing a hybrid interface such as the one offered by Hybrid Cloud Manager, I believe there is real benefit in taking a longer-term view of the design of vCloud Air services, to minimise the risk of painting ourselves into a corner in the future.

Some Thoughts on Hybrid Cloud Manager Best Practices

Before wrapping up this blog, I wanted to provide some thoughts on some of the design decisions regarding Hybrid Cloud Manager.

In a recent engagement we considered best practices for placement of appliances, and we came up with the following design decisions.

MFrancis_Design Decision 1

MFrancis_Design Decision 2

MFrancis_Design Decision 3

Key Takeaways

The following are the key takeaways from this discussion:

  • As Hybrid Cloud Manager provides a much more seamless extension of the on-premises data center, deeper thought and consideration needs to be put into the design of the vCloud Air public cloud services.
  • To effectively design vCloud Air services for Hybrid Cloud requires a deep understanding of the on-premises workloads, and how they will leverage the hybrid cloud extension.
  • Network design, and the operational changes required for ongoing network access control, need to be considered.
  • Management and monitoring of the vCloud Air services, which act as an extension of the data center, need to be included in the scope of a Hybrid Cloud solution.

[1] This leverages the underlying functionality of vSphere Replication, but is not a full vSphere Replication architecture.

[2] This constraint could be overcome; however, the solution would require configurations that would make other elements of the design sub-optimal; for example, disabling the use of egress optimization.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

Configuring NSX SSL VPN-Plus

By Spas Kaloferov

One of the worst things you can do is buy a great product like VMware NSX Manager and not use its vast number of capabilities. If you are one of those people and want to “do better,” then this article is for you. We will take a look at how to configure the SSL VPN-Plus functionality in VMware NSX. With SSL VPN-Plus, remote users can connect securely to private networks behind an NSX Edge gateway, giving them access to servers and applications in those private networks.

Consider a software development company that has made a design decision to extend its existing network infrastructure and allow remote users access to some segments of its internal network. To accomplish this, the company will utilize its existing VMware NSX Manager network infrastructure platform to create a Virtual Private Network (VPN).

The company has identified the following requirements for their VPN implementation:

  • The VPN solution should utilize an SSL certificate for communication encryption and be usable with a standard Web browser.
  • The VPN solution should use Windows Active Directory (AD) as the identity source to authenticate users.
  • Only users within a given AD organizational unit (OU) should be granted access to the VPN.
  • Users should authenticate to the VPN using their User Principal Names (UPNs).
  • Only users whose accounts have specific characteristics, such as an Employee ID associated with the account, should be able to authenticate to the VPN.

If you have followed one of my previous articles, Managing VMware NSX Edge and Manager Certificates, you have already taken the first step towards configuring SSL VPN-Plus.

Configuring SSL VPN-Plus is a straightforward process, but fine-tuning its configuration to meet your needs can sometimes be a bit tricky, especially when configuring Active Directory for authentication. We will look at a couple of examples of how to use the Login Attribute Name and Search Filter parameters to fine-grain filter the users who should be granted VPN access.
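As an illustration, the requirements above map onto the authentication server settings roughly as follows. The OU, domain, and base DN here are placeholders, not values from a real environment: restricting users to a given OU is done through the search base, a Login Attribute Name of userPrincipalName lets users sign in with their UPN, and the Search Filter admits only user accounts that have an Employee ID set:

```
Login Attribute Name:  userPrincipalName
Search Base:           ou=RemoteWorkers,dc=corp,dc=example,dc=com
Search Filter:         (&(objectClass=user)(employeeID=*))
```

The Search Filter uses standard LDAP filter syntax, so additional clauses (for example, a memberOf test for a specific group) can be ANDed in the same way.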

Edit Authentication Server tab on VMware NSX Edge:

SKaloferov Edit Authentication Server

Please visit Configuring NSX SSL VPN-Plus to learn more about the configuration.


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

“Network Functions Virtualization (NFV) for Dummies” Blog Series – Part 2

Common Use Cases for the Transition to NFV

By Gary Hamilton

In my previous blog post I discussed the question – What is NFV? This blog post looks at the network functions that will be delivered as virtual network functions (VNFs) instead of as hardware appliances. SDx Central offers a nice article that helps with some of these definitions.

In very simple terms (remember, this is a blog series for IT people, not network experts), a network function is a network capability that provides application and service support, and can be delivered as a composite whole, like an application. SDx Central does a good job in the aforementioned article of grouping these network functions by categorizing them into umbrella, or macro, use cases. By leveraging these macro use cases, I am able to provide a layman’s description of what each use case is attempting to achieve and the service support it delivers. The macro use cases I will focus on are “virtual customer edge” and “virtual core and aggregation”, because these are the two use cases generally being tackled first from an NFV perspective.

Use Case – Connecting a remote office (using vCPE)

In layman’s terms, the SDx Central “customer edge” use case focuses on how to connect a remote office, or remote branch, to a central data centre network, and on extending the central data centre’s network services into that remote office. To deliver this connectivity, a CPE (customer premises equipment) device is used. Generally, the types of devices used would be routers, switches, gateways, firewalls, and so on (these are all CPEs), providing anything from Layer 2 QoS (quality of service) services to Layer 7 intrusion detection. The Layer 2 and Layer 7 references are from the OSI model. vCPE (virtual customer premises equipment) is the virtual CPE, delivered using the NFV paradigm.

The following diagram is taken from the ETSI use case document (GS NFV 001 V1.1.1, 2013-10), which refers to the vCPE device as vE-CPE. (I’ll discuss the significance of ETSI in a future blog post.)

This diagram illustrates how vCPEs are used to connect the remote branch offices to the central data centre. It also illustrates that it is OK to mix non-virtualised CPEs with vCPEs in an infrastructure. Just as in the enterprise IT cloud world, the data and applications leveraging the virtual services are not aware – and do not care – whether these services are virtual or physical. The only thing that matters is whether the non-functional requirements (NFRs) of the application are effectively met. Those NFRs include requirements like performance and availability.

This particular use case has two forms, or variants –

  • Remote vCPE (or customer-premise deployed)
  • Centralised vCPE (deployed within the data centre)

The diagram below shows examples of both variants, where vCPEs are deployed in branch offices, as well as centrally. The nature of the application being supported, and its NFRs, would normally dictate placement requirements. A satellite/cable TV set-top box is a consumer example of a “customer-premise deployed” CPE.

GHamilton Customer Premise Deployed CPE

Use Case – Virtualising the mobile core network (using vIMS)

The SDx Central “virtual core and aggregation” use cases are focused on the mobile core network (Evolved Packet Core – EPC) and the IP Multimedia Subsystem (IMS). In layman’s terms, this is about the transportation of packets across a mobile operator’s network, with a focus on mobile telephony.

IMS is an architectural network framework for the delivery of telecommunications services using IP (internet protocol). When IMS was conceived in the 1990s by the 3rd Generation Partnership Project (3GPP), it was intended to provide an easy way for the worldwide deployment of telecoms networks that would interface with the existing public switched telephone network (PSTN), thereby providing flexibility, expandability, and the easy on-boarding of new services from any vendor. It was also hoped that IMS would provide a standard for the delivery of voice and multimedia services. This vision has fallen short in reality.

IMS is a standalone system, designed to act as a service layer for applications. Inherent in its design, IMS provides an abstraction layer between the application and the underlying transport layer, as shown in the following diagram of the 3GPP/TISPAN IMS architecture overview.

An example of an application based on IMS is VoLTE (“Voice over LTE”), which carries voice calls over the 4G LTE wireless network – delivering the kind of voice and video calling that consumers also know from over-the-top applications such as Skype and Apple’s FaceTime.

GHamilton 3GPP and TISPAN

Use Case – Virtualising the mobile core network (using vEPC)

While IMS is about supporting applications by providing application server functions, like session management and media control, EPC is about the core network, transporting voice, data and SMS as packets.

EPC (Evolved Packet Core) is another initiative from 3GPP for the evolution of the core network architecture for LTE (Long-Term Evolution – 4G). The 3GPP website provides a very good explanation of its evolution, and a description of LTE, here.

In summary, EPC is a packet-only network for data, voice and SMS, using IP. The following diagram shows the evolution of the core network, and the supporting services.

  • GSM (2G) relied on circuit-switching networks (the aforementioned PSTN)
  • GPRS and UMTS (3G) are based on a dual-domain network concept, where:
    • Voice and SMS still utilise a circuit-switching network
    • But data uses a packet-switched network
  • EPS (4G) is fully dependent on a packet-switching network, using IP.

GHamilton Voice SMS Data
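The evolution described in the list above can be summarised as a small lookup table; here is a sketch in Python (the table simply encodes the circuit-switched/packet-switched split described above):

```python
# Circuit-switched (CS) vs. packet-switched (PS) domains per generation,
# as described above (2G GSM offered no packet data service).
CORE_NETWORK = {
    "GSM (2G)":       {"voice": "CS", "sms": "CS", "data": None},
    "GPRS/UMTS (3G)": {"voice": "CS", "sms": "CS", "data": "PS"},
    "EPS (4G)":       {"voice": "PS", "sms": "PS", "data": "PS"},
}

def is_all_ip(generation):
    """True only if every service in that generation rides the packet-switched (IP) core."""
    return all(domain == "PS" for domain in CORE_NETWORK[generation].values())

print(is_all_ip("EPS (4G)"))        # True
print(is_all_ip("GPRS/UMTS (3G)"))  # False
```

Only EPS (4G) is “all IP,” which is exactly why EPC is a packet-only network.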

Within an EPS service, EPC provides the gateway services and user management functions as shown in the following diagram. In this simple architecture:

  • A mobile phone or tablet (user equipment – UE) is connected to an EPC over an LTE network (a radio network) via an eNodeB (a radio tower base station).
  • A Serving GW transports IP data traffic (the user plane) between the UE and external network.
  • The PDN GW is the interface between the EPC and external networks, for example the Internet and/or an IMS network, and it allocates IP addresses to the UEs. PDN stands for Packet Data Network.
    • In a VoLTE architecture, a PCRF (Policy and Charging Rule Function) component works with the PDN GW, providing real-time authorisation of users, and setting up the sessions in the IMS network.
  • The HSS (Home Subscriber Server) is a database with user-related and subscriber-related information. It supports user authentication and authorisation, as well as call and session setup.
  • The MME (Mobility Management Entity) is responsible for mobility and security management on the control plane. It is also responsible for the tracking of the UE in idle-mode.

GHamilton EPC E-UTRAN

In summary, EPC, IMS and CPE are all network functions that deliver key capabilities that we take for granted in our world today. EPC and IMS support the mobile services that have become a part of our daily lives, and frankly, we probably would not know what to do without them. CPE supports the network interconnectivity that is a part of the modern business world. These are all delivered using very specialised hardware appliances. The NFV movement is focused on delivering these services using software running on virtual machines, running on a cloud, instead of using hardware appliances.

There are huge benefits to this movement.

  • It will be far less expensive to utilise shared, commodity infrastructure for all services, versus expensive, specialised appliances that cannot be shared.
  • Operational costs are far lower because the skills to support the infrastructure are readily available in the market.
  • It costs far less to bring a new service on-board, because it entails deploying some software in VMs, versus the acquisition and deployment of specialised hardware appliances.
  • It costs far less to fail. If a new service does not attract the expected clientele, the R&D and deployment costs of that service will be far less with NFV than in the traditional model.
  • It will be significantly faster to bring new services to market. Writing and testing new software is much faster than building and testing new hardware.
  • The costs of the new virtual network functions (VNFs) will be lower because the barrier to entry is far lower: it is now about developing software rather than building a hardware appliance. We already see evidence of this, with many new players entering the Network Equipment Provider (NEP) market (the suppliers of the VNFs), creating more competition and driving down prices.

It all sounds great. But, to be honest, there are serious questions to be answered –

  • Can the VNFs deliver the same level of service as the hardware appliances?
  • Can the Telco operators successfully transform their current operating models to support this new NFV paradigm?
  • Can a cloud meet the non-functional requirements (NFRs) of the VNFs?
  • Are the tools within the cloud fit for purpose for Telco grade workloads and services?
  • Are there enough standards to support the NFV movement?

All great questions that I will try to answer in future blogs. The European Telecommunications Standards Institute (ETSI), an independent, not-for-profit organisation that develops standards via consensus of its members, has been working on the answers to some of these questions. Others are being addressed by cloud vendors, like VMware.


Gary Hamilton is a Senior Cloud Management Solutions Architect at VMware and has worked in various IT industry roles since 1985, including support, services and solution architecture; spanning hardware, networking and software. Additionally, Gary is ITIL Service Manager certified and a published author. Before joining VMware, he worked for IBM for over 15 years, spending most of his time in the service management arena, with the last five years being fully immersed in cloud technology. He has designed cloud solutions across Europe, the Middle East and the US, and has led the implementation of first of a kind (FOAK) solutions. Follow Gary on Twitter @hamilgar.