
Tag Archives: IT Services

VMware User Environment Manager Deployed in 60 Minutes or Less

By Dale Carter

Following its acquisition of Immidio, announced in February 2015, VMware has now released VMware User Environment Manager (UEM). Over the last several weeks I have been doing some internal testing with UEM, looking at the different things the software can do and how it will help administrators manage users and improve the user experience.

After the acquisition was complete I kept hearing internal conversations about just how easy UEM is to deploy and get up and running, as there is no extra infrastructure needed. All that is required to configure UEM is:

  • A couple of file shares
  • Configuration of group policy objects (GPOs) on the User organizational unit (OU)
  • Installation of UEM agent and manager software

Unlike a lot of other management software, VMware UEM only requires that the software is installed on one or more administrator desktops. There is no management server component other than network file shares and configuration of a few GPOs.
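Since the only server-side dependency is a pair of file shares, a quick pre-flight check can confirm they are reachable before rolling out the agent. This is a minimal sketch; the share paths are illustrative assumptions, not UEM defaults:

```python
import os

# Hypothetical share locations -- substitute your own UNC paths.
UEM_CONFIG_SHARE = r"\\fileserver\UEMConfig"    # central configuration share
UEM_PROFILE_SHARE = r"\\fileserver\UEMProfiles"  # per-user profile archive share

def shares_reachable(paths):
    """Return a dict mapping each share path to True if it can be listed."""
    results = {}
    for path in paths:
        try:
            os.listdir(path)
            results[path] = True
        except OSError:
            results[path] = False
    return results

if __name__ == "__main__":
    for share, ok in shares_reachable([UEM_CONFIG_SHARE, UEM_PROFILE_SHARE]).items():
        print(f"{share}: {'OK' if ok else 'NOT REACHABLE'}")
```

Running this from an administrator desktop before installing the agent catches permission or DNS problems early.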

Given the simplicity of the installation, I decided to document how easy it is to get an enterprise-ready solution deployed in less than 60 minutes. This is a basic deployment for 50 linked-clone virtual machines, but you'll see just how easy UEM is to deploy and configure. For an enterprise with many sites, decisions need to be made about configuring the network shares and where to place them on the network. But most of the work, as you will see, can easily be accomplished in 60 minutes or less. Read the VMware User Environment Manager guide below:

DCarter UEM 5

 


Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years of experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy

Managing VMware NSX Edge and Manager Certificates

By Spas Kaloferov

di·ver·si·ty

“Diversity” was the first word that came to my mind when I joined VMware. I noticed the wide variety of methods and processes used to replace certificates on the different VMware appliance products. For example, with VMware vRealize™ Orchestrator™, users must follow a manual process to replace the certificate; with VMware vRealize™ Automation™, administrators have a graphical user interface (GUI) option; and with VMware NSX Manager™ there is yet another, completely different GUI option to request and change the product's certificate.

 

Figure 1. SSL Certificates tab on the VMware NSX Manager™


This variety of certificate replacement methods and techniques is understandable, as these VMware products are the result of different acquisitions. Although the products are great in their own unique ways, the lack of a common, smooth and user-friendly certificate replacement methodology has always filled administrators and consultants with anxiety.

This anxiety often leads to certificate configuration issues among the majority of VMware family members, partners and end users. As a member of this family—and also of the majority—I recently felt this anxiety when I had to replace my VMware NSX Manager and NSX Edge™ certificates.

pas·sion

I must say that up to the point where I had to replace these certificates, I had pretty awesome experiences installing and configuring VMware NSX Manager, and had even developed advanced services like network load balancing. But I hit a minor roadblock with the certificates, and my passion to kick down any roadblock until it turns to dust wasn't going to leave me alone.

ex·e·cu·tion

I got in touch with some of my awesome colleagues and NSX experts to get me back on the good experience track of NSX. As expected, they did (not that I ever doubted them). Soon I was exploring the advanced VMware NSX Manager capabilities at full power, such as SSL VPN-Plus, where I again had to configure a certificate for my perimeter gateway edge device.

Figure 2. Server Settings tab of the SSL VPN-Plus setting on the VMware NSX Edge™


This time I wasn’t anxious because I now had the certificate replacement process under control.

cus·to·mer

As our customers are core to our mission, we want to empower them by freeing them from certificate replacement challenges so they can spend their time and energy on more pressing technological issues. To help empower other passionate enthusiasts, and help keep them on the good experience track of NSX, I’ve decided to describe the certificate replacement processes I’ve been using and share them in a blog post to make them available to everyone.

com·mu·ni·ty

We are all connected. We approach each other with open minds and humble hearts. We serve by dedicating our time, talent, and energy – creating a thriving community together. Please visit Managing NSX Edge and Manager Certificates to learn more about the certificate replacement process.


Spas Kaloferov is an acting Solutions Architect and member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

Using Super Metrics to Populate Widgets in VMware vRealize Operations Manager

By Jeremy Wheeler

When setting up dashboards in VMware vRealize™ Operations Manager™, I've found a lot of customers are trying to locate specific metrics, such as how much memory is available to a cluster after honoring N+1 and 80 percent maximum memory utilization per host. These kinds of metrics can be captured with a “super metric,” but in many cases you need to edit the XML file(s) correlated to the widget before you can present the super metric to the GUI widget.
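As a rough illustration of the arithmetic behind such a super metric (the real formula would be built in the super metric editor, not in Python), the headroom calculation might look like this, with all parameter names being my own:

```python
def cluster_memory_headroom_gb(num_hosts, host_memory_gb, consumed_gb,
                               max_util=0.80, ha_spare_hosts=1):
    """Memory still available to a cluster after reserving N+1 host
    capacity and capping usable memory at max_util per host."""
    # Usable capacity excludes the HA spare host(s) and the 20% buffer.
    usable = (num_hosts - ha_spare_hosts) * host_memory_gb * max_util
    return usable - consumed_gb

# Example: 8 hosts x 256 GB each, 1,000 GB currently consumed.
# usable = 7 * 256 * 0.80 = 1433.6 GB, so headroom = 433.6 GB
print(cluster_memory_headroom_gb(8, 256, 1000))
```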

In the previous version, VMware vCenter™ Operations Manager™, XML files were used heavily whenever a specific widget interaction was needed. With VMware vRealize Operations Manager, the process of injecting super metrics into an XML file has changed. This blog covers the specific steps needed to populate a widget with your super metric. View the document here: vRealize Operations Management Supermetrics and XML Editing.

For more information, be sure to check out the following VMware Education courses:

 

vRealize with Operations Management Supermetrics_Jeremy Wheeler


Jeremy Wheeler is an experienced senior consultant and architect for VMware's Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, Perl, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware's Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October of 2013.

vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 2

By Jonathan McDonald

In Part 1 we discussed the different deployment modes for vCenter and Enhanced Linked Mode. In Part 2 we finish the discussion by addressing the different platforms, high availability, and recommended deployment configurations for vCenter.

Mixed Platforms

Prior to vSphere 6.0, there was no interoperability between vCenter for Windows and the vCenter Server Linux Appliance. After a platform was chosen, a full reinstall would be required to change to the other platform. The vCenter Appliance was also limited in features and functionality.

With vSphere 6.0, the two platforms are functionally the same, and all features are available in either deployment mode. With Enhanced Linked Mode, both versions of vCenter are interchangeable, which allows you to mix vCenter for Windows and vCenter Server Appliance configurations.

The following is an example of a mixed platform environment:

JMcDonald pt 2 (1)

This mixed platform environment provides flexibility that has never been possible with the vCenter Platform.

As with any environment, the way it is configured is based on the size of the environment (including expected growth) and the need for high availability. These factors will generally dictate the best configuration for the Platform Services Controller (PSC).

High Availability

Providing high availability protection to the Platform Services Controller adds an additional level of overhead to the configuration. When using an embedded Platform Services Controller, protection is provided in the same way that vCenter is protected, as it is all a part of the same system.

Availability of vCenter is critical due to the number of solutions requiring continuous connectivity, as well as to ensure the environment can be managed at all times. Whether it is a standalone vCenter Server, or embedded with the Platform Services Controller, it should run in a highly available configuration to avoid extended periods of downtime.

Several methods can be used to provide higher availability for the vCenter Server system. The decision depends on whether maximum downtime can be tolerated, failover automation is required, and if budget is available for software components.

The following table lists methods available for protecting the vCenter Server system and the vCenter Server Appliance when running in embedded mode.

Redundancy Method                                                       | Protects vCenter Server system? | Protects vCenter Server Appliance?
Automated protection using vSphere HA                                   | Yes                             | Yes
Manual configuration and manual failover (for example, a cold standby)  | Yes                             | Yes
Automated protection using Microsoft Clustering Services (MSCS)         | Yes                             | No

If high availability is required for an external Platform Services Controller, protection is provided by adding a secondary backup Platform Services Controller, and placing them both behind a load balancer.

The load balancer must support Multiple TCP Port Balancing, HTTPS Load Balancing, and Sticky Sessions. VMware has tested several load balancers, including F5 and NetScaler, but does not directly support these products. See the vendor documentation for configuration details for any load balancer used.

Here is an example of this configuration using a primary and a backup node.

JMcDonald pt 2 (2)

With vCenter 6.0, connectivity to the Platform Services Controller is stateful, and the load balancer is used only for its failover ability. Active-active connectivity to both nodes at the same time is therefore not recommended, or you risk corrupting the data between the nodes.

Note: Although it is possible to have more than one backup node, it is normally a waste of resources and adds a level of complexity to the configuration for little gain. Unless there is an expectation that more than a single node could fail at the same time, there is very little benefit to configuring a tertiary backup node.

Scalability Limitations

Before deciding on a configuration for vCenter, review the following scalability limits for the different configurations. These can have an impact on the final design.

Scalability                                                              | Maximum
Number of Platform Services Controllers per domain                       | 8
Maximum PSCs per vSphere site, behind a single load balancer             | 4
Maximum objects within a vSphere domain (users, groups, solution users)  | 1,000,000
Maximum number of VMware solutions connected to a single PSC             | 4
Maximum number of VMware products/solutions per vSphere domain           | 10
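These maximums can be encoded as a simple pre-design sanity check. This is only a sketch; the limit names below are ad-hoc labels I chose for illustration, not API fields:

```python
# Published vSphere 6.0 scalability maximums from the table above.
LIMITS = {
    "pscs_per_domain": 8,
    "pscs_per_site_behind_lb": 4,
    "objects_per_domain": 1_000_000,
    "solutions_per_psc": 4,
    "solutions_per_domain": 10,
}

def check_topology(plan):
    """Return the list of limit names the planned topology exceeds
    (empty if the plan fits within every maximum)."""
    return [name for name, maximum in LIMITS.items()
            if plan.get(name, 0) > maximum]

# Example: 9 PSCs in one domain exceeds the per-domain maximum of 8.
print(check_topology({"pscs_per_domain": 9, "solutions_per_domain": 10}))
```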

Deployment Recommendations

Now that you understand the basic configuration details for vCenter and the Platform Services Controller, you can put it all together in an architecture design. The choice of a deployment architecture can be a complex task depending on the size of the environment.

The following are some recommendations for deployment. Note that VMware recommends virtualizing all vCenter components, because you gain the benefits of vSphere features such as VMware HA. These recommendations assume virtualized systems; physical systems need to be protected appropriately.

  • For sites that will not use Enhanced Linked Mode, use an embedded Platform Services Controller.
    • This provides simplicity in the environment, including a single pane-of-glass view of all servers while at the same time reducing the administrative overhead of configuring the environment for availability.
    • High availability is provided by VMware HA. The failure domain is limited to a single vCenter Server, as there is no dependency on external component connectivity to the Platform Services Controller.
  • For sites that will use Enhanced Linked Mode, use external Platform Services Controllers.
    • This configuration uses external Platform Services controllers and load balancers (recommended for high availability). The number of controllers depends on the size of the environment:
      • If there are two to four VMware solutions – You will only need a single Platform Services Controller if the configuration is not designed for high availability; two Platform Services Controllers will be required for high availability behind a single load balancer.
      • If there are four to eight VMware solutions – Two Platform Services Controllers must be linked together if the configuration is not designed for high availability; four will be required for a high-availability configuration behind two load balancers (two behind each load balancer).
      • If there are eight to ten VMware solutions – Three Platform Services Controllers should be linked together for a high-availability configuration; and six will be required for high availability configured behind three load balancers (two behind each load balancer).
    • High availability is provided by having multiple Platform Services Controllers and a load balancer to provide failure protection. In addition to this, all components are still protected by VMware HA. This will limit the failure implications of having a single Platform Services Controller, assuming they are running on different ESXi hosts.
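The sizing rules above can be sketched as a small helper. Note that the source ranges overlap at their boundaries (four and eight solutions), so this sketch treats each upper bound as inclusive; adjust for your own reading of the guidance:

```python
def recommended_psc_count(num_solutions, high_availability):
    """Platform Services Controller count per the sizing guidance above.
    With HA, PSCs are deployed in pairs behind load balancers."""
    if not 2 <= num_solutions <= 10:
        raise ValueError("guidance covers 2 to 10 VMware solutions")
    if num_solutions <= 4:       # two to four solutions
        linked = 1
    elif num_solutions <= 8:     # four to eight solutions
        linked = 2
    else:                        # eight to ten solutions
        linked = 3
    # HA doubles the count: one primary plus one backup behind each LB.
    return 2 * linked if high_availability else linked

# Example: six solutions with HA -> 4 PSCs behind two load balancers.
print(recommended_psc_count(6, high_availability=True))
```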

With these deployment recommendations hopefully the process of choosing a design for vCenter and the Platform Services Controller will be dramatically simplified.

This concludes this blog series. I hope this information has been useful and that it demystifies the new vCenter architecture.

 


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1

By Jonathan McDonald

As a member of VMware Global Technology and Professional Services, I get the privilege of working with products prior to their release. This not only familiarizes me with new changes, but also allows me to question—and figure out—how a new product will change the architecture in a datacenter.

Recently, I have been working on exactly that with vCenter 6.0 because of all the upcoming changes in the new release. One of my favorite things about vSphere 6.0 is the simplification of vCenter and associated services. Previously, each individual major service (vCenter, Single Sign-On, Inventory Service, the vSphere Web Client, Auto Deploy, etc.) was installed individually. This added complexity and uncertainty in determining the best way to architect the environment.

With the release of vSphere 6.0, vCenter Server installation and configuration has been dramatically simplified. The installation of vCenter now consists of only two components that provide all services for the virtual datacenter:

  • Platform Services Controller – This provides infrastructure services for the datacenter. The Platform Services Controller contains these services:
    • vCenter Single Sign-On
    • License Service
    • Lookup Service
    • VMware Directory Service
    • VMware Certificate Authority
  • vCenter Services – The vCenter Server group of services provides the remainder of the vCenter Server functionality, which includes:
    • vCenter Server
    • vSphere Web Client
    • vCenter Inventory Service
    • vSphere Auto Deploy
    • vSphere ESXi Dump Collector
    • vSphere Syslog Collector (Microsoft Windows)/VMware Syslog Service (Appliance)

So, when deploying vSphere 6.0 you need to understand the implications of these changes to properly architect the environment, whether it is a fresh installation, or an upgrade. This is a dramatic change from previous releases, and one that is going to be a source of many discussions.

To help prevent confusion, my colleagues in VMware Global Support, VMware Engineering, and I have developed guidance on supported architectures and deployment modes. This two-part blog series will discuss how to properly architect and deploy vCenter 6.0.

vCenter Deployment Modes

There are two basic architectures that can be used when deploying vSphere 6.0:

  • vCenter Server with an Embedded Platform Services Controller – This mode installs all services on the same virtual machine or physical server as vCenter Server. The configuration looks like this:

JMcDonald 1

This is ideal for small environments, or if simplicity and reduced resource utilization are key factors for the environment.

  • vCenter Server with an External Platform Services Controller – This mode installs the platform services on a system that is separate from where vCenter services are installed. Installing the platform services is a prerequisite for installing vCenter. The configuration looks as follows:

JMcDonald 2

 

This is ideal for larger environments, where there are multiple vCenter servers, but you want a single pane-of-glass for the site.

Choosing your architecture is critical, because once the model is chosen, it is difficult to change, and configuration limits could inhibit the scalability of the environment.

Enhanced Linked Mode

As a result of these architectural changes, Platform Services Controllers can be linked together. This enables a single pane-of-glass view of any vCenter server that has been configured to use the Platform Services Controller domain. This feature is called Enhanced Linked Mode and is a replacement for Linked Mode, which was a construct that could only be used with vCenter for Windows. The recommended configuration when using Enhanced Linked Mode is to use an external platform services controller.

Note: Although using embedded Platform Services Controllers and enabling Enhanced Linked Mode can technically be done, it is not a recommended configuration. See List of Recommended topologies for vSphere 6.0 (2108548) for further details.

The following are some recommended options for how—and how not—to configure Enhanced Linked Mode.

  • Enhanced Linked Mode with an External Platform Services Controller with No High Availability (Recommended)

In this case the Platform Services Controller is configured on a separate virtual machine, and then the vCenter servers are joined to that domain, providing the Enhanced Linked Mode functionality. The configuration would look this way:

JMcDonald 3

 

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources consumed by the combined services
  • More vCenter instances are allowed
  • Single pane-of-glass management of the environment

The drawbacks include:

  • Network connectivity loss between vCenter and the Platform Service Controller can cause outages of services
  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage
  • Outage on the Platform Services Controller will cause an outage for all vCenter servers connected to it. High availability is not included in this design.

  • Enhanced Linked Mode with an External Platform Services Controller with High Availability (Recommended)

In this case the Platform Services Controllers are configured on separate virtual machines and configured behind a load balancer; this provides high availability to the configuration. The vCenter servers are then joined to that domain using the shared Load Balancer IP address, which provides the Enhanced Linked Mode functionality, but is resilient to failures. This configuration looks like the following:

JMcDonald 4

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources are consumed by the combined services
  • More vCenter instances are allowed
  • The Platform Services Controller configuration is highly available

The drawbacks include:

  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage

  • Enhanced Linked Mode with Embedded Platform Services Controllers (Not Recommended)

In this case vCenter is installed as an embedded configuration on the first server. Subsequent installations are configured in embedded mode, but joined to an existing Single Sign-On domain.

Linking embedded Platform Services Controllers is possible, but is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

The configuration looks like this:

JMcDonald 5

 

  • Combination Deployments (Not Recommended)

In this case there is a combination of embedded and external Platform Services Controller architectures.

Linking an embedded Platform Services Controller and an external Platform Services Controller is possible, but again, this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of one such scenario:

JMcDonald 6

  • Enhanced Linked Mode Using Only an Embedded Platform Services Controller (Not Recommended)

In this case there is an embedded Platform Services Controller with vCenter Server linked to an external standalone vCenter Server.

Linking a second vCenter Server to an existing embedded vCenter Server and Platform Services Controller is possible, but this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of this scenario:

JMcDonald 7

 

Stay tuned for Part 2 of this blog post where we will discuss the different platforms for vCenter, high availability and different deployment recommendations.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

DevOps and Performance Management


By Michael Francis

Continuing on from Ahmed’s recent blog on DevOps, I thought I would share an experience I had with a customer regarding performance management for development teams.

Background

I was working with an organization that is essentially an independent software vendor (ISV) in a specific vertical; their business is writing software for the gambling sector and, in some cases, hosting that software to deliver services to their partners. It is a very large revenue stream for them, and their development expertise and software functionality are their differentiators.

Due to historical stability issues and a lack of trust between the application development teams and the infrastructure team, the organization had brought in a new VP of Infrastructure and an Infrastructure Chief Architect a number of years earlier. They focused on changing the process and culture, and also on aligning the people. They took our technology and implemented an architecture that aligned with our best practices, with the primary aim of delivering a stable, predictable platform.

This transformation of people, process and technology provided a stable infrastructure platform that soon improved the infrastructure team's trust and credibility with the application development teams for their test and development requirements.

Challenges

The applications team in this organization, as you would expect, carries significant influence. Even though the applications team had come to trust virtual infrastructure for test and development, they still had reservations about a private cloud model for production. Their applications placed significant demands on infrastructure and needed guaranteed transaction-per-second rates committed across multiple databases; any latency could cause significant processing issues and, therefore, loss of revenue. Visibility across the stack was a concern.

The applications team responsible for this critical in-house developed application designed it to instrument its own performance by writing out flat files on each server containing application-specific information, such as transaction commit times.

Beyond complete stack visibility, the applications team was challenged with how to monitor this custom distributed application's performance data from a central point. The team also wanted some means of understanding what normal levels for the performance data looked like, as well as a way to gain insight into the stack to see where any abnormality originated.

Due to the trust that had developed, they engaged the infrastructure team to determine whether it had any capability to support their performance monitoring needs.

Solution

The infrastructure team was just beginning to review their needs for performance and capacity management tools for their Private Cloud. The team had implemented a proof-of-concept of vCenter Operations Manager and found its visualizations useful; so they asked us to work with the applications team to determine whether we could digest this custom performance information.

We started by educating them on the concept of a dynamic learning monitoring system. It had to allow hard thresholds to be set, but also, more importantly, determine the spectrum of normal behavior for an application based upon data-pattern prediction algorithms, both for the application as a whole and for each of its individual components.

We discussed the benefits of a data analytics system that could take a stream of data and, irrespective of the data source, create a monitored object from it. The system had to be able to assign the data elements in the stream to metrics, start determining normality, provide a comparison to any hard thresholds, and provide the visualization.

The applications team was keen to investigate and so our proof-of-concept expanded to include the custom performance data from this in-house developed application.

The Outcome

The screenshot below shows VMware vCenter Operations Manager. It shows the Resource Type screen, which allows us to define a custom Resource Type representing the application-specific metrics and the application itself.

MFrancis1

To get the data into vCenter Operations Manager we simply wrote a script that opened the flat file on each of the servers participating in the application, read the file, and then posted the information into vCenter Operations Manager using its HTTP POST adapter. This adapter can post data from any endpoint that needs to be monitored, which makes vCenter Operations Manager a very flexible tool.
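A minimal sketch of such a script follows. The payload layout and adapter URL below are placeholders for illustration only; the HTTP POST adapter's actual wire format is defined in its own documentation:

```python
import urllib.request

def parse_flat_file(text):
    """Parse 'metric_name,value' lines written by the application
    into (name, float) pairs."""
    metrics = []
    for line in text.strip().splitlines():
        name, value = line.split(",", 1)
        metrics.append((name.strip(), float(value)))
    return metrics

def build_payload(resource_name, metrics):
    """Flatten parsed metrics into one posting body.
    NOTE: 'resource|metric|value' is an assumed layout, not the
    adapter's documented format."""
    return "\n".join(f"{resource_name}|{name}|{value}"
                     for name, value in metrics)

def post_metrics(url, payload):
    """POST the payload to the HTTP POST adapter endpoint."""
    req = urllib.request.Request(url, data=payload.encode("utf-8"),
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    sample = "commit_time_ms,42.5\nqueue_depth,3"
    payload = build_payload("app-server-01", parse_flat_file(sample))
    # post_metrics("https://vcops.example.com/HttpPostAdapter", payload)
    print(payload)
```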

In this instance we posted into vCenter Operations Manager a combination of application-specific counters and Windows Management Instrumentation (WMI) counters from the Windows operating system platform the apps run on, as shown in the following screenshot.

MFrancis2

You can see the Resource Kind is something I called vbs_vcops_httpost, which is not a ‘standard’ monitored object in vCenter Operations Manager; the product created it based on the data stream I was pumping into it. I just needed to tell vCenter Operations Manager which metrics it should monitor from the data stream, which you can see in the following screenshot.

 MFrancis3

For each attribute (metric) we can configure whether hard thresholds are used and whether vCenter Operations Manager should use that metric as an indicator of normality. We refer to the normality as dynamic thresholds.

Once we have identified which metrics we want to track, we can create spectrums of normality for them and have them affect the health of the application, which allows us to create visualizations. The screenshot below shows an example of a simple visualization: it shows the applications team a round-trip-time metric plotted over time, alongside a standard Windows WMI performance counter for CPU.
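The analytics inside vCenter Operations Manager are the product's own; as a simplified stand-in for illustration, a "spectrum of normality" can be pictured as a band of k standard deviations around the mean of recent samples:

```python
from statistics import mean, stdev

def dynamic_band(samples, k=2.0):
    """Return the (lower, upper) bounds of 'normal' behavior for a
    metric, modeled here as mean +/- k standard deviations."""
    mu, sigma = mean(samples), stdev(samples)
    return mu - k * sigma, mu + k * sigma

def is_anomalous(value, samples, k=2.0):
    """Flag a new observation that falls outside the normal band."""
    lower, upper = dynamic_band(samples, k)
    return value < lower or value > upper

# Example: round-trip times hovering around 10 ms.
history = [9.0, 10.0, 11.0, 10.0, 9.0, 11.0, 10.0]
print(is_anomalous(15.0, history))  # well outside the band
```

The real product learns these bands continuously and per time-of-day; this sketch only conveys the idea of a dynamic rather than hard threshold.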

MFrancis4

By introducing the capability to monitor custom in-house developed applications using a combination of application-specific custom metrics and standard guest operating system and platform metrics, the DevOps team now has visibility into the health of the whole stack. This enables them to see the impact of code changes on different layers of the stack, comparing the before and after against the spectrum of normality for key metrics.

From a cultural perspective, this capability brought the application development and infrastructure teams onto the same page; both gain an appreciation of any performance issue through a common view.

In my team we have developed services that enable our customers to adopt and mature a performance and capacity management capability for the hybrid cloud, which, in my view, is one of the most challenging considerations for hybrid cloud adoption.

 


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

Application Delivery Strategy: A Key Piece in the VDI Design Puzzle

By Michael Bradley and Hans Bader

Let’s face it: applications are the bane of a desktop administrator’s existence. It seems there is always something that makes the installation and management of an application difficult and challenging. Whether it’s a long list of confusing and conflicting requirements or a series of software and hardware incompatibilities, management of applications is one of the more difficult aspects of an administrator’s job.

It's not surprising that application delivery and management is one of the key areas that often gets overlooked when planning and deploying a virtual desktop infrastructure (VDI) such as VMware Horizon View 6. This often-overlooked aspect is a common pitfall hindering many VDI implementations. A great deal of work and effort goes into ensuring that desktop images are optimized, the correct corporate security settings are applied to the operating system, the underlying architecture is built to scale appropriately, and end-user performance is acceptable. These are all important goals that require attention, but the application delivery strategy is frequently missed, forgotten, or even ignored.

Before we go further, let’s take a moment to define application delivery. A long time ago, in a cube farm far, far away, application delivery was all about getting applications installed on the desktop. With the emergence of new technologies, however, the definition has evolved: application delivery is no longer solely about installation. In today’s enterprise it is about providing end users with access to the applications they need, and that access can come in many different forms. Some of the most common examples are:

  • Installing applications directly on the virtual desktop, either manually or by using software such as Microsoft SCCM.
  • Application virtualization using VMware ThinApp or Microsoft’s App-V.
  • Delivering the applications to the desktop using technologies such as VMware App Volumes or Liquidware Labs’ FlexApp.
  • Application presentation using RDS Hosted Applications in VMware Horizon 6.

All of these are application delivery mechanisms. Each one solves a different application deployment problem, and each can be used alone or in conjunction with a complementary one—for example, using App Volumes to deliver ThinApp packages.

An application delivery strategy should be an integral part of your VDI design; it is just as crucial as the physical infrastructure (storage, networking, processing) and the virtual infrastructure. It is perfectly all right to have a top-notch VDI, but if you can’t deliver new and existing applications to your end users quickly and efficiently, you might as well be spinning your bits and bytes, and the VDI project becomes a bottleneck. The prime factor to remember about VDI is that it forces you to change the way you operate. Features such as VMware’s linked clone technology change the application delivery paradigm that many desktop administrators grew accustomed to in the physical PC world. Let’s face it: how effective is it to push and install applications to linked-clone desktops every time a desktop refreshes or recomposes?

If an application delivery strategy is so important, why is it so often missed or ignored? There are three primary reasons:

  • First, it is simply forgotten, or the VDI designers simply don’t realize they need to consider it as part of the design.
  • Second, application delivery is often considered too big of a challenge, and no one wants to tackle it when they’re already facing tight deadlines on a VDI project.
  • Third, and probably most commonly heard in enterprise environments, there is already an application delivery mechanism in place for physical PCs, so it is assumed the existing mechanism will suffice.

Once the need for an application delivery strategy is established, you need to determine what goes into one. First, consider all tiers of your applications: tier one, tier two, through tier n. Then identify which applications are most common. Determine which applications need to be provided to all end users versus which go to just a small subset. That will help determine what could be installed in the base image, as opposed to being delivered by some other mechanism. For instance, Microsoft Office may be included in the base image for all users, but a limited-use accounting package may only be required by the accounting team, and therefore delivered another way.
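One way to make the base-image-versus-delivered decision concrete is to classify applications by what fraction of the user population needs them. The threshold and inventory below are illustrative assumptions, not fixed guidance.

```python
def classify_apps(app_users, total_users, common_threshold=0.8):
    """Split apps into base-image candidates (used by most users)
    and candidates for another delivery mechanism (used by a subset)."""
    base_image, delivered = [], []
    for app, users in app_users.items():
        target = base_image if users / total_users >= common_threshold else delivered
        target.append(app)
    return base_image, delivered

# Hypothetical inventory for a 1,000-seat deployment.
inventory = {"Microsoft Office": 1000, "Web Browser": 1000, "Accounting Package": 40}
base, other = classify_apps(inventory, total_users=1000)
print(base)   # ['Microsoft Office', 'Web Browser']
print(other)  # ['Accounting Package']
```

An application assessment tool would feed real usage counts into a classification like this instead of the hand-entered numbers shown here.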

Next, consider the delivery mechanism for your virtual desktops. Are they all full virtual machine desktops, or linked-clone desktops? Determining which type you are using will play a major part in what your application delivery strategy looks like. If you are using all full virtual machine desktops, which deserves serious consideration, then you could effectively continue to use the existing application delivery strategy you use for physical PCs. But using linked clones could cause your existing application delivery strategy to become a bottleneck.

Then, you need to consider what technology will work best for you and your applications. Will application virtualization such as ThinApp be a suitable mechanism? Or is RDS Hosted Applications in Horizon 6 a more viable option? You may even find the best option is a combination of technologies. Take time to evaluate the pros and cons of each option to ensure the needs of your end users are met efficiently. One question you should ask is, “Do my end users have the ability to install their own applications?” If the answer is “yes,” you need to either change corporate policy or select a technology that supports user-installed applications. Keep in mind that an application delivery strategy can vary for different types of users.
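The evaluation can be organized as a simple decision table. The rules below are purely illustrative, since the right mapping depends on your applications, users, and policies; they show one way to structure the decision, not a recommendation.

```python
def suggest_mechanism(app):
    """Map illustrative application attributes to a candidate delivery
    mechanism. Rules are an example of structuring the decision only."""
    if app.get("user_installed"):
        return "technology supporting user-installed applications"
    if app.get("conflicts_with_other_apps"):
        return "application virtualization (e.g., ThinApp)"
    if app.get("hosted_centrally"):
        return "RDS Hosted Applications"
    if app.get("common_to_all_users"):
        return "install in base image"
    return "layered delivery (e.g., App Volumes)"

print(suggest_mechanism({"common_to_all_users": True}))        # install in base image
print(suggest_mechanism({"conflicts_with_other_apps": True}))  # application virtualization (e.g., ThinApp)
```

Writing the strategy down in this form, even informally, makes it easy to review per user population, since the same application may warrant different mechanisms for different groups.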

Finally, you should consider how to handle one-off situations. There will always be one user, or a small group of users, who require a specialized application that falls outside the realm of your standard application delivery mechanisms. Such instances are rare but inevitable; determining in advance how to address them will help you, as a desktop administrator, respond quickly to the needs of your end users.

A good VDI implementation is only successful if end users can perform their assigned tasks, and nine times out of ten that requires access to applications. Having a strategy in place to deliver the right applications to the right end users is vital to the success of any VDI implementation.


Michael Bradley

Michael Bradley, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for almost 20 years. He holds VCP5-DCV, VCAP4-DCD, VCP4-DT, VCP5-DT, and VCAP-DTD certifications, and is also an AirWatch Enterprise Mobility Associate.

 

Hans Bader

Hans Bader is a Consulting Architect in VMware EUC. Hans has over 20 years of IT experience and joined VMware in 2009. With a focus on helping organizations become operationally ready, he works with customers to avoid common mistakes. He is a strong advocate for proactive load testing of environments before allowing users access. Hans has won numerous consulting awards within VMware.

vCloud Automation Center Disaster Recovery

By Gary Blake

Prior to the release of vCloud Automation Center (vCAC) 5.2, vCAC had no awareness of virtual machines protected by vCenter Site Recovery Manager. With the introduction of vCAC 5.2, VMware now provides enhanced integration so vCAC can correctly discover the relationship between the primary and recovery virtual machines.

These enhancements may appear to be minor modifications, but they are fundamental to ensuring vCenter Site Recovery Manager (SRM) can be successfully implemented to deliver disaster recovery of virtual machines managed by vCAC.

GBlake 1

 

So What’s Changed?

When a virtual machine is protected by SRM, a Managed Object Reference ID (MoRef ID) is created against the virtual machine record in the vCenter Server database.

Prior to SRM v5.5, a single virtual machine property, “ManagedBy:SRM,placeholderVM,” was created on the placeholder virtual machine object in the recovery site vCenter Server database. vCAC did not inspect this value, so it would attempt to add a duplicate entry into its own database. With the introduction of vCAC 5.2, when a data collection is run, vCAC ignores virtual machines with this value set, avoiding the duplicate entry.

In addition, SRM v5.5 introduced a second managed-by property value, “ManagedBy:SRM,testVM,” which is placed on the virtual machine record in the vCenter Server database. When a test recovery is performed and data collection is run at the recovery site, vCAC inspects this value and ignores virtual machines with it set, again avoiding a duplicate entry in the vCAC database.
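The filtering behavior described above can be sketched as follows. This is an illustration of the logic only, not vCAC's actual code, and the dictionary shape used for the managed-by property is a simplification.

```python
# Simplified illustration of how a data collection pass can skip
# SRM placeholder and test VMs based on the managed-by property.
SRM_MANAGED_TYPES = {"placeholderVM", "testVM"}

def vms_to_import(vcenter_vms):
    """Return only VMs that are not SRM placeholder/test objects,
    avoiding duplicate entries for protected machines."""
    result = []
    for vm in vcenter_vms:
        managed_by = vm.get("managedBy", {})
        if managed_by.get("extensionKey") == "SRM" and \
           managed_by.get("type") in SRM_MANAGED_TYPES:
            continue  # skip: this is SRM's placeholder or test copy
        result.append(vm)
    return result

inventory = [
    {"name": "app01", "managedBy": {}},
    {"name": "app01 (placeholder)",
     "managedBy": {"extensionKey": "SRM", "type": "placeholderVM"}},
]
print([vm["name"] for vm in vms_to_import(inventory)])  # ['app01']
```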

With the changes highlighted above, SRM v5.5 and later and vCAC 5.2 and later can now be implemented in tandem with full awareness of each other. However, one limitation remains when moving a virtual machine through recovery or reprotect: vCAC does not properly recognize the move. To successfully perform these machine operations and continue managing the machine lifecycle, you must use the Change Reservation operation, which is still a manual task.

Introducing the CloudClient

While investigating the enhancements between SRM and vCAC described above, and on uncovering the need for a manual change of reservation, I spent some time with our Cloud Solution Engineering team discussing how to automate this step. They were already developing a tool called CloudClient, essentially a wrapper around our application programming interfaces that allows simple command-line-driven steps, and suggested it could be extended to support this use case.

Conclusion

To achieve fully functioning integration between vCloud Automation Center (5.2 or later) and vCenter Site Recovery Manager, adhere to the following design decisions:

  • Configure vCloud Automation Center with endpoints for both the protected and recovery sites.
  • Perform a manual or automated change reservation following a vCenter Site Recovery Manager planned migration or disaster recovery.

GBlake2

 

Frequently Asked Questions

Q. When I fail over my virtual machines from the protected site to the recovery site, what happens if I request the built-in vCAC machine operations?

A. Once you have performed a Planned Migration or a Disaster Recovery process, as long as you have changed the reservation within the vCAC Admin UI for the virtual machine, machine operations will be performed in the normal way on the recovered virtual machine.

Q. What happens if I do not perform the Change Reservation step on a virtual machine once I’ve completed a Planned Migration or Disaster Recovery process, and I then attempt to perform the built-in vCAC machine operations on the virtual machine?

A. It depends on which task you perform. Some operations are blocked by vCAC, and you will see an error message in the log such as “The method is disabled by ‘com.vmware.vcDR’”; other actions appear to be processed, but nothing happens. A few actions are processed regardless of the virtual machine failure scenario: Change Lease and Expiration Reminder.

Q. What happens if I perform a re-provision action on a virtual machine that is currently in a Planned Migration or Disaster Recovery state?

A. vCAC will re-provision the virtual machine in the normal manner, and the hostname and IP address (if assigned through vCAC) will be maintained. However, the SRM recovery plan will then fail if you attempt to re-protect the virtual machine back to the protected site, because the original managed object has been replaced. It is recommended that, for blueprints where SRM protection is a requirement, you disable the ‘Re-provision’ machine operation.


Gary Blake is a VMware Staff Solutions Architect & CTO Ambassador

App Volumes AppStacks vs. Writable Volumes

By Dale Carter, Senior Solutions Architect, End-User Computing

With the release of VMware App Volumes, I wanted to take the time to explain the difference between AppStacks and Writable Volumes, and how each needs to be designed for as you start to deploy App Volumes.

The graphic below shows the traditional way to manage your Windows desktop, as well as the way things have changed with App Volumes and the introduction of “Just-in-time” apps.

DCarter AppVolumes v Writable Volumes 1

 

So what are the differences between AppStacks and Writable Volumes?

AppStacks

An AppStack is a virtual disk that contains one or more applications that can be assigned to a user as a read-only disk. A user can have one or many AppStacks assigned to them depending on how the IT administrator manages the applications.

When designing for AppStacks, note that an AppStack is deployed in a one-to-many configuration: at any one time an AppStack could be connected to a single user or to hundreds.

DCarter AppVolumes v Writable Volumes 2

 

When designing storage for an AppStack, also note that App Volumes does not change the IOPS required by an application, but it does consolidate those IOPS onto a single virtual disk. So, like any other virtual desktop technology, it is critical to know your applications and their requirements; an application assessment is recommended before moving to a large-scale deployment. Lakeside Software and Liquidware Labs both publish software for application assessments.

For example, if you know that the applications being moved to an AppStack average 10 IOPS per user, and that the AppStack has 100 users connected to it, that AppStack will require 1,000 IOPS on average (IOPS per user x number of users); you can see why designing your storage correctly for AppStacks is key.
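The calculation in the example above can be sketched in a couple of lines; the figures are the example's own assumptions.

```python
def appstack_iops(avg_iops_per_user, concurrent_users):
    """Average IOPS an AppStack's backing VMDK must sustain:
    per-user IOPS consolidated across all connected users."""
    return avg_iops_per_user * concurrent_users

# Example from the text: 10 IOPS per user, 100 connected users.
print(appstack_iops(10, 100))  # 1000
```

Running this per AppStack against each datastore's rated IOPS is what motivates the copy-and-spread approach described next.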

In large-scale deployments it is recommended to create copies of AppStacks, place them across storage LUNs, and assign a subset of users to each copy for best performance.

DCarter AppVolumes v Writable Volumes 3

 

Writable Volumes

Like an AppStack, a Writable Volume is a virtual disk, but unlike AppStacks it is configured in a one-to-one configuration: each user has their own assigned Writable Volume.

DCarter AppVolumes v Writable Volumes 4

 

When an IT administrator assigns a Writable Volume to a user, the first thing to decide is what type of data the user will be able to store in the Writable Volume. There are three choices:

  • User Profile Data Only
  • User Installed Applications Only
  • Both Profile Data and User Installed Applications

It should be noted that App Volumes is not a profile management tool, but it can be used alongside any existing user environment management tool.

When designing for Writable Volumes, the storage requirements differ from those for AppStacks. Where an AppStack generates only read I/O, a Writable Volume generates both read and write I/O. The IOPS for a Writable Volume will also vary per user, depending on how each individual uses their data and on the type of data the IT administrator allows them to store in their Writable Volume.

IT administrators should monitor how their users access their Writable Volumes; this will help determine how many Writable Volumes can be placed on a single storage LUN.
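A first-order check of how many Writable Volumes a LUN can host can be derived from the monitoring data just described, combining read and write I/O per user. The numbers below are illustrative assumptions, not sizing guidance.

```python
def writable_volumes_per_lun(lun_iops_capacity, avg_user_read_iops, avg_user_write_iops):
    """Rough upper bound on Writable Volumes per LUN, based on measured
    per-user read and write IOPS (combined, since both I/O types apply)."""
    per_user = avg_user_read_iops + avg_user_write_iops
    return lun_iops_capacity // per_user

# Hypothetical: a LUN rated for 5,000 IOPS, users averaging 15 read + 10 write IOPS.
print(writable_volumes_per_lun(5000, 15, 10))  # 200
```

In practice you would leave headroom below this bound, since write-heavy users and peak periods push well above the averages used here.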

Hopefully this blog helps describe the differences between AppStacks and Writable Volumes, and the considerations that should be taken into account when designing for each.

I would like to thank Stephane Asselin for his input on this blog.


Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. Dale focuses on the end-user computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years of experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy

App Volumes AppStack Creation

By Dale Carter, Senior Solutions Architect, End-User Computing

VMware App Volumes provides just-in-time application delivery to virtualized desktop environments. With this real-time application delivery system, applications are delivered to virtual desktops through VMDK virtual disks without modifying the VM or the applications themselves. Applications can be scaled out with superior performance, at lower cost, and without compromising the end-user experience.

In this blog post I will show you how easy it is to create a VMware App Volumes AppStack, and how that AppStack can then be deployed to hundreds of users.

When App Volumes is configured with VMware Horizon View, an AppStack is a read-only VMDK file added to a user’s virtual machine; the App Volumes Agent then merges the two or more VMDK files so the Microsoft Windows operating system sees them as a single drive. Applications therefore look to the Windows OS as if they are natively installed, rather than residing on a separate disk.
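Conceptually, the agent presents a union of the base OS disk and the attached read-only AppStack layers. The toy sketch below illustrates that merged-namespace idea only; it is not the actual filter-driver implementation, and the conflict handling shown is arbitrary for illustration.

```python
def merged_view(base_disk, *appstacks):
    """Toy union of file namespaces: the base disk plus read-only
    AppStack layers, presented to the OS as one drive."""
    view = dict(base_disk)
    for layer in appstacks:
        for path, content in layer.items():
            view.setdefault(path, content)  # illustrative: base disk wins on conflicts
    return view

base = {r"C:\Windows\notepad.exe": "base disk"}
office = {r"C:\Program Files\Office\winword.exe": "AppStack"}
merged = merged_view(base, office)
print(sorted(merged))  # both paths appear on the single merged drive
```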

To create an App Volumes AppStack follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes
  3. Click Create AppStack.
    DCarter AppStack
  4. Give the AppStack a name. Choose the storage location and give it a description (optional). Then click Create.
    DCarter Create AppStack
  5. Choose to either Perform in the background or Wait for completion and click Create.
    DCarter Create
  6. vCenter will now create a new VMDK for the AppStack to use.
  7. Once vCenter finishes creating the VMDK, the AppStack will show up as Un-provisioned. Click the + sign.
    DCarter
  8. Click Provision.
    DCarter Provision
  9. Search for the desktop that will be used to install the software. Select the Desktop and click Provision.
    DCarter Provision AppStack
  10. Click Start Provisioning.
    DCarter Start Provisioning
  11. vCenter will now attach the VMDK to the desktop.
  12. Open the desktop that will be used for provisioning the new software. You will see the following message. DO NOT click OK yet; you will click OK only after the software has been installed.
    DCarter Provisioning Mode
  13. Install the software on the desktop. This can be a single application or a number of applications. If reboots are required between installs, that is OK; App Volumes will remember where you left off after each install.
  14. Once all of the software has been installed click OK.
    DCarter Install
  15. Click Yes to confirm and reboot.
    DCarter Reboot
  16. Click OK.
    DCarter 2
  17. The desktop will now reboot. After the reboot, log back in to the desktop.
  18. After you log in, click OK. This will reconfigure the VMDK on the desktop.
    DCarter Provisioning Successful
  19. You can now connect to the App Volumes Manager Web interface and see that the AppStack is ready to be assigned.
    DCarter App Volumes Manager

Once you have created the AppStack you can assign the AppStack to an Active Directory object. This could be a user, computer or user group.

To assign an AppStack to a user, computer or user group, follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes Dashboard
  3. Click the + sign by the AppStack you want to assign.
  4. Click Assign.
    DCarter Assign
  5. Search for the Active Directory object. Select the user, computer, OU, or user group to assign the AppStack to, and click Assign.
    DCarter Assign Dashboard
  6. Choose whether to assign the AppStack at the next login or immediately, and click Assign.
    DCarter Active Director
  7. The users will now have the AppStack assigned and will be able to launch the applications as they would any natively installed application.
    DCarter AppStack Assign

By following these simple steps you can quickly create an AppStack and easily deploy it to your users.


Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. Dale focuses on the end-user computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years of experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy