
Begin Your Journey to vRealize Operations Manager

By Brent Douglas

In early December, VMware launched an exciting new array of updates to its products. For some products, this update was a refinement of already widely used functionality and capabilities. For other products, the December release marked a new direction and new path forward. One such product is vRealize Operations Manager.

With VMware’s acquisition of Integrien’s patented real-time performance analytics solution in August 2010, VMware added a powerful tool to its arsenal of virtualization management solutions. This tool, vCenter Operations Manager, enabled customers to begin managing beyond “what my environment is doing now” and into “what my environment will be doing in 30 minutes—and beyond?” In essence, with vCenter Operations Manager, customers gained a tool that could predict―and ultimately prevent―the phone from ringing.

Since August 2010, vCenter Operations Manager has received bug fixes, regular updates, and new features and capabilities. Even so, the VMware product designers and engineers knew they could produce a new version of the product that captured and extended the capabilities of vCenter Operations Manager. On December 9, VMware released that tool—vRealize Operations Manager.

In many respects, vRealize Operations Manager is a new product from the ground up. Because of the differences between vCenter Operations Manager v5.x and vRealize Operations Manager v6.x, current users of vCenter Operations Manager cannot simply apply a v6.x update to existing environments. For customers with little historical data and default policies, the best course forward may be simply to install and begin using vRealize Operations Manager. For other customers, with deep historical data and advanced configurations and policies, the best path forward is likely a migration of existing data and configuration information from their vCenter Operations Manager v5.x instance.

A full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide. This guide also outlines many common vCenter Operations Manager scenarios and suggests migration paths to vRealize Operations Manager.

Important note: To migrate data and/or configuration information from an existing vCenter Operations Manager instance, the instance must be running v5.8.1 at a minimum, and preferably v5.8.3 or higher.

Question 1: Should any portion of my existing vCenter Operations Manager instance(s) be migrated?

VMware believes you are a candidate for a full migration (data and configuration information) if you can answer “yes” to any one of the following:

  • Have you operationalized capacity planning in vCenter Operations Manager 5.8.x?
    • Actively reclaiming waste
    • Reallocating resources
  • Have you operationalized vCenter Operations Manager for performance and health monitoring?
  • Do you act upon the performance alerts that are generated by vCenter Operations Manager?
  • Is any aspect of data in vCenter Operations Manager feeding another production system?
    • Raw metrics, alerts, reports, emails, etc.
  • Do you have a company policy to retain monitoring data?
    • Does your current vCenter Operations Manager instance fall into this category (e.g., it’s running in TEST)?

VMware believes you are a candidate for a configuration-only migration if you answer “yes” to any one of the following:

  • Are you happy with your current configuration?
    • Dashboards
    • Policies
    • Users
    • Super Metrics

— AND —

  • You do not need to save the data you have collected
    • Running in a test environment or proof-of-concept you have refined and find useful
    • Not really using the data yet

If you answered “no” to all of these questions, you should install and try vRealize Operations Manager today. You are ready to go with a fresh install, without migrating any existing data or configuration information.

Question 2: If some portion of an existing vCenter Operations Manager instance is to be migrated, who should perform the migration?

vRealize Operations Manager can migrate existing data and configuration information from a vCenter Operations Manager instance. However, complicating factors may require an in-depth look by a VMware services professional to ensure a successful migration. The following table outlines some of these complicating factors and suggests paths forward.

[Table: complicating factors and suggested migration paths (Consulting_blog_table_012815)]

That’s it! With a bit of upfront planning you can be well on your journey to vRealize Operations Manager! The information above addresses the “big hitters” for planning a migration to vRealize Operations Manager from vCenter Operations Manager. As mentioned, a full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide.

On a personal note, I am excited about vRealize Operations Manager. Although vCenter Operations Manager served VMware and its customers well for many years, it is time for something new and exciting. I encourage you to try vRealize Operations Manager today. This post represents information produced in collaboration with David Moore, VMware Professional Services, and Dave Overbeek, VMware Technical Marketing team. I thank them for their contributions and continued focus on VMware and its customers.


Brent Douglas is a VMware Cloud Technical Solutions Architect

DevOps and Performance Management


By Michael Francis

Continuing on from Ahmed’s recent blog on DevOps, I thought I would share an experience I had with a customer regarding performance management for development teams.

Background

I was working with an organization that is essentially an independent software vendor (ISV) in a specific vertical; their business is writing software for the gambling sector and, in some cases, hosting that software to deliver services to their partners. It is a very large revenue stream for them, and their development expertise and software functionality are their differentiation.

Due to historical stability issues and a lack of trust between the application development teams and the infrastructure team, the organization had brought in a new VP of Infrastructure and an Infrastructure Chief Architect a number of years earlier. They focused on changing the process and culture, and on aligning the people. They took our technology and implemented an architecture aligned with our best practices, with the primary aim of delivering a stable, predictable platform.

This transformation of people/process and technology provided a stable infrastructure platform that soon improved the trust and credibility of the infrastructure team with the applications development teams for their test and development requirements.

Challenges

The applications team in this organization, as you would expect, carries significant influence. Even though the applications team had come to trust virtual infrastructure for test and development, they still had reservations about a private cloud model for production. Their applications had significant demands on infrastructure and needed guaranteed transactions-per-second rates across multiple databases; any latency could cause significant processing issues and, therefore, loss of revenue. Visibility across the stack was a concern.

The applications team responsible for this critical in-house developed application designed the application to instrument its own performance by writing out flat files on each server, with application-specific information about transaction commit times and other application-specific performance data.

Regardless of complete stack visibility, the applications team responsible for this application was challenged with how to monitor this custom distributed application's performance data from a central point. The applications team also wanted some means of understanding normal performance levels, as well as a way to gain insight into the stack to see where any abnormality originated.

Due to the trust that had developed, they engaged the infrastructure team to determine whether it had any capability to support their performance monitoring needs.

Solution

The infrastructure team was just beginning to review their needs for performance and capacity management tools for their Private Cloud. The team had implemented a proof-of-concept of vCenter Operations Manager and found its visualizations useful; so they asked us to work with the applications team to determine whether we could digest this custom performance information.

We started by educating them on the concept of a dynamic learning monitoring system. It had to allow hard thresholds to be set but also, more importantly, determine the spectrum of normal behavior for an application based upon data-pattern prediction algorithms, both for the application as a whole and for each of its individual components.

We discussed the benefits of a data analytics system that could take a stream of data and, irrespective of the data source, create a monitored object from it. The data analytics system had to be able to assign the data elements in the stream to metrics, start determining normality, provide a comparison to any hard thresholds, and provide the visualization.

The applications team was keen to investigate and so our proof-of-concept expanded to include the custom performance data from this in-house developed application.

The Outcome

The screenshot below shows VMware vCenter Operations Manager. It shows the Resource Type screen, which allows us to define a custom Resource Type to represent the application-specific metrics and the application itself.

[Image: MFrancis1]

To get the data into vCenter Operations Manager, we simply wrote a script that opened the flat file on each of the servers participating in the application, read the file, and then posted the information into vCenter Operations Manager using its HTTP Post adapter. This adapter provides the ability to post data from any endpoint that needs to be monitored, which makes vCenter Operations Manager a very flexible tool.
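As a minimal sketch of what such a script can look like, here is a Python equivalent (our original was VBScript, judging by the resource kind name shown below). The servlet path, the resource-descriptor line, the metric-line layout, the credentials, and the flat-file format are all assumptions; check the HTTP Post adapter documentation for the exact format in your version.

import time
import requests

# Assumed endpoint of the vC Ops HTTP Post adapter; verify for your version.
VCOPS_URL = "https://vcops.example.com/HttpPostAdapter/OpenAPIServlet"
# First line describes the resource: name, adapter kind, resource kind (assumed layout).
RESOURCE_HEADER = "app-server-01,Http Post,vbs_vcops_httpost"

def read_flat_file(path):
    """Parse 'metric_name,value' lines written out by the application."""
    with open(path) as f:
        for line in f:
            name, value = line.strip().split(",")
            yield name, float(value)

def post_metrics(path):
    now_ms = int(time.time() * 1000)
    lines = [RESOURCE_HEADER]
    for name, value in read_flat_file(path):
        # One sample per line: metric name, timestamp, value (assumed layout).
        lines.append(f"{name},{now_ms},{value}")
    resp = requests.post(VCOPS_URL, data="\n".join(lines),
                         auth=("admin", "password"), verify=False)
    resp.raise_for_status()

if __name__ == "__main__":
    post_metrics("C:/perf/app_commit_times.csv")  # hypothetical flat-file path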

In this instance we posted into vCenter Operations Manager a combination of application-specific counters and Windows Management Instrumentation (WMI) counters from the Windows operating system platform the applications run on. This is shown in the following screenshot.

[Image: MFrancis2]

You can see the Resource Kind is something I called vbs_vcops_httpost, which is not a ‘standard’ monitored object in vCenter Operations Manager; the product created it based on the data stream I was pumping into it. I just needed to tell vCenter Operations Manager which metrics it should monitor from the data stream, which you can see in the following screenshot.

[Image: MFrancis3]

For each attribute (metric) we can configure whether hard thresholds are used and whether vCenter Operations Manager should use that metric as an indicator of normality. We refer to these ranges of normality as dynamic thresholds.

Once we have identified which metrics we want to mark, we can create spectrums of normality for them and have them affect the health of the application, which allows us to create visualizations. The screenshot below shows an example of a simple visualization. It shows the applications team a round-trip time metric plotted over time, alongside a standard Windows WMI performance counter for CPU.

[Image: MFrancis4]

By introducing the capability to monitor custom in-house developed applications using a combination of application-specific custom metrics and standard guest operating system and platform metrics, the DevOps team now has visibility into the health of the whole stack. This enables them to see the impact of code changes on the different layers of the stack, so they can compare the before and after from the perspective of the spectrum of normality for varying key metrics.

From a cultural perspective, this capability brought the applications development team and the infrastructure team onto the same page; both teams gain an appreciation of any performance issues through a common view.

In my team we have developed services that enable our customers to adopt and mature a performance and capacity management capability for the hybrid cloud, which, in my view, is one of the most challenging considerations for hybrid cloud adoption.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

Automating Security Policy Enforcement with NSX Service Composer

By Romain Decker

Over the past decade, IT organizations have gained significant benefits as a direct result of compute virtualization, which has permitted a reduction in physical complexity and an increase in operational efficiency. It has also allowed for dynamic re-purposing of underlying resources to quickly and optimally meet the needs of an increasingly dynamic business.

In dynamic cloud data centers, application workloads are provisioned, moved and decommissioned on demand. In legacy network operating models, network provisioning is slow and workload mobility is limited. While compute virtualization has become the new norm, network and security models have remained unchanged in data centers.

NSX is VMware’s solution to virtualize network and security for your software-defined data center. NSX network virtualization decouples the network from hardware and places it into a software abstraction layer, thus delivering for networking what VMware has already delivered for compute and storage.

Inside NSX, the Service Composer is a built-in tool that defines a new model for consuming network and security services; it allows you to provision and assign firewall policies and security services to applications in real time in a virtual infrastructure. Security policies are assigned to groups of virtual machines, and the policy is automatically applied to new virtual machines as they are added to the group.

[Image: RDecker 1]

From a practical point of view, NSX Service Composer is a configuration interface that gives administrators a consistent and centralized way to provision, apply and automate network security services like anti-virus/malware protection, IPS, DLP, firewall rules, etc. Those services can be available natively in NSX or enhanced by third-party solutions.

With NSX Service Composer, security services can be consumed more efficiently in the software-defined data center. Security can be easily organized by dissociating the assets you want to protect from the policies that define how you want to protect them.

[Image: RDecker 2]

Security Groups

A security group is a powerful construct that allows static or dynamic grouping based on inclusion and exclusion of objects such as virtual machines, vNICs, vSphere clusters, logical switches, and so on.

If a security group is static, the protected assets are a limited set of specific objects, whereas dynamic membership of a security group can be defined by one or multiple criteria, such as vCenter containers (data centers, port groups and clusters), security tags, Active Directory groups, regular expressions on virtual machine names, and so on. When all criteria are met, virtual machines are immediately and automatically moved into the security group.

In the example below, any virtual machine with a name containing “web”―AND running in “Capacity Cluster A”―will belong to this security group.
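The screenshot that follows shows this rule as defined in the UI. For those who prefer automation, the same group could also be created through the NSX REST API; the sketch below is a hedged Python example. The bulk-create endpoint is the documented one for NSX-v, but the XML element names, the dynamic-criteria keys, and the use of the cluster's managed object ID (rather than its display name) are assumptions to verify against the NSX API guide for your version; the hostname and credentials are placeholders.

import requests

NSX_MGR = "https://nsxmgr.example.com"  # hypothetical NSX Manager

BODY = """\
<securitygroup>
  <name>SG-Web-CapacityA</name>
  <dynamicMemberDefinition>
    <dynamicSet>
      <operator>AND</operator>
      <dynamicCriteria>
        <operator>AND</operator>
        <key>VM.NAME</key>
        <criteria>contains</criteria>
        <value>web</value>
      </dynamicCriteria>
      <dynamicCriteria>
        <operator>AND</operator>
        <key>ENTITY</key>
        <criteria>belongs_to</criteria>
        <value>domain-c7</value>  <!-- assumed managed object ID of "Capacity Cluster A" -->
      </dynamicCriteria>
    </dynamicSet>
  </dynamicMemberDefinition>
</securitygroup>
"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/services/securitygroup/bulk/globalroot-0",
    data=BODY, headers={"Content-Type": "application/xml"},
    auth=("admin", "password"), verify=False)
resp.raise_for_status()
print("Created security group:", resp.text)  # the API returns the new object ID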

[Image: RDecker 3]

Security group considerations:

  • Security groups can have multiple security policies assigned to them.
  • A virtual machine can live in multiple security groups at the same time.
  • Security groups can be nested inside other security groups.
  • You can include AND exclude objects from security groups.
  • Security group membership can change constantly.
  • If a virtual machine belongs to multiple security groups, the services applied to it depend on the precedence of the security policy mapped to the security groups.

Security Policies

A security policy is a collection of security services and/or firewall rules. It can contain the following:

  • Guest Introspection services (applies to virtual machines) – Data Security or third-party solution provider services such as anti-virus or vulnerability management services.
  • Distributed Firewall rules (applies to vNIC) – Rules that define the traffic to be allowed to/from/within the security group.
  • Network introspection services (applies to virtual machines) – Services that monitor your network such as IPS and network forensics.

Security services such as vulnerability management, IDS/IPS or next-generation firewalling can be inserted into the traffic flow and chained together.

Security policies are applied according to their respective weight: a security policy with a higher weight has a higher priority. By default, a new policy is assigned the highest weight so it is at the top of the table (but you can manually modify the default suggested weight to change the order).

Multiple security policies may be applied to a virtual machine because either (1) the security group that contains the virtual machine is associated with multiple policies, or (2) the virtual machine is part of multiple security groups associated with different policies. If there is a conflict between services grouped with each policy, the weight of the policies determines the services that will be applied to the virtual machine.

For example: If policy A blocks incoming HTTP and has a weight value of 1,000, while policy B allows incoming HTTP with a weight value of 2,000, incoming HTTP traffic will be allowed because policy B has a higher weight.
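As a conceptual sketch (plain Python, not an NSX API call), the precedence rule can be thought of like this:

# Conceptual sketch: resolve a service conflict between policies by weight,
# mirroring the policy A / policy B example above.
policies = [
    {"name": "Policy A", "weight": 1000, "http_inbound": "block"},
    {"name": "Policy B", "weight": 2000, "http_inbound": "allow"},
]

# The highest-weight policy wins when the same service is defined twice.
effective = max(policies, key=lambda p: p["weight"])
print(f"{effective['name']} applies: incoming HTTP is {effective['http_inbound']}ed")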

The mapping between security groups and security policies results in a running configuration that is immediately enforced. The relationships between all objects can be observed in the Service Composer Canvas.

[Image: RDecker 4]

Each block represents a security group with its associated security policies, Guest Introspection services, firewall rules, network introspection services, and the virtual machines belonging to the group or included security groups.

NSX Service Composer offers a way to automate the consumption of security services and their mapping to virtual machines using a logical policy, and it makes your life easier because you can rely on it to manage your firewall policies; security groups allow you to statically or dynamically include or exclude objects in a container, which can then be used as a source or destination in a firewall rule.

Firewall rules defined in security policies are automatically adapted (based on the association between security groups and policies) and integrated into NSX Distributed Firewall (or any third-party firewall). As virtual machines are automatically added and removed from security groups during their lifecycle, the corresponding firewall rules are enforced when needed. With this association, your imagination is your only limit!

In the screenshot below, firewall rules are applied via security policies to a three-tier application; since the security group membership is dynamic, there is no need to modify firewall rules when virtual machines are added to the application (in order to scale out, for example).

[Image: RDecker 5]

Provision, Apply, Automate

Service Composer is one of the most powerful features of NSX: it simplifies the application of security services to virtual machines within the software-defined data center, and allows administrators to have more control over―and visibility into―security.

Service Composer accomplishes this by providing a three-step workflow:

  • Provision the services to be applied:
    • Register the third-party service with NSX Manager (if you are not using the out-of-the-box security services).
    • Deploy the service by installing, if necessary, the components required for that service to operate on each ESXi host (“Networking & Security > Installation > Service Deployments” tab).
  • Apply and visualize the security services on defined containers by applying the security policies to security groups.
  • Automate the application of these services by defining rules and criteria that specify the circumstances under which each service will be applied to a given virtual machine.

The possibilities around NSX Service Composer are tremendous; you can create an almost infinite number of associations between security groups and security policies to efficiently automate how security services will be consumed in the software-defined data center.

You can, for example, combine Service Composer capabilities with VMware vRealize Automation Center to achieve secure, automated, on-demand micro-segmentation. Another example is a quarantine workflow where, after a virus detection, a virtual machine is automatically and immediately moved to a quarantine security group, whose security policies can take action such as remediation, strengthened firewall rules and traffic steering.


Romain Decker is a Technical Solutions Architect in the Professional Services Engineering team and is based in France.

Application Delivery Strategy: A Key Piece in the VDI Design Puzzle

By Michael Bradley and Hans Bader

Let’s face it: applications are the bane of a desktop administrator’s existence. It seems there is always something that makes the installation and management of an application difficult and challenging. Whether it’s a long list of confusing and conflicting requirements or a series of software and hardware incompatibilities, management of applications is one of the more difficult aspects of an administrator’s job.

It’s not surprising that application delivery and management is one of the key areas that often gets overlooked when planning and deploying a virtual desktop infrastructure (VDI) such as VMware’s Horizon View 6, and it is a common pitfall hindering many VDI implementations. A great deal of work and effort goes into ensuring that desktop images are optimized, the correct corporate security settings are applied to the operating system, the underlying architecture is built to scale appropriately, and end-user performance is acceptable. These are all important goals that require attention, but the application delivery strategy is frequently missed, forgotten, or even ignored.

Before we go further, let’s take a moment to define application delivery. A long time ago in a cube farm far, far away, application delivery was all about getting the applications installed on the desktop. But with the emergence of new technologies the definition has evolved. Software application delivery is no longer solely about the installation; it has taken on a broader meaning. In today’s end-user environment, application delivery is more about providing the end-user with access to the applications they need. In today’s modern enterprise, end-user access can come in many different forms. Some of the most common examples are:

  • Installing applications directly on the virtual desktop, either manually or by using software such as Microsoft SCCM.
  • Application virtualization using VMware ThinApp or Microsoft’s App-V.
  • Delivering the applications to the desktop using technologies such as VMware App Volumes or Liquidware Labs’ FlexApp.
  • Application presentation using RDS Hosted Applications in VMware Horizon 6.

All these examples are application delivery mechanisms. Each one can solve a different application deployment problem, and each can be used alone or in conjunction with a complementary one; for example, using App Volumes to deliver ThinApp packages.

An application delivery strategy should be an integral part of your VDI design; it is just as crucial as the physical infrastructure (storage, networking, processing) and the virtual infrastructure. It is perfectly fine to have a top-notch VDI, but if you can’t deliver new and existing applications to your end-users in a fast and efficient manner, you might be spinning your bits and bytes. Your end-users need applications delivered efficiently and quickly, or the VDI project becomes a bottleneck. The prime factor to remember about VDI is that it forces you to change the way you operate. Features such as VMware’s Linked Clone technology can change the application delivery paradigm that many desktop administrators have grown accustomed to in the physical PC world. Let’s face it: how effective is it to push and install applications to linked clone desktops every time a desktop refreshes or recomposes?

If an application delivery strategy is so important, why is it often missed or ignored? There are three primary reasons:

  • First, it is simply forgotten, or the VDI designers simply don’t realize they need to consider it as part of the design.
  • Second, application delivery is often considered too big of a challenge, and no one wants to tackle it when they’re already facing tight deadlines on a VDI project.
  • Third, and probably most commonly heard in enterprise environments, there is already a mechanism in place for application delivery to physical PCs, so it is assumed that what exists will suffice.

Once the need for an application delivery strategy is established, you need to determine what goes into one. First, consider all tiers of your applications: tier one, tier two, through tier n. Be sure to identify which are most common. Determine which applications need to be provided to all end-users versus which ones go to just a small subset. That will help determine what could be installed in the base image, as opposed to being delivered by some other mechanism. For instance, Microsoft Office may be an application that would be included in the base image for all users, but a limited-use accounting package may only be required by the accounting team, and therefore delivered another way.

Next, consider the delivery mechanism for your virtual desktops. Are they all full virtual machine desktops – or linked clone desktops? Determining which type you are using will play a major part in what your application delivery strategy looks like. If you are using all full virtual machine desktops―which deserves serious consideration―then you could effectively continue to use the existing application delivery strategy you use for physical PCs. But using linked clones could cause your existing application delivery strategy to become a bottleneck.

Then, you need to consider what technology will work best for you and your applications. Will application virtualization such as ThinApp be a suitable mechanism? Or is RDS Hosted Applications in Horizon 6 a more viable option for application delivery? You may even find the best option is a combination of technologies. Take time to evaluate the pros and cons of each option to ensure the needs of your end-users are met efficiently. One question you should ask is, “Do my end-users have the ability to install their own applications?” If the answer is “yes,” you need to either change corporate policy or select a technology that supports user-installed applications. Keep in mind that an application delivery strategy can vary for different types of users.

Finally, you should consider how to handle one-off situations. There will always be the one user, or a small group of users, who require a specialized application that falls outside the realm of your standard application delivery mechanisms. Such instances are rare but inevitable, and determining in advance how to address them will help you, as a desktop administrator, respond quickly to the needs of your end-users.

A good VDI implementation is only successful if the end-users can perform their assigned tasks. Nine times out of ten, that requires access to applications. Having a strategy in place to deliver the right applications to the right end-users is vital to the success of any VDI implementation.



Michael Bradley, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for almost 20 years. He holds VCP5-DCV, VCAP4-DCD, VCP4-DT, VCP5-DT, and VCAP-DTD certifications, and is an AirWatch Enterprise Mobility Associate.

Hans Bader is a Consulting Architect for VMware EUC. He has over 20 years of IT experience and joined VMware in 2009. With a focus on helping organizations be operationally ready, he works with customers to avoid common mistakes. He is a strong advocate for proactive load testing of environments before allowing users access. Hans has won numerous consulting awards within VMware.

vCloud Automation Center Disaster Recovery

By Gary Blake

Prior to the release of vCloud Automation Center (vCAC) v5.2, vCAC had no awareness or understanding of virtual machines protected by vCenter Site Recovery Manager. With the introduction of vCAC v5.2, VMware now provides enhanced integration so vCAC can correctly discover the relationship between the primary and recovery virtual machines.

These enhancements consist of what may be considered minor modifications, but they are fundamental enough to ensure vCenter Site Recovery Manager (SRM) can be successfully implemented to deliver disaster recovery of virtual machines managed by vCAC.

[Image: GBlake 1]

So What’s Changed?

When a virtual machine is protected by SRM a Managed Object Reference ID (or MoRefID) is created against the virtual machine record in the vCenter Server database.

Prior to SRM v5.5, a single virtual machine property called “ManagedBy:SRM,placeholderVM” was created on the placeholder virtual machine object in the recovery site vCenter Server database, but vCAC did not inspect this value, so it would attempt to add a second, duplicate entry into its own database. With the introduction of vCAC 5.2, when a data collection is run, vCAC now ignores virtual machines with this value set, thus avoiding the duplicate entry.

In addition, SRM v5.5 introduced a second managed-by property value, “ManagedBy:SRM,testVM,” which is placed on the virtual machine's vCenter Server database record. When a test recovery is performed and data collection is run at the recovery site, vCAC inspects this value and ignores virtual machines that have it set. This, too, avoids creating a duplicate entry in the vCAC database.
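If you want to see these properties for yourself, the vSphere API exposes them in the virtual machine's managedBy configuration. The rough pyVmomi sketch below lists VMs that carry an SRM managed-by key; the hostname and credentials are placeholders, and the 'vcDr' extension key is inferred from the 'com.vmware.vcDR' message quoted later in this post, so confirm both in your environment.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders; newer pyVmomi versions may also require an SSL context.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="password")
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], recursive=True)
    for vm in view.view:
        managed_by = vm.config.managedBy if vm.config else None
        if managed_by and "vcDr" in (managed_by.extensionKey or ""):
            # managed_by.type distinguishes placeholder VMs from test VMs.
            print(vm.name, managed_by.extensionKey, managed_by.type)
    view.Destroy()
finally:
    Disconnect(si)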

With the changes highlighted above, SRM v5.5 and later and vCAC 5.2 and later can now be implemented in tandem with full awareness of each other. However, one limitation remains when moving a virtual machine into recovery or re-protect mode: vCAC does not properly recognize the move. To successfully perform these machine operations and continue managing the machine lifecycle, you must use the Change Reservation operation, which is still a manual task.

Introducing the CloudClient

While investigating the enhancements between SRM and vCAC just described, and on uncovering the need for the manual change of reservation, I spent some time with our Cloud Solution Engineering team discussing ways to automate this step. They were already developing a tool called CloudClient, essentially a wrapper for our application programming interfaces that allows simple command line-driven steps to be performed, and they suggested it could be developed to support this use case.

Conclusion

In order to achieve fully functioning integration between vCloud Automation Center (5.2 or later) and vCenter Site Recovery Manager, adhere to the following design decisions:

  • Configure vCloud Automation Center with endpoints for both the protected and recovery sites.
  • Perform a manual or automated change of reservation following a vCenter Site Recovery Manager planned migration or disaster recovery.

[Image: GBlake2]

Frequently Asked Questions

Q. When I fail over my virtual machines from the protected site to the recovery site, what happens if I request the built-in vCAC machine operations?

A. Once you have performed a Planned Migration or a Disaster Recovery process, as long as you have changed the reservation within the vCAC Admin UI for the virtual machine, machine operations will be performed in the normal way on the recovered virtual machine.

Q. What happens if I do not perform the Change Reservation step on a virtual machine once I’ve completed a Planned Migration or Disaster Recovery process, and I then attempt to perform the built-in vCAC machine operations on the virtual machine?

A. Depending on which tasks you perform, some are blocked by vCAC, and you see an error message in the log such as “The method is disabled by ‘com.vmware.vcDR’”; other actions look like they are being processed, but nothing happens. A few actions are processed regardless of the virtual machine failure scenario; these are Change Lease and Expiration Reminder.

Q. What happens if I perform a re-provision action on a virtual machine that is currently in a Planned Migration or Disaster Recovery state?

A. vCAC will re-provision the virtual machine in the normal manner, and the hostname and IP address (if assigned through vCAC) will be maintained. However, the SRM recovery plan will then fail if you attempt to re-protect the virtual machine back to the protected site, because the original managed object has been replaced. For blueprints where SRM protection is a requirement, it is recommended that you disable the ‘Re-provision’ machine operation.


Gary Blake is a VMware Staff Solutions Architect & CTO Ambassador

Create a One-Click IT Command Center Operations Dashboard using vRealize Operations for Horizon

By Anand Vaneswaran

In my previous post, I examined creating a custom dashboard in vRealize Operations for Horizon that displayed current cluster capacity metrics in my virtual desktop infrastructure (VDI) environment, which helped provide insight into current utilization and performance. In this final post of the three-part blog series, I provide instructions for creating a one-click IT command center operations dashboard. Many enterprises centralize their IT command center operations in an effort to coordinate multiple technology focus areas, such as network, storage, Microsoft Exchange, etc., and bring them together under one roof. The idea is to be able to see, respond to, and resolve incidents that cause production environment outages with wide-ranging implications; it is also to increase efficiency, speed up response times, and create a centralized view of the overall environment. In a production VDI environment, the onus then falls on the command center to incorporate VDI as a technology focus area. In this blog I’ll explain how to create a one-click dashboard that focuses on certain key stats central to the VMware View environment and helps command center personnel in times of outages.

As I have stated in previous posts, these are examples that can either be replicated in their entirety, or be used as a jumping-off point in an effort to construct a custom dashboard with stats that are most germane to your environment and command center personnel.

Additionally, as I have done in previous posts, I’m going to rely on a combination of “heat map” and “generic scoreboard” widgets. I’m also going to introduce a widget type known as “resources” in this dashboard. In total there should be nine widgets:

  • Four generic scoreboard widgets
  • Three heat map widgets
  • Two resources widgets

The final output should look like this:

[Image: AVaneswaran 1]

I then want to configure my widgets so the following details are presented:

Heat maps

  • The overall health of my ESXi hosts running full clone VDI workloads
  • The overall health of my ESXi hosts running linked clone VDI workloads
  • The overall health of my Horizon View infrastructure servers. These servers include my Connection Servers, Security Servers, vCenter Server, View Composer server, etc.
  • The number of available virtual machines in the linked clone pools. This is an important capacity stat for the environment, because it shows how close each linked clone pool is to its maximum number of desktops.

Generic scoreboard widgets

  • I want to check to see if my Connection Servers are enabled and accepting incoming connections. This particular stat will take on added significance in View environments running a 5+2 Connection Servers replicated ring.
  • The number of concurrent external connections currently accessing the environment.
  • And finally, a multi-purpose widget to provide the following data:
    • The total number of connected concurrent sessions.
    • The total number of overall virtual machines in the infrastructure.
    • The average bandwidth utilization per session. Horizon View desktops can experience anywhere from 150–350 Kbps for task and knowledge workers utilizing apps such as browsers, Microsoft Office, and basic productivity apps. This figure increases with high graphics, printing, peripheral device, and audio and video usage. However, if I’m confident the environment is running a fairly uniform set of workloads, this stat is useful for monitoring the entire environment. If the pools are built based on varying use cases with different workload profiles, it might be a good idea to generate this stat on a per-pool or per-use-case basis.
    • Outgoing and incoming packet loss on the network vLAN segment running my VDI workloads.
    • Total network utilization on the vLAN segment running VDI workloads.
    • Total bandwidth utilization on the network vLAN segment running VDI workloads.

Resources widgets

  • This widget calculates the average PCoIP round-trip latency in milliseconds. This is one of the most important and oft-monitored stats in a Horizon View environment with centralized infrastructure that also serves end-users accessing the environment from distant locations across the WAN. I want to achieve this result on a per-pool basis, and am only concerned about external connections coming in from WAN locations. In addition, I’m particularly interested in low-bandwidth, high-latency remote sites, but not so much with internal users connected to the corporate network with regulated and guaranteed bandwidth. Finally, I want to configure this first widget for my automated full clone pools.
  • Next, we replicate the aforementioned widget, only this time for automated, linked clone pools.

The widgets can be arranged in the dashboard however you choose.

To start, I’m going to configure a full clone widget to display the health of my ESXi hosts running full clone desktops, and I want to place the heat map widget on the far left side of a three-column dashboard.

[Image: AVaneswaran 2]

The key here is to filter by the hosts running full clone desktop workloads. This is achievable with a custom resource tag I’ve created in my environment. I’ve demonstrated the technique to create such a resource tag in the first of this three-part blog series. The configured widget should look like this:

[Image: AVaneswaran 3]

Repeat this procedure for another widget for linked clone desktop pools, and filter by the hosts running linked clone workloads.

[Image: AVaneswaran 4]

The configured widget will look like this:

[Image: AVaneswaran 5]

Next, I want to configure the following widget to display the health of my View infrastructure servers, and I want to place it between the first two widgets along the top of the dashboard. It is important to place the infrastructure server resources in custom resource tags so the widget can filter by those tags.

[Image: AVaneswaran 6]

Here is the configured widget:

[Image: AVaneswaran 7]

Next, a generic scoreboard widget is placed underneath the heat map widget we just configured. This widget will display the number of enabled connection servers that accept incoming connections.

[Image: AVaneswaran 8]

When complete it will look like this:

[Image: AVaneswaran 9]

The next step is a generic scoreboard widget that displays just the total number of tunneled sessions through the View Security Server.

[Image: AVaneswaran 10]

And here is the end result:

[Image: AVaneswaran 11]

We now want a heat map that displays the number of available virtual machines in the automated linked clone pools. In order to ensure production pools are more or less consistent during peak times, we need a heat map that shows the maximum number of desktops and total sessions.

[Image: AVaneswaran 12]

Once again, the trick is to filter by a resource tag for your automated linked clone pools; the heat map will look like this:

[Image: AVaneswaran 13]

Next I want to work on a generic scoreboard that gives me the following details:

  • Total number of current concurrent sessions
  • Total number of overall virtual machines in my environment
  • Workload percentage on the DHCP vLAN serving all VDI desktop IPs
  • A super metric that calculates the average bandwidth utilization per session in Kbps
  • Outbound and inbound DHCP vLAN packet errors

The widget should be configured like this:

[Image: AVaneswaran 14]

Super metrics are required to calculate the average bandwidth utilization per session and the total DHCP vLAN bandwidth utilization. Here is the super metric for calculating the average bandwidth utilization per session:

[Image: AVaneswaran 15]
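Conceptually, this first super metric is simply total vLAN bandwidth divided by the number of concurrent sessions. The short Python sketch below illustrates the arithmetic only (it is not super metric editor syntax, and the example numbers are assumed):

def avg_bandwidth_per_session(total_vlan_kbps, concurrent_sessions):
    """Average bandwidth per session, in Kbps."""
    return total_vlan_kbps / concurrent_sessions if concurrent_sessions else 0.0

# Example: 120,000 Kbps across the VDI vLAN with 500 connected sessions
# gives 240 Kbps per session, within the 150-350 Kbps range cited earlier
# for task and knowledge workers.
print(avg_bandwidth_per_session(120_000, 500))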

We also need a super metric to calculate desktop DHCP vLAN total bandwidth utilization.

[Image: AVaneswaran 16]

Finally, configure the two resources widgets. The first widget goes on the bottom left of the dashboard, and is configured as follows:

[Image: AVaneswaran 17]

The end result will appear like this:

[Image: AVaneswaran 18]

Make sure to filter by the custom resource tag containing only full clone pools. Replicate this process, step-by-step, on the bottom right-hand side of the dashboard, but this time for linked clone pools.

And here is the final dashboard!

[Image: AVaneswaran 19]

In conclusion, here are a few takeaways from this blog:

  • IT command centers are sometimes challenged with knowing exactly who should be assigned an issue, and they can typically use more visibility than they have. By providing this custom dashboard through AD Security Group dashboard sharing, you can give your command center personnel exactly the kind of visibility into the environment they need to aid their decision-making process.
  • The examples provided in this three-part blog series show the extent of what you can achieve in vRealize Operations for Horizon in a time-efficient manner. Fundamentally, it’s a matter of knowing what data you want to display; if done correctly―as demonstrated in these blogs―data manipulation becomes extremely easy.

Now, I’ve barely scratched the surface of VMware vRealize Operations Manager’s capabilities in these blog posts; there is so much more that has not yet been discussed. I wanted to focus on a set of custom dashboards, each designed to achieve a very specific purpose. The methods detailed in these blog posts demonstrate only one approach, but there are others. They show just some of the ways vRealize Operations Manager can be explored, data can be mined, and a view into the environment can be gained.


Anand Vaneswaran is a Senior Technology Consultant with the End User Computing group at VMware. He is an expert in VMware Horizon (with View), VMware ThinApp, VMware vCenter Operations Manager, VMware vCenter Operations Manager for Horizon, and VMware Horizon Workspace. Outside of technology, his hobbies include filmmaking, sports, and traveling.

The Complexity of Data Center Blueprinting

By Gabor Karakas

Data centers are wildly complicated in nature and grow in an organic fashion, which fundamentally means that very few people in the organization understand the IT landscape in its entirety. Part of the problem is that these complex ecosystems are built up over long periods of time (5–10 years) with very little documentation or global oversight; therefore, siloed IT teams have the freedom to operate according to different standards – if there are any. Oftentimes new contractors or external providers replace these IT teams, and knowledge transfer rarely happens, so the new workforce might not understand every aspect of the technology they are tasked to operate, and this creates key issues as well.

[Image: GKarakas 1]

Migration or consolidation activities can be initiated for a number of reasons:

  • Reduction of the complexity in infrastructure by consolidating multiple data centers into a single larger one.
  • The organization simply outgrew the IT infrastructure, and moving the current hardware into a larger data center makes more sense from a business or technological perspective.
  • Contract renegotiations fail and significant cost reductions can result from moving to another provider.
  • The business requires higher resiliency; by moving some of the hardware to a new data center and creating fail-proof links in between the workloads, disasters can be avoided and service uptime can be significantly improved.

When the decision is made to move or consolidate the data center for business or technical reasons, a project is often kicked off with very little insight into the moving parts that will be changed. Most organizations realize this a couple of months into the project, and usually find the best way forward is to ask for external help. This help usually comes from the joint efforts of multiple software and consultancy firms to deliver a migration plan that identifies and prioritizes workloads and creates a blueprint of all their vital internal and external dependencies.

A migration plan is meant to contain at least the following details for identified and prioritized groups of physical or virtual workloads (a sketch of one such workload record follows the list):

  • The applications they contain or serve
  • Core dependencies (such as NTP, DNS, LDAP, Anti-virus, etc.)
  • Capacity and usage trends
  • Contact details for responsible staff members

  • Any special requirements that can be obtained either via discovery or by interviewing the right people
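As a purely illustrative sketch (the field names are hypothetical and not taken from any particular discovery tool), a single workload's record in such a plan might be captured like this:

workload = {
    "hostname": "db-prod-07",
    "type": "virtual",                      # physical or virtual
    "applications": ["billing-db"],         # applications it contains or serves
    "core_dependencies": ["NTP", "DNS", "LDAP", "Anti-virus"],
    "capacity_trend": {"cpu_peak_pct": 72, "ram_peak_pct": 81},
    "owner": "jane.doe@example.com",        # responsible staff member
    "special_requirements": "PCI zone; move window Sat 02:00-06:00",
    "migration_group": 3,                   # prioritized move group
}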

[Image: GKarakas 2]

In reality, creating such a plan is very challenging, and there can be many pitfalls. The following are common problems that can surface during development of a migration plan:

Technical Problems

It is vital that communication is strong between all involved, that technical details are not overlooked, and that all information sources are identified correctly. Issues can develop around:

  • Choosing the right tool (VMware Application Dependency Planner, as an example)
  • Finding the right team to implement and monitor the solution
  • Reporting on the right information, which can prove difficult

Technical and human information sources are equally important, as automated discovery methods can only identify certain patterns; people need to put the extra intelligence behind this information. It is also important to note that a discovery process can take months, during which time the discovery infrastructure needs to function at its best, without interruption to data flows or appliances.

Miscommunication

As previously stated, team communication is vital. There is a constant need to:

  • Verify discovery data and tweak technical parameters
  • Involve the application team in frequent validation exercises

It is important to accurately identify and document deliverables before starting a project, as misalignment with these goals can cause delays or failures further down the timeline.

Politics

With major changes in the IT landscape, there are also Human Resources-related matters to handle. Depending on the nature of the project, there are potential issues:

  • The organization’s move to another data center might abandon previous suppliers
  • IT staff might be left without a job

  • It can be part of an outsourcing project that moves certain operations or IT support outside the organization

[Image: GKarakas 3]

Some of these people will need to help in the execution of the project, so it is crucial to treat them with respect and to make sure sensitive information is closely guarded. The blueprinting team members will probably know what the outcome of the project will mean for suppliers and the customer’s IT team; if some of this information is released, the project can be compromised, with valuable information and time lost.

Blueprint Example

When delivering a migration blueprint, each customer will have different demands, but in most cases the basic request will be the same: provide a set of documents that contain all servers and applications, and show how they depend on each other. Most of the time, customers will also ask for visual maps of these connections, and it is the consultant’s job to make sure these demands are reasonable. There is only so much that can be visualized in an understandable map, so it is best to limit the number of servers and connections to about 10–20 per map. The following complex image is an example of just a single server with multiple running services discovered.

Figure 1. A server and its services visualized in VMware’s ADP discovery tool

Beyond putting individual applications and servers on an automated map, there can also be demand for visualizing application-to-application connectivity, and this will likely involve manipulating data correctly. Some dependencies can be visualized, but others might require a text-based presentation.

The following is an example of a fictional setup, where multiple applications talk to each other―just like in the real world. Both visual and text-based representations are possible, and it is easy to see that for overview and presentation purposes, a visual map is more suitable. However, when planning the actual migration, the text-based method might prove more useful.


Figure 2. Application dependency map: visual representation


Figure 3. Application dependency map: raw discovery data


Figure 4. Application dependency map: raw data in pivot table
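To make the raw-data-to-pivot step concrete, here is a small illustrative Python sketch, with fictional data, that collapses discovered flows like the raw data in Figure 3 into the application-to-application summary of Figure 4:

from collections import Counter

raw_connections = [  # (source app, destination app, destination port)
    ("web-frontend", "billing-db", 1433),
    ("web-frontend", "billing-db", 1433),
    ("web-frontend", "auth-svc", 636),
    ("batch-jobs", "billing-db", 1433),
]

# Count identical flows to produce the pivot-table view.
pivot = Counter(raw_connections)
for (src, dst, port), count in sorted(pivot.items()):
    print(f"{src:14} -> {dst:12} :{port}  ({count} flows observed)")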

It is easy to see that a blueprinting project can be a very challenging exercise with multiple caveats and pitfalls, so careful planning and execution are required, with strong communication between everyone involved.

This is the first in a series of articles that will give detailed overviews of implementation and reporting methods for data center blueprinting.


Gabor Karakas is a Technical Solutions Architect in the Professional Services Engineering team and is based in the San Francisco Bay Area.

SDDC is the Future

By Michael Francis

VMware’s Transformative Growth

Over the last eight years at VMware I have observed so much change, and in my mind it has been transformative. I think about my 20 years in IT and the changes I have seen, and I believe the emergence of virtualization of x86 hardware will be looked upon as one of the most important catalysts for change in information technology history. It has changed the speed of service delivery and the cost of that delivery, and subsequently it has enabled innovative business models for computing, such as cloud computing.

I have been part of the transformation of our company over these eight years; we’ve grown from being a single-product infrastructure company to what we are today: an application platform company. Virtualization of compute is now mainstream. We have broadened virtualization to storage and networking, bringing the benefits realized for compute to these new areas. I don’t believe this is incremental value or evolutionary. I think this broader virtualization, coupled with intelligent, business policy-aware management systems, will be so disruptive to the industry that it will potentially be considered a separate milestone, on par with x86 virtualization.

Where We Are Now

Here is why I think the SDDC is significant:

  • The software-defined data center (SDDC) brings balance back to the ongoing discussion between the use of public and private computing.
  • It enables the attributes of agility, reduced operational and capital costs, lower security risk, and new full-stack management visibility.
  • SDDC not only modifies the operational and consumption model for computing infrastructure, but it also modifies the way computing infrastructure is designed and built.
  • Infrastructure is now a combination of software and configuration. It can be programmatically generated based on a specification; hyper-converged infrastructure is one example of this.

As a principal architect in the VMware team responsible for generating tools and intellectual property that help our Professional Services organization and partners deliver VMware SDDC solutions, I find the last point especially interesting, and it is the one I want to spend some time on.

How We Started

As an infrastructure-focused project resource and lead over the past two decades, I have become very familiar with developing design documents and ‘as-built’ documentation. I remember rolling out Microsoft Windows NT 4.0 in 1996 from CDs. There was a guide that showed me what to click and in what order to do certain steps. There was a lot of manual effort, opportunity for human error, inconsistency between builds, and a lot of potential for the built item to vary significantly from the design specification.

Later, in 2000, I was a technical lead for a systems integrator; we had standard design document templates and ‘as-built’ document templates, and consistency and standardization had become very important. A few of us worked heavily with VBScript, and we started scripting the creation of Active Directory configurations such as Sites and Services definitions, OU structures and the like. We dreamed of the day when we could draw a design diagram, click ‘build’, and have scripts build what was in the specification. But we couldn’t get there. The amount of work to develop the scripts, maintain them, and modify them as elements changed was too great. And that was when we were focused only on the operating stack and a single vendor’s back office suite; imagine trying to automate a heterogeneous infrastructure platform.

It’s All About Automated Design

Today we have the ability to leverage the SDDC as an application programming interface (API) that not only abstracts the hardware elements below and can automate the application stack above, but can also abstract the APIs of ecosystem partners.

This means I can write to one API to instantiate a system of elements from many vendors at all different layers of the stack, all based on a design specification.

Our dream in the year 2000 is something customers can achieve in their data centers with SDDC today. To be clear – I am not referring to just configuring the services offered by the SDDC to support an application, but also to standing up the SDDC itself. The reality is, we can now have a hyper-converged deployment experience where the playbook of the deployment is driven by a consultant-developed design specification.

For instance, our partners and our Professional Services organization have access to what we refer to as the SDDC Deployment Tool, or SDT for short (an imaginative name, I know). This tool can automate the deployment and configuration of all the components that make up the software-defined data center, as the following screenshot illustrates:

[Image: MFrancis1]

Today this tool deploys the SDDC elements in a single use case configuration.

In VMware’s Professional Services Engineering group we have created a design specification for an SDDC platform. It is modular and completely instantiated in software. Our Professional Services Consultants and Partners can use this intellectual property to design and build the SDDC.

What Comes Next?

I believe our next step is to architect our solution design artifacts so the SDDC itself can be described in a format that allows software―like SDT―to automatically provision and configure the hardware platform, the SDDC software fabric, and the services of the SDDC to the point where it is ready for consumption.

A consultant could design the specification of the SDDC infrastructure layer and have that design deployed in a similar way to hyper-converged infrastructure―but allowing the customer to choose the hardware platform.

As I mentioned at the beginning, the SDDC is not just about technology, consumption and operations: it provides the basis for a transformation in delivery. To me, a good analogy right now is the 3D printer. The SDDC itself is like the plastic that can be molded into anything; the 3D printer is the SDDC deployment tool, and our service kits represent the electronic blueprint the printer reads to build up the layers of the SDDC solution for delivery.

This will create better and more predictable outcomes and also greater efficiency in delivering the SDDC solutions to our customers as we treat our design artifacts as part of the SDDC code.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

App Volumes AppStacks vs. Writable Volumes

By Dale Carter, Senior Solutions Architect, End-User Computing

With the release of VMware App Volumes I wanted to take the time to explain the difference between AppStacks and Writable Volumes, and how the two need to be designed as you start to deploy App Volumes.

The graphic below shows the traditional way to manage your Windows desktop, as well as the way things have changed with App Volumes and the introduction of “Just-in-time” apps.

[Graphic: traditional Windows desktop management versus App Volumes “just-in-time” apps]

So what are the differences between AppStacks and Writable Volumes?

AppStacks

An AppStack is a virtual disk that contains one or more applications that can be assigned to a user as a read-only disk. A user can have one or many AppStacks assigned to them depending on how the IT administrator manages the applications.

When designing for AppStacks it should be noted that an AppStack is deployed in a one-to-many configuration. This means that at any one time an AppStack could be connected to one or hundreds of users.

[Diagram: a single AppStack connected to many users]

When designing storage for an AppStack, it should also be noted that App Volumes does not change the IOPS required by an application, but it does consolidate those IOPS onto a single virtual disk. So, like any other virtual desktop technology, it is critical to know your applications and their requirements; it is recommended to do an application assessment before moving to a large-scale deployment. Lakeside Software and Liquidware Labs both publish software for doing application assessments.

For example, if you know that the applications being moved to an AppStack use 10 IOPS on average, and that the AppStack has 100 users connected to it, you will require an average of 1,000 IOPS (IOPS per user x number of users) to support that AppStack. You can see why it is key to design your storage correctly for AppStacks.
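
That arithmetic is simple enough to script. A minimal sketch, using the same figures as the example above (the numbers are illustrative, not sizing guidance):

    # Rough AppStack sizing: aggregate IOPS scales linearly with user count.
    def appstack_iops(avg_iops_per_user, user_count):
        """Average IOPS the AppStack's backing storage must sustain."""
        return avg_iops_per_user * user_count

    # The example from the text: 10 IOPS per user, 100 connected users.
    print(appstack_iops(10, 100))  # 1000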

In large-scale deployments it is recommended to create copies of AppStacks, place them across storage LUNs, and assign a subset of users to each copy for best performance.

[Diagram: copies of an AppStack placed across storage LUNs]

Writable Volumes

Like an AppStack, a Writable Volume is a virtual disk; unlike AppStacks, however, a Writable Volume is used in a one-to-one configuration: each user has their own assigned Writable Volume.

[Diagram: each user with their own assigned Writable Volume]

When an IT administrator assigns a Writable Volume to a user, the first decision is what type of data the user will be able to store in the Writable Volume. There are three choices:

  • User Profile Data Only
  • User Installed Applications Only
  • Both Profile Data and User Installed Applications

It should be noted that App Volumes is not a profile management tool, but it can be used alongside any user-environment management tool already in place.

When designing for Writable Volumes, the storage requirements will differ from those for AppStacks. Where an AppStack requires only read I/O, a Writable Volume requires both read and write I/O. The IOPS for a Writable Volume will also vary from user to user, depending on how each individual uses their data and on the type of data the IT administrator allows them to store in their Writable Volume.

IT administrators should monitor their users and how they access their Writable Volumes; this will help them determine how many Writable Volumes can be configured on a single storage LUN.
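
One way to use that monitoring data is to estimate how many Writable Volumes a LUN can host. A hedged sketch, assuming you have measured average read and write IOPS per user and know the LUN's IOPS budget (all figures below are placeholders):

    # Estimate Writable Volumes per LUN from measured per-user I/O.
    # Unlike an AppStack, a Writable Volume generates read AND write I/O.
    def volumes_per_lun(lun_iops_budget, avg_read_iops, avg_write_iops):
        per_user_iops = avg_read_iops + avg_write_iops
        return int(lun_iops_budget // per_user_iops)

    # Placeholder figures: a LUN rated for 5,000 IOPS and users averaging
    # 8 read and 12 write IOPS against their Writable Volume.
    print(volumes_per_lun(5000, 8, 12))  # 250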

Hopefully this blog helps describe the differences between AppStacks and Writable Volumes, and the design considerations that should be taken into account for each.

I would like to thank Stephane Asselin for his input on this blog.


Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

App Volumes AppStack Creation

By Dale Carter, Senior Solutions Architect, End-User Computing

VMware App Volumes provides just-in-time application delivery to virtualized desktop environments. With this real-time application delivery system, applications are delivered to virtual desktops through VMDK virtual disks, without modifying the VM or the applications themselves. Applications can be scaled out with superior performance, at lower costs, and without compromising the end-user experience.

In this blog post I will show you how easy it is to create a VMware App Volumes AppStack, and how that AppStack can then be easily deployed to hundreds of users.

When configuring App Volumes with VMware Horizon View, an App Volumes AppStack is a read-only VMDK file that is attached to a user’s virtual machine; the App Volumes Agent then merges the two (or more) VMDK files so that the Microsoft Windows operating system sees them as a single drive. The applications therefore look to the Windows OS as if they are natively installed, not sitting on a separate disk.

To create an App Volumes AppStack follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
  3. Click Create AppStack.
  4. Give the AppStack a name, choose the storage location, and optionally add a description. Then click Create.
  5. Choose either Perform in the background or Wait for completion, and click Create.
  6. vCenter will now create a new VMDK for the AppStack to use.
  7. Once vCenter finishes creating the VMDK, the AppStack will show up as Un-provisioned. Click the + sign.
  8. Click Provision.
  9. Search for the desktop that will be used to install the software. Select the desktop and click Provision.
  10. Click Start Provisioning.
  11. vCenter will now attach the VMDK to the desktop.
  12. Open the desktop that will be used for provisioning the new software. A provisioning-mode message will appear. DO NOT click OK yet; you will click OK only after the software has been installed.
  13. Install the software on the desktop. This can be a single application or several. If reboots are required between installs, that is fine; App Volumes will remember where you are after each one.
  14. Once all of the software has been installed, click OK.
  15. Click Yes to confirm and reboot.
  16. Click OK.
  17. The desktop will now reboot. After the reboot, log back in to the desktop.
  18. After you log in, click OK. This will reconfigure the VMDK on the desktop.
  19. You can now connect to the App Volumes Manager Web interface and see that the AppStack is ready to be assigned.

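For readers who prefer to script such workflows, here is a hedged sketch of what driving the same provisioning flow programmatically could look like. The endpoint paths, payload fields, and credentials below are hypothetical placeholders of mine, not a documented App Volumes Manager API; check the product documentation before automating anything.

    import requests

    # Hypothetical sketch only: endpoints and fields are invented placeholders,
    # not the documented App Volumes Manager API.
    manager = "https://appvolumes.example.com"
    session = requests.Session()

    # Log in to the App Volumes Manager (hypothetical endpoint).
    session.post(f"{manager}/login",
                 json={"username": "admin", "password": "secret"})

    # Step 4 equivalent: create the AppStack shell on a chosen datastore.
    resp = session.post(f"{manager}/appstacks",
                        json={"name": "Office-Apps",
                              "datastore": "datastore1",
                              "description": "Core productivity apps"})
    appstack_id = resp.json()["id"]  # hypothetical response field

    # Steps 8-10 equivalent: attach the AppStack to a provisioning desktop.
    session.post(f"{manager}/appstacks/{appstack_id}/provision",
                 json={"desktop": "PROV-DESKTOP-01"})
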
Once you have created the AppStack you can assign it to an Active Directory object: a user, a computer, an OU, or a user group.

To assign an AppStack to a user, computer or user group, follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
  3. Click the + sign by the AppStack you want to assign.
  4. Click Assign.
  5. Search for the Active Directory object. Select the user, computer, OU, or user group to assign the AppStack to, and click Assign.
  6. Choose whether to assign the AppStack at the next login or immediately, and click Assign.
  7. The users will now have the AppStack assigned to them and will be able to launch the applications as they would any natively installed application.

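The assignment step could be scripted in the same hedged style as the provisioning sketch earlier; once again, the endpoint, fields, and identifiers are invented placeholders rather than a documented API.

    import requests

    # Hypothetical continuation of the earlier sketch: endpoints and fields
    # are invented placeholders, not the documented App Volumes Manager API.
    manager = "https://appvolumes.example.com"
    session = requests.Session()
    session.post(f"{manager}/login",
                 json={"username": "admin", "password": "secret"})

    # Assign an existing AppStack to an Active Directory group, attaching
    # at the users' next login rather than immediately.
    session.post(f"{manager}/appstacks/42/assign",   # 42: placeholder ID
                 json={"entity": "example.com\\Finance-Users",
                       "attach_at": "next_login"})
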
By following these simple steps you will be able to quickly create an AppStack and deploy it to your users.


Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.