
Tag Archives: vCenter

Virtualization and VMware Virtual SAN … the Old Married Couple

Don’t Mistake These Hyper-Converged Infrastructure Technologies as Mutually Exclusive

By Jonathan McDonald

I have not posted many blogs recently as I’ve been in South Africa. I have, however, been hard at work on the latest releases of VMware vSphere 6.0 Update 2 and VMware Virtual SAN 6.2. Some amazing features are included that will make life a lot easier and add some exciting new functionality to your hyper-converged infrastructure. I will not get into these features in this post, because I want to talk about one of the bigger non-technical questions that I get from customers and consultants alike. It is not one that is directly tied to the technology or architecture of the products. It is the idea that you can go into an environment and just do Virtual SAN, which from my experience is not true. I would love to know if your thoughts and experiences have shown you the same thing.

For those of you who are unfamiliar with Virtual SAN, I am not going to go into great depth about the technology here. The key is that, as a platform, it is hyper-converged, meaning it is included with the ESXi hypervisor. This makes it radically simple to actually configure—and, more importantly, use—once it is up and running.

My hypothesis is that 80 to 90% of what you have to do to design for Virtual SAN focuses on the virtualization design, and not so much on Virtual SAN itself. This is not to say the Virtual SAN design is not important, but virtualization has to be integral to the design when you are building for it. To prove this, take a look at the standard tasks when creating the design for the environment:

  1. Hardware selection, racking, configuration of the physical hosts
  2. Selection and configuration of the physical network
  3. Software installation of the VMware ESXi hosts and VMware vCenter server
  4. Configuration of the ESXi hosts
    • Networking (for management traffic and for VMware vSphere vMotion, at a minimum)
    • Disks
    • Features (VMware vSphere High Availability, VMware vSphere Distributed Resource Scheduler, VMware vSphere vMotion, at a minimum)
  5. Validation and testing of the configuration

If I add the Virtual SAN-specific tasks in, you have a holistic view of what is required in most greenfield configurations:

  1. Configuration of the Virtual SAN network
  2. Turning on Virtual SAN
  3. Creating new policies (optional, as the default is in place once configured)
  4. Testing Virtual SAN

As you can see, my first point shows that the majority of the work is actually virtualization and not Virtual SAN. In fact, as I write this, I am even more convinced of my hypothesis. The first three tasks alone are really the heavy hitters for time spent. As a consultant or architect, you need to focus on these tasks more than anything. Notice above where I mention “configure” in regard to Virtual SAN, and not installation; this is because it is already a hyper-converged element installed with ESXi. Once you get the environment up and running with ESXi hosts installed, Virtual SAN needs no further installation, simply configuration. You turn it on with a simple wizard, and, as long as you have focused on the supportability of the hardware and the underlying design, you will be up and running quickly. Virtual SAN is that easy.
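For those who prefer automation to the wizard, the same step can be scripted. The following is a minimal pyVmomi sketch of what “turning on Virtual SAN” amounts to programmatically; the vCenter address, credentials, and cluster name are placeholders, and the snippet assumes pyVmomi is installed and that automatic disk claiming fits your design.

```python
# Minimal sketch: enable Virtual SAN on an existing cluster with pyVmomi.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()   # lab shortcut; use a trusted certificate in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the target cluster by name (the name is an assumption for this sketch).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "VSAN-Cluster")
view.DestroyView()

# Enable Virtual SAN with automatic disk claiming through a cluster reconfigure task.
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(autoClaimStorage=True)))
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
# Wait for the task to finish, then run the Virtual SAN health checks to validate.

Disconnect(si)
```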

Many of the arguments I get are interesting as well. Some of my favorites include:

  • “The customer has already selected hardware.”
  • “I don’t care about hardware.”
  • “Let’s just assume that the hardware is there.”
  • “They will be using existing hardware.”

My response is always that you should care a great deal about the hardware. In fact, this is by far the most important part of a Virtual SAN engagement. With Virtual SAN, if the hardware is not on the VMware compatibility list, then it is not supported. By not caring about hardware, you risk data loss and the loss of all VMware support.

If the hardware is already chosen, you should ensure that the hardware being proposed, added, or assumed to be in place is actually supported. Get the bill of materials or the quote, and go over it line by line if that is what it takes to confirm that every component is supported.
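To make that line-by-line review concrete, here is a purely illustrative Python sketch that compares a bill of materials against a list of supported components exported from the compatibility guide. The file names, column name, and match key are assumptions, not a real VMware API; in practice you would check controllers, firmware, and drive models against the Virtual SAN section of the VMware Compatibility Guide.

```python
# Illustrative only: flag BOM entries that do not appear in a compatibility export.
import csv

def load_components(path, key="model"):
    """Load one column from a CSV file into a normalized set (hypothetical format)."""
    with open(path, newline="") as f:
        return {row[key].strip().lower() for row in csv.DictReader(f)}

supported = load_components("vcg_export.csv")      # hypothetical compatibility-guide export
bom = load_components("customer_bom.csv")          # hypothetical customer quote/BOM

for item in sorted(bom - supported):
    print(f"NOT on the compatibility list: {item}")
```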

Although the hardware selection is slightly stricter than with an average design, the engagement is otherwise much the same as any traditional virtualization engagement. Virtual SAN Ready Nodes are a great approach and make this much quicker and simpler, as they offer a variety of pre-configured hardware options that meet the needs of Virtual SAN. Along with the Virtual SAN TCO Calculator, they make the painful process of hardware selection a lot easier.

Another argument I hear is “If I am just doing Virtual SAN, that is not enough time.” Yes, it is. It really, really is. I have been a part of multiple engagements for which the first five tasks above are already completely done. All we have to do is come in and turn on Virtual SAN. In Virtual SAN 6.2, this is made really easy with the new wizard:

JMcDonald_Configure VSAN

Even with the inevitable network issues (not lying here; every single time there is a problem with networking), environmental validation, performance testing, failure testing, and testing of virtual machine creation workflows, I have never seen this piece take more than a week for a single cluster, regardless of the size of the configuration. In many cases, after three days everything is up and running and it is purely customer validation that is taking place. As a consultant or architect, don’t be afraid of the questions customers ask in regard to performance and failures. Virtual SAN provides mechanisms to easily test the environment as well as to see what “normal” looks like.

Here are two other arguments I hear frequently:

  • “We have never done this before.”
  • “We don’t have the skillset.”

These claims are probably not 100% accurate. If you have used VMware products, or you are a VMware administrator, you are probably already aware of the majority of what you have to do here. For Virtual SAN specifically, this is where the knowledge needs to be grown. I suggest training, or a review of VMworld presentations on Virtual SAN, to get familiar with this piece of technology and its related terminology. VMware offers training that will get you up to speed on hyper-converged infrastructure technologies and the new features of VMware vSphere 6.0 Update 2 and Virtual SAN 6.2.

For more information about free learning resources, check out the courses below:

In addition, most of the best practices you will see are not unfamiliar, since they are vCenter- or ESXi-related. Virtual SAN Health gives an amazing overview that is frequently refreshed, so any issues you may be seeing are reported there. This also takes a lot of the guesswork out of the configuration tasks; as you can see from the screenshot below, many, if not all, of the common misconfigurations are surfaced.

JMcDonald_VSAN Health

In any case, I hope I have made the argument that Virtual SAN is mostly a virtualization design that just doesn’t use traditional SANs for storage.  Hyper-converged infrastructure is truly bringing change to many customers. This is, of course, just my opinion, and I will let you judge for yourself.

Virtual SAN has quickly become one of my favorite new technologies that I have worked with in my time at VMware, and I am definitely passionate about people using it to change the way they do business. I hope this helps in any engagements that you are planning as well as to prioritize and give a new perspective to how infrastructure is being designed.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

VMware vRealize Operations Python Adapter – A Hidden Treasure

By Jeremy Wheeler

Even more power comes out of VMware vRealize Operations when you enable the vRealize Operations Python Adapter, which adds intelligent monitoring and action capabilities.

To do this, execute the following steps:

Image 1:

JWheeler Image 1

  1. Select ‘Solutions’
  2. Select ‘VMware vSphere’
  3. Select ‘vCenter Python Adapter’

Add your vCenters, matching what you configured under the ‘vCenter Adapter’ section shown above item 3 in Image 1.

What Does This Do for Me?

When viewing the default ‘Recommendations’ dashboard, you might see something such as the following in your ‘Top Risk Alerts For Descendants’:

Image 2:

JWheeler Image 2

Selecting the alert presents another dialog to dig into, pointing at an object we should inspect:

Image 3:

JWheeler Image 3

After I select ‘View Details’, I am presented with the object details of the virtual machine ‘av_prov1’.

Image 4:

JWheeler Image 4

Without the Python Adapter configured, you will not see the ‘Set Memory for VM’ button; with it configured, the button is visible under the ‘Recommendations’ section.

Image 5:

JWheeler Image 5

After selecting ‘Set Memory for VM’, you will be presented with a new dialog (Image 5). Here we can see what the new memory recommendation would be and adjust or apply it. Additionally, if you want the change to happen now, you can select Power-Off/Snapshot. Without powering off the virtual machine, vRealize Operations will attempt to hot-add the additional memory if the guest OS supports it.

Image 6:

JWheeler Image 6

Once you select ‘Begin Action’ you will see the dialog in Image 6.


Jeremy Wheeler is an experienced senior consultant and architect for VMware’s Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, Perl, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware’s Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

VMware Certificate Authority, Part 3: My Favorite New Feature of vSphere 6.0 – The New!

By Jonathan McDonald

In the last blog, I left off right after the architecture discussion. To be honest, this was not because I wanted to, but because I couldn’t say anything more about it at the time. As of September 10, vSphere 6.0 Update 1 has been released with some fantastic new features in this area that make the configuration of customized certificates even easier. At this point what is shown is a tech preview; however, it shows the direction that development is headed in the future. It is amazing when things just work out, and with a little bit of love an incredibly complex area becomes much easier.

In this release, there is a UI that has been released for configuration of the Platform Services Controller. This new interface can be accessed by navigating to:

https://psc.domain.com/psc

When you first navigate here, a first-time setup screen may be shown:

JMcDonald 1

To set up the configuration, log in with a Single Sign-On administrator account, and the actual setup will run and complete in short order. Subsequently, when you log in, the screen is plain and similar to the login of the vSphere Web Client:

JMcDonald 2
After login, the interface appears as follows:

JMcDonald 3

As you can see, it provides a ton of new and great functionality, including a GUI for installation of certificates! I will not be talking about the other features except to say there is some pretty fantastic content in there, including the single sign-on configuration, as well as appliance-specific configurations. I only expect this to grow in the future, but it is definitely amazing for a first start.

Let’s dig in to the certificate stuff.

Certificate Store

When navigating to the Certificate Store link, it allows you to see all of the different certificate stores that exist on the VMware Certificate Authority System:

JMcDonald 4

This gives the option to view the details of all the different stores that are on the system, as well as to view, add, or remove the individual entries in each store:

JMcDonald 5
This is very useful when troubleshooting a configuration or for auditing/validating the different certificates that are trusted on the system.
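As a small example of that auditing angle, the sketch below pulls the certificate presented by a PSC endpoint and prints its validity window. The host name is a placeholder and the third-party Python ‘cryptography’ package is assumed to be available; it is a complement to, not a replacement for, the views in this UI.

```python
# Fetch the certificate a PSC endpoint presents and show its validity period.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("psc.domain.com", 443))   # placeholder host
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:   ", cert.subject.rfc4514_string())
print("Issuer:    ", cert.issuer.rfc4514_string())
print("Not before:", cert.not_valid_before)
print("Not after: ", cert.not_valid_after)
```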

Certificate Authority

Next up: the Certificate Authority option, which shows a view similar to the following:

JMcDonald 6

This area shows the Active, Revoked, and Expired certificates, as well as the Root Certificate, for the VMware Certificate Authority. It also provides the option to show the details of each certificate for auditing or review purposes:

JMcDonald 7

In addition to providing a review, the Root Certificate Tab also allows the additional functionality of replacing the root certificate:

JMcDonald 8

When you go here to do just that, you are prompted to input the new Certificate and Private Key:

JMcDonald 9

Once processed, the new certificate will show up in the list.

Certificate Management

Finally, and by far the most complex, is the Certificate Management screen. When you first click this, you will need to enter the Single Sign-On credentials for the server you want to connect to. In this case, it is the local Platform Services Controller:

JMcDonald 10

Once logged in, the interface looks as follows:

JMcDonald 11

Don’t worry, however: the user or server is not a one-time choice, and it can be changed by clicking the logout button. This interface allows the Machine Certificates and Solution User Certificates to be viewed, renewed, and changed as appropriate.

If the renew button is clicked, the certificate will be renewed from the VMware Certificate Authority.

JMcDonald 12

Once complete, the following message is presented:

JMcDonald Renewal

If the certificate is to be replaced, the process is similar to replacing the root certificate:

JMcDonald Root

Remember that the root certificate must be valid (or replaced first), or the installation will fail. Finally, the last screenshot I will show is the Solution Users screen:

JMcDonald Solutions

The notable difference here is that there is a Renew All button, which allows all of the solution user certificates to be renewed at once.

This new interface for certificates is the start of something amazing, and I can’t wait to see the continued development in the future. Although it is still a tech preview, from my own testing it seems to work very well. Of course, my environment is a pretty clean one with little environmental complexity; more complex environments can sometimes show unexpected results.

For further details on the exact steps you should take to replace the certificates (including all of the command-line steps, which are still available as per my last blog), see Replacing default certificates with CA signed SSL certificates in vSphere 6.0 (2111219).

I hope this blog series has been useful to you – it is definitely something I am passionate about, so I can write about it for hours! I will be writing next about my experiences at VMworld, and hopefully that will help address the most common concerns I heard from customers while there.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

 

VMware Certificate Authority – My Favorite New Feature of vSphere 6.0 – Part 1 – The History

By Jonathan McDonald

Anyone who knows me knows I have a particular passion (however misplaced that may be…) for the VMware Certificate Story. This multi-part blog series will discuss a bit of background on the certificate story, what you need to know about the architectural design of it in an environment and some new features you may not know about.

Let me start by saying this passion began several years ago when I was in Global Support Services, and I realized that too few people had an understanding of certificates. I am not even talking about certificates in the context of VMware, but in general. This was compounded when we released vSphere 5.1, because strict certificate checking was enabled.

A Bit of History

Although certificates were used for securing communication prior to vSphere 5.1, they were self-signed and there was no verification performed to ensure the certificate was valid. Therefore, for example, the certificate could be expired, or be used for multiple services at the same time (such as for the vCenter Server service and the vCenter Inventory service). This is obviously not a good practice, but it nevertheless was allowed.

When vCenter Single Sign-On was released with vSphere 5.1, it enforced strict certificate checking. This included not only the certificate uniqueness, but such information as the validity period of the certificates as well. Therefore, if any of the components were not using a unique and valid certificate, they would not be accepted when registering the different services as solutions in Single Sign-On. This would turn out to be a pretty large issue as upgrades would fail with very little detail as to why.

That being said, if all services in vCenter 5.1 and 5.5 have their certificate replaced, seven unique certificates are required:

  • vCenter Single Sign-On
  • vCenter Inventory Service
  • vCenter Server
  • vSphere Web Client
  • vSphere Web Client Log Browser
  • vCenter Update Manager
  • vCenter Orchestrator

The process to change the certificates was not straightforward and caused a significant amount of trouble amongst customers and global support services alike. This is when we raised it as a concern internally and helped to get a short-, medium-, and long-term plan in place to make it easier to replace certificates when required. The plan was as follows:

  • Short term – We ensured the KB articles relating to certificate replacement were accurate and easy to follow.
  • Medium term – We helped in the development of the SSL Certificate Automation Tool, which dramatically reduced the number of steps and made it fairly easy to replace the certificates.
  • Long term – We forced focus on the issue so a solution could be built into the product.

Prior to moving from VMware Support to Professional Services Engineering, we had released the tool and the larger plan was in place. The following are two blog posts I wrote about the tool:

http://blogs.vmware.com/kb/2013/04/introducing-the-vcenter-certificate-automation-tool-1-0.html

http://blogs.vmware.com/kb/2013/05/ssl-certificate-automation-tool-version-1-0-1.html

With vSphere 6.0 the larger long-term solution is finally coming to fruition with the introduction of the VMware Certificate Authority. It solves many of the problems that were seen.

Introduction to the VMware Certificate Authority

With vSphere 6.0, the base product installs an internal certificate authority (CA) called the VMware Certificate Authority. This is a part of the Platform Services Controller installation and has changed the architecture significantly for the better. No longer are the default certificates self-signed, rather, they are issued and signed by the VMware Certificate Authority.

This works in one of two ways:

  • VMware Certificate Authority acts as the root certificate authority. This is the default configuration and allows for an out-of-the-box configuration that is fully signed. All the clients need to do is to trust the root certificate and the communication is fully trusted.
  • VMware Certificate Authority acts as an Intermediate CA, integrating into an existing CA infrastructure in the environment. This allows for certificates to be issued that are already trusted throughout the environment.

In each of these two modes, it acts in the same way, granting certificates not only to the solutions connected to the management infrastructure, but to ESXi hosts as well. This occurs when the solution or host is added to vCenter Server. By default, communication is secure and trusted, and therefore everything on the management network that was previously difficult to secure is trusted.

Introduction to the VMware Endpoint Certificate Store

In addition to the certificate authority itself, vSphere 6 certificates are now managed and stored in a “wallet.” This wallet is called the VMware Endpoint Certificate Store (VECS). The benefit here is that certificates and private keys are no longer stored on disk in various locations; they are centrally managed in VECS on every vSphere node. This greatly simplifies the configuration of the environment, because you no longer need to update trusts when certificates are replaced; VECS does that automatically.

VECS is installed with every Platform Services Controller installation, including both embedded and external configurations.

JMcDonald Certificate Authority 1

The following different stores for certificates are used:

  • The Machine Certificates store contains the Machine SSL Certificate and private key, which is used for the Reverse Proxy, discussed next.
  • The Root CA Certificates store contains trusted root certificates and revocation lists, from any VMware Certificate Authority in the environment, or the third-party certificate authority being used. Solutions use this store to verify certificates.
  • The Solution User Certificates store contains the certificates and private keys for any solutions such as vCenter and the vSphere Web Client.

A single location for all certificates is a welcome change to the previous versions.

The Reverse Proxy – (Machine SSL Certificate)

Finally, before we get into the recommended architectures, the Reverse Proxy is the last thing I want to introduce. The change here addresses one of the biggest problems seen in previous versions of vCenter: there are many different services installed that require SSL communication. To be honest, the real challenge is not the number of services, but rather trying to get signed certificates for all of them from the SSL administrator for the same host.

To combat this, the solution users were consolidated in vCenter 6.0 down to four: vpxd, vpxd-extension, vsphere-webclient, and machine. In addition, where possible, the various listening ports on the vCenter Server have been replaced with a single Reverse Web Proxy for communication. The Reverse Web Proxy uses the newly created Machine SSL Certificate to secure communication. Therefore, all communication to the different services is routed through the reverse proxy to the appropriate service based on the type of request. This can be seen in the figure below.

JMcDonald Certificate Authority 2

It is still possible to change the certificates of the solution users behind it; however, these are only used internally and do not necessarily need to be changed. More on this in the next part of this series.

With all of this background detail out of the way, I think it is a good place to pause. Part 2 of this article will discuss the architecture decision points and some configuration details. On another note, I am actually going to be off to VMworld very soon and will be manning the VMware booth, as well as speaking at several sessions! It is unlikely I will get part 2 done before then, but if you have any questions look for me (downstairs at the Solution Exchange, in the VMware Booth, at the Technology Advisor station), and we can talk more on this and other topics with the other members of my team!


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

Perform Proactive Load Testing to Build a Successful Environment

By Hans Bader

So, your company has bought a new set of hardware, referenced the latest white papers and reference architectures, and now will get the virtual machine (VM) densities promised, right? Well, maybe not. White papers and reference architectures are great starting points for designing and building your environment, but unless you are running the same workloads, your mileage may vary. The key to knowing what your infrastructure will support is to proactively perform load testing – before going into production.

Successful load testing is a considerable amount of work; it involves creating synthetic workloads, and understanding the metrics and the impact on the end-user experience. Holistic load testing will bring in different teams: storage, networking, compute, application development, software distribution and virtual infrastructure. Each of these teams has a stake in ensuring a good end-user experience.

Manage, Understand, and Set Expectations

Understand that the performance of a virtual desktop is all about the performance the end user (your customer) is seeing and perceiving. Gathering all the metrics from VMware vCenter, PCoIP logs, and storage IOPS is important, but ultimately it is the end user’s perception and experience that matter most. It is easy for an administrator to say, “The VM has 2 GB out of 4 GB of memory free,” but if the user is experiencing poor performance due to network contention, the end user is still unhappy.

You must set the proper expectations and understand what you can test. Generating CPU and memory load inside the guest is relatively easy with tools such as Iometer. Iometer does a great job of generating compute load, but does not provide any user-experience metrics. With remote desktops, the challenge becomes testing PCoIP and client-desktop communication.

Have Your Plan in Place

Have your testing methodology, objectives and metrics documented in advance. It is important to develop your test design before starting the actual load testing process. Think it through completely; map the information flow for the entire load test process, entry points and process dependencies. If you are going to create a View pool of 1,000 desktops, will the LAN segment where you will be creating the desktops have enough IP addresses available? Do you know that anti-virus updates are a known pain point? Include these in your testing scenarios. Also include software updates if applicable.
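As a quick illustration of the IP-address question, here is a tiny Python check using only the standard library; the subnet, the number of reserved addresses, and the desktop count are assumptions for the example.

```python
# Will this LAN segment hold 1,000 desktops? A back-of-the-envelope check.
import ipaddress

segment = ipaddress.ip_network("10.20.0.0/22")   # hypothetical desktop LAN segment
reserved = 10                                    # gateway, infrastructure services, etc.
usable = segment.num_addresses - 2 - reserved    # minus network and broadcast addresses

desktops = 1000
print(f"Usable addresses: {usable}, desktops planned: {desktops}")
print("Enough headroom" if usable >= desktops else "Not enough addresses")
```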

Understand what is going to be tested and how the testing will impact end users. The end-user experience with virtual machines is more than just performance graphs of the VM in your vCenter inventory. Are you testing a local install of Microsoft Word, or a larger client-server based application? Many of the applications running in a virtual desktop are dependent on systems (databases, web services, etc.) that exist outside the desktop. Do you have an information flow diagram that shows all the systems an application may interact with? Do you know where the choke points are? Adequate desktop resources are not sufficient if you are load testing 1,000 desktops running a CRM application – but the environment can only scale to 750 users.

Your End Users Can Help You

During testing do not rely solely upon metrics: your testing must include “eyes on the glass.” Have actual users run through the test scenarios to understand how—as the load increases—the user experience may be impacted. An end user can establish what a good baseline is, what acceptable performance is, and when the end-user experience starts to degrade. These subjective user perceptions can be roughly mapped to network metrics, storage latency or memory usage.

Documented Test Plans

Leverage existing test plans where possible. Many times there are existing test plans for applications that have been developed in-house. These are company-specific and require domain subject matter experts to create and execute. Utilizing these people can decrease the time and effort required to create and document your current test plans.

Test What is Real

This very important concept is often overlooked. Don’t simply consider CPU and memory consumption of a virtual machine. Running CPU Busy and generating 100 percent CPU usage inside a VM is not realistic. To generate accurate user experience loads you must use appropriate tools, such as:

Proper load testing of your new environment means testing both your architectural and physical designs. It is important to understand how the user load may impact your initial physical design. The number of hosts per cluster, desktops deployed per datastore, and network connectivity all come into play. You may find you have been overly conservative in your resource assumptions; in that case you can change your cluster sizing and obtain greater desktop densities.

During your load testing, use the time to understand the impact of typical administrative tasks while the hosts are under load. For example, how long does it take to spin up a new pool of 500 desktops when you are running a load test with 1,000 desktops? Or how long does it take to put a host into maintenance mode when it has 80 desktops running? The outcomes of these ancillary tests may change the way you administer your environment.

Expose the Weak Links

What if, during your load testing, you break something? Perhaps you’ll run out of DHCP addresses, the KMS server and your hosts start swapping, LUNs run out of space, and VMs crash. These events should not be considered failures, but rather successful tests. They show you where to focus attention prior to the next load test, so real users do not experience these problems during live operations. Yes, load testing can be a lot of work and take a considerable amount of effort to do effectively, but the end results are worth it: end users and administrators are happy.

Plan for Remediation

Exposing a weak link during load testing is not a failure, but a positive result. You should ensure your testing plan has time built in to address any weaknesses that are uncovered, and to test again afterward. The amount of time to add depends on what broke and at what load. If load testing early on with fewer users exposed a lack of DHCP addresses, that is a relatively easy fix to a DHCP scope. On the other hand, if testing at full predicted load uncovered a storage performance bottleneck, the time to procure, install, and configure additional storage could be much longer.

Testing Scenarios

Your first fully automated test should be a single system test—a single test to ensure your test plan runs through to completion. With no resource contention and no over-commitment on the hosts, this is your baseline. This should also be correlated with an actual user single system test, ensuring the user experience is what is expected.

For the second test, ramp up to 50 percent of what the calculated capacity is. This gives enough wiggle room so you can determine if your design assumptions are accurate. Do you have enough IP addresses? Is storage able to keep up? How are the memory stats?

Run a third test at 100 percent calculated capacity. This is where getting real users into the system is critical. How long does it take to log in? Are the test scenarios within the acceptable parameters? Is the user experience acceptable? Have you met all your design criteria and business requirements?

Finally, a fourth test at more than 100 percent expected capacity should be run. Add more desktops, start a full anti-virus scan, perform a software update. No matter how well we design, we always have to plan for the worst-case scenarios. The unexpected removal of a host from a cluster dramatically impacts capacity. Put a host in maintenance mode or reboot it without putting it in maintenance mode. How does your environment perform under these extreme conditions?
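If it helps to see the ramp written down, here is one way to parameterize the four stages described above. The calculated-capacity figure and the 120 percent over-capacity factor are assumptions for illustration; substitute the numbers from your own design.

```python
# Parameterize the load-test ramp from the calculated capacity of the design.
calculated_capacity = 1000   # desktops the design is expected to support (assumed)

stages = [
    ("Single-system baseline", 1),
    ("50% of calculated capacity", int(calculated_capacity * 0.50)),
    ("100% of calculated capacity", calculated_capacity),
    ("Beyond capacity (worst case)", int(calculated_capacity * 1.20)),
]

for name, desktops in stages:
    print(f"{name}: {desktops} desktops")
```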

“We must contemplate some extremely unpleasant possibilities, just because we want to avoid them.”

– Albert Wohlstetter, American nuclear strategist, 1960

For more information, be sure to check out the following VMware Education Courses:


Hans Bader is a Consulting Architect with VMware EUC. Hans has over 20 years of IT experience and joined VMware in 2009. With a focus on helping organizations be operationally ready, he works with customers to avoid common mistakes. He is a strong advocate for proactive load testing of environments before allowing users access. Hans has won numerous consulting awards within VMware.

vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 2

By Jonathan McDonald

In Part 1, the different deployment modes for vCenter and Enhanced Linked Mode were discussed. In Part 2, we finish this discussion by addressing different platforms, high availability, and recommended deployment configurations for vCenter.

Mixed Platforms

Prior to vSphere 6.0, there was no interoperability between vCenter for Windows and the vCenter Server Linux Appliance. After a platform was chosen, a full reinstall would be required to change to the other platform. The vCenter Appliance was also limited in features and functionality.

With vSphere 6.0, they are functionally the same, and all features are available in either deployment mode. With Enhanced Linked Mode both versions of vCenter are interchangeable. This allows you to mix vCenter for Windows and vCenter Server Appliance configurations.

The following is an example of a mixed platform environment:

JMcDonald pt 2 (1)

This mixed platform environment provides flexibility that has never been possible with the vCenter Platform.

As with any environment, the way it is configured is based on the size of the environment (including expected growth) and the need for high availability. These factors will generally dictate the best configuration for the Platform Services Controller (PSC).

High Availability

Providing high availability protection to the Platform Services Controller adds an additional level of overhead to the configuration. When using an embedded Platform Services Controller, protection is provided in the same way that vCenter is protected, as it is all a part of the same system.

Availability of vCenter is critical due to the number of solutions requiring continuous connectivity, as well as to ensure the environment can be managed at all times. Whether it is a standalone vCenter Server, or embedded with the Platform Services Controller, it should run in a highly available configuration to avoid extended periods of downtime.

Several methods can be used to provide higher availability for the vCenter Server system. The decision depends on whether maximum downtime can be tolerated, failover automation is required, and if budget is available for software components.

The following are the methods available for protecting the vCenter Server system and the vCenter Server Appliance when running in embedded mode:

  • Automated protection using vSphere HA – protects the vCenter Server system: Yes; protects the vCenter Server Appliance: Yes
  • Manual configuration and manual failover (for example, using a cold standby) – protects the vCenter Server system: Yes; protects the vCenter Server Appliance: Yes
  • Automated protection using Microsoft Clustering Services (MSCS) – protects the vCenter Server system: Yes; protects the vCenter Server Appliance: No

If high availability is required for an external Platform Services Controller, protection is provided by adding a secondary backup Platform Services Controller, and placing them both behind a load balancer.

The load balancer must support Multiple TCP Port Balancing, HTTPS Load Balancing, and Sticky Sessions. VMware has currently tested several load balancers, including F5 and NetScaler; however, VMware does not directly support these products. See the vendor documentation regarding configuration details for any load balancer used.

Here is an example of this configuration using a primary and a backup node.

JMcDonald pt 2 (2)

With vCenter 6.0, connectivity to the Platform Services Controller is stateful, and the load balancer is only used for its failover ability. Active-active connectivity to both nodes at the same time is therefore not recommended, or you risk corruption of the data between the nodes.

Note: Although it is possible to have more than one backup node, it is normally a waste of resources and adds a level of complexity to the configuration for little gain. Unless there is an expectation that more than a single node could fail at the same time, there is very little benefit to configuring a tertiary backup node.

Scalability Limitations

Prior to deciding the configuration for vCenter, the following are the scalability limitations for the different configurations. These can have an impact on the end design.

  • Number of Platform Services Controllers per domain – 8
  • Maximum PSCs per vSphere site, behind a single load balancer – 4
  • Maximum objects within a vSphere domain (users, groups, solution users) – 1,000,000
  • Maximum number of VMware solutions connected to a single PSC – 4
  • Maximum number of VMware products/solutions per vSphere domain – 10

Deployment Recommendations

Now that you understand the basic configuration details for vCenter and the Platform Services Controller, you can put it all together in an architecture design. The choice of a deployment architecture can be a complex task depending on the size of the environment.

The following are some recommendations for deployment. But please note that VMware recommends virtualizing all the vCenter components because you gain the benefits of vSphere features such as VMware HA. These recommendations are provided for virtualized systems; physical systems need to be protected appropriately.

  • For sites that will not use Enhanced Linked Mode, use an embedded Platform Services Controller.
    • This provides simplicity in the environment, including a single pane-of-glass view of all servers while at the same time reducing the administrative overhead of configuring the environment for availability.
    • High availability is provided by VMware HA. The failure domain is limited to a single vCenter Server, as there is no dependency on external component connectivity to the Platform Services Controller.
  • For sites that will use Enhanced Linked Mode use external Platform Service Controllers.
    • This configuration uses external Platform Services controllers and load balancers (recommended for high availability). The number of controllers depends on the size of the environment:
      • If there are two to four VMware solutions – You will only need a single Platform Services Controller if the configuration is not designed for high availability; two Platform Services Controllers will be required for high availability behind a single load balancer.
      • If there are four to eight VMware solutions – Two Platform Services Controllers must be linked together if the configuration is not designed for high availability; four will be required for a high-availability configuration behind two load balancers (two behind each load balancer).
      • If there are eight to ten VMware solutions – Three Platform Services Controllers must be linked together if the configuration is not designed for high availability; six will be required for a high-availability configuration behind three load balancers (two behind each load balancer).
    • High availability is provided by having multiple Platform Services Controllers and a load balancer to provide failure protection. In addition to this, all components are still protected by VMware HA. This will limit the failure implications of having a single Platform Services Controller, assuming they are running on different ESXi hosts.

With these deployment recommendations, the process of choosing a design for vCenter and the Platform Services Controller will hopefully be dramatically simplified.
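Purely as an illustration, the sizing bullets above can be captured in a small helper. The thresholds simply mirror this post and should not be read as an official sizing tool.

```python
# Sketch: map solution count and HA requirement to PSC and load-balancer counts.
def psc_count(solutions, high_availability):
    """Return (platform_services_controllers, load_balancers) for one site."""
    if solutions <= 4:
        base, load_balancers = 1, 1
    elif solutions <= 8:
        base, load_balancers = 2, 2
    elif solutions <= 10:
        base, load_balancers = 3, 3
    else:
        raise ValueError("More than 10 solutions exceeds the per-domain maximum")
    if high_availability:
        return base * 2, load_balancers   # two PSCs behind each load balancer
    return base, 0                        # no load balancer needed without HA

print(psc_count(6, high_availability=True))   # -> (4, 2)
```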

This concludes this blog series. I hope this information has been useful and that it demystifies the new vCenter architecture.

 


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1

By Jonathan McDonald

As a member of VMware Global Technology and Professional Services, I get the privilege of being able to work with products prior to their release. This not only gets me familiar with new changes, but also allows me to question—and figure out—how a new product will change the architecture in a datacenter.

Recently, I have been working on exactly that with vCenter 6.0 because of all the upcoming changes in the new release. One of my favorite things about vSphere 6.0 is the simplification of vCenter and associated services. Previously, each individual major service (vCenter, Single Sign-On, Inventory Service, the vSphere Web Client, Auto Deploy, etc.) was installed individually. This added complexity and uncertainty in determining the best way to architect the environment.

With the release of vSphere 6.0, vCenter Server installation and configuration has been dramatically simplified. The installation of vCenter now consists of only two components that provide all services for the virtual datacenter:

  • Platform Services Controller – This provides infrastructure services for the datacenter. The Platform Services Controller contains these services:
    • vCenter Single Sign-On
    • License Service
    • Lookup Service
    • VMware Directory Service
    • VMware Certificate Authority
  • vCenter Services – The vCenter Server group of services provides the remainder of the vCenter Server functionality, which includes:
    • vCenter Server
    • vSphere Web Client
    • vCenter Inventory Service
    • vSphere Auto Deploy
    • vSphere ESXi Dump Collector
    • vSphere Syslog Collector (Microsoft Windows)/VMware Syslog Service (Appliance)

So, when deploying vSphere 6.0 you need to understand the implications of these changes to properly architect the environment, whether it is a fresh installation, or an upgrade. This is a dramatic change from previous releases, and one that is going to be a source of many discussions.

To help prevent confusion, my colleagues in VMware Global Support, VMware Engineering, and I have developed guidance on supported architectures and deployment modes. This two-part blog series will discuss how to properly architect and deploy vCenter 6.0.

vCenter Deployment Modes

There are two basic architectures that can be used when deploying vSphere 6.0:

  • vCenter Server with an Embedded Platform Services Controller – This mode installs all services on the same virtual machine or physical server as vCenter Server. The configuration looks like this:

JMcDonald 1

This is ideal for small environments, or if simplicity and reduced resource utilization are key factors for the environment.

  • vCenter Server with an External Platform Services Controller – This mode installs the platform services on a system that is separate from where vCenter services are installed. Installing the platform services is a prerequisite for installing vCenter. The configuration looks as follows:

JMcDonald 2

 

This is ideal for larger environments, where there are multiple vCenter servers, but you want a single pane-of-glass for the site.

Choosing your architecture is critical, because once the model is chosen, it is difficult to change, and configuration limits could inhibit the scalability of the environment.

Enhanced Linked Mode

As a result of these architectural changes, Platform Services Controllers can be linked together. This enables a single pane-of-glass view of any vCenter server that has been configured to use the Platform Services Controller domain. This feature is called Enhanced Linked Mode and is a replacement for Linked Mode, which was a construct that could only be used with vCenter for Windows. The recommended configuration when using Enhanced Linked Mode is to use an external platform services controller.

Note: Although using embedded Platform Services Controllers and enabling Enhanced Linked Mode can technically be done, it is not a recommended configuration. See List of Recommended topologies for vSphere 6.0 (2108548) for further details.

The following are some recommended options on how—and how not—to configure Enhanced Linked Mode.

  • Enhanced Linked Mode with an External Platform Services Controller with No High Availability (Recommended)

In this case the Platform Services Controller is configured on a separate virtual machine, and then the vCenter servers are joined to that domain, providing the Enhanced Linked Mode functionality. The configuration would look this way:

JMcDonald 3

 

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources consumed by the combined services
  • More vCenter instances are allowed
  • Single pane-of-glass management of the environment

The drawbacks include:

  • Network connectivity loss between vCenter and the Platform Services Controller can cause outages of services
  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage
  • An outage on the Platform Services Controller will cause an outage for all vCenter servers connected to it, as high availability is not included in this design.
  • Enhanced Linked Mode with an External Platform Services Controller with High Availability (Recommended)

In this case the Platform Services Controllers are configured on separate virtual machines and configured behind a load balancer; this provides high availability to the configuration. The vCenter servers are then joined to that domain using the shared Load Balancer IP address, which provides the Enhanced Linked Mode functionality, but is resilient to failures. This configuration looks like the following:

JMcDonald 4

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources are consumed by the combined services
  • More vCenter instances are allowed
  • The Platform Services Controller configuration is highly available

The drawbacks include:

  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage
  • Enhanced Linked Mode with Embedded Platform Services Controllers (Not Recommended)

In this case vCenter is installed as an embedded configuration on the first server. Subsequent installations are configured in embedded mode, but joined to an existing Single Sign-On domain.

Linking embedded Platform Services Controllers is possible, but is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

The configuration looks like this:

JMcDonald 5

 

  • Combination Deployments (Not Recommended)

In this case there is a combination of embedded and external Platform Services Controller architectures.

Linking an embedded Platform Services Controller and an external Platform Services Controller is possible, but again, this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of one such scenario:

JMcDonald 6

  • Enhanced Linked Mode Using Only an Embedded Platform Services Controller (Not Recommended)

In this case there is an embedded Platform Services Controller with vCenter Server linked to an external standalone vCenter Server.

Linking a second vCenter Server to an existing embedded vCenter Server and Platform Services Controller is possible, but this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of this scenario:

JMcDonald 7

 

Stay tuned for Part 2 of this blog post where we will discuss the different platforms for vCenter, high availability and different deployment recommendations.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

Begin Your Journey to vRealize Operations Manager

By Brent Douglas

In early December, VMware launched an exciting new array of updates to its products. For some products, this update was a refinement of already widely used functionality and capabilities. For other products, the December release marked a new direction and new path forward. One such product is vRealize Operations Manager.

With VMware’s acquisition of Integrien’s patented real-time performance analytics solution in August 2010, VMware added a powerful tool to its arsenal of virtualization management solutions. This tool, vCenter Operations Manager, enabled customers to begin managing beyond “what my environment is doing now” and into “what my environment will be doing in 30 minutes—and beyond?” In essence, with vCenter Operations Manager, customers gained a tool that could predict―and ultimately prevent―the phone from ringing.

Since August 2010, vCenter Operations Manager received bug fixes, regular updates, and new features and capabilities. Even with those, the VMware product designers and engineers knew they could produce a new version of the product that captured and extended the capabilities of vCenter Operations Manager. On December 9, VMware released that tool—vRealize Operations Manager.

In many respects, vRealize Operations Manager is a new product from the ground up. Due to the differences between vCenter Operations Manager v5.x and vRealize Operations Manager v6.x, current users of vCenter Operations Manager cannot simply apply a v6.x update to existing environments. For customers with little historical data or default policies, the best course forward may be to just install and begin using vRealize Operations Manager. For other customers, with deep historical data and advanced configurations/policies, the best path forward is likely a migration of existing data and configuration information from their vCenter Operations Manager v5.x instance.

A full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide. This guide also outlines many common vCenter Operations Manager scenarios and suggests migration paths to vRealize Operations Manager.

Important note: In order to migrate data and/or configuration information from an existing vCenter Operations Manager instance, the instance must be at least v5.8.1, and preferably v5.8.3 or higher.

Question 1: Should any portion of my existing vCenter Operations Manager instance(s) be migrated?

VMware believes you are a candidate for a full migration (data and configuration information) if you can answer “yes” to any one of the following:

  • Have you operationalized capacity planning in vCenter Operations Manager 5.8.x?
    • Actively reclaiming waste
    • Reallocating resources
  • Have you operationalized vCenter Operations Manager for performance and health monitoring?
  • Do you act upon the performance alerts that are generated by vCenter Operations Manager?
  • Is any aspect of data in vCenter Operations Manager feeding another production system?
    • Raw metrics, alerts, reports, emails, etc
  • Do you have a company policy to retain monitoring data?
    • Does your current vCenter Operations Manager instance fall into this category (e.g., it’s running in TEST)?

VMware believes you are a candidate for a configuration-only migration if you answer “yes” to any one of the following:

  • Are you happy with your current configuration?
    • Dashboards
    • Policies
    • Users
    • Super Metrics

— AND —

  • You do not need to save the data you have collected
    • Running in a test environment or proof-of-concept you have refined and find useful
    • Not really using the data yet

If you answered “no” to these questions, you should install and try vRealize Operations Manager today. You are ready to go with a fresh install without migrating any existing data or configuration information.
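Reduced to its essentials, the decision flow above looks something like the following sketch; the two boolean flags are simplified stand-ins for the full checklists.

```python
# Sketch of the migration decision flow; flags summarize the checklists above.
def migration_path(needs_historical_data, happy_with_configuration):
    if needs_historical_data:
        return "Full migration (data + configuration)"
    if happy_with_configuration:
        return "Configuration-only migration"
    return "Fresh install of vRealize Operations Manager"

print(migration_path(needs_historical_data=False, happy_with_configuration=True))
```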

Question 2: If some portion of an existing vCenter Operations Manager instance is to be migrated, who should perform the migration?

vRealize Operations Manager is capable of migrating existing data and configuration information from an existing vCenter Operations Manager instance. However, complicating factors may require an in-depth look by a VMware services professional to ensure a successful migration. The following table outlines some of the complicating factors and suggests paths forward.

Consulting_blog_table_012815

 

That’s it! With a bit of upfront planning you can be well on your journey to vRealize Operations Manager! The information above addresses the “big hitters” for planning a migration to vRealize Operations Manager from vCenter Operations Manager. As mentioned, a full discussion of migration planning and procedures is available in the vRealize Operations Manager Customization and Administration Guide.

On a personal note, I am excited about vRealize Operations Manager. Although vCenter Operations Manager served VMware and its customers well for many years, it is time for something new and exciting. I encourage you to try vRealize Operations Manager today. This post represents information produced in collaboration with David Moore, VMware Professional Services, and Dave Overbeek, VMware Technical Marketing team. I thank them for their contributions and continued focus on VMware and its customers.


Brent Douglas is a VMware Cloud Technical Solutions Architect.

DevOps and Performance Management

Michael_Francis

By Michael Francis

Continuing on from Ahmed’s recent blog on DevOps, I thought I would share an experience I had with a customer regarding performance management for development teams.

Background

I was working with an organization that is essentially an independent software vendor (ISV) in a specific vertical; their business is writing software in the gambling sector and, in some cases, hosting that software to deliver services to their partners. It is a very large revenue stream for them, and their development expertise and software functionality are their differentiation.

Due to historical stability issues and a lack of trust between the application development teams and the infrastructure team, the organization had brought in a new VP of Infrastructure and an Infrastructure Chief Architect a number of years previously. They focused on changing the process and culture, and also on aligning the people. They took our technology and implemented an architecture that aligned with our best practices, with the primary aim of delivering a stable, predictable platform.

This transformation of people/process and technology provided a stable infrastructure platform that soon improved the trust and credibility of the infrastructure team with the applications development teams for their test and development requirements.

Challenges

The applications team in this organization, as you would expect, carries significant influence. Even though the applications team had come to trust virtual infrastructure for test and development, they still had reservations about a private cloud model for production. Their applications had significant demands on infrastructure and needed to guarantee transactions per second rates committed across multiple databases; any latency could cause significant processing issues, and therefore, loss of revenue. Visibility across the stack was a concern.

The applications team responsible for this critical, in-house developed application had designed the application to instrument its own performance by writing out flat files on each server with application-specific information about transaction commit times and other performance data.

Setting aside the question of complete stack visibility, the applications team responsible for this application was challenged with how to monitor this custom, distributed application's performance data from a central point. The team also wanted some means of understanding what normal levels looked like for that data, as well as a way to gain insight into the stack to see where any abnormality originated.

Because of the trust that had developed, the applications team engaged the infrastructure team to determine whether it had any capability to support their performance monitoring needs.

Solution

The infrastructure team was just beginning to review its needs for performance and capacity management tools for its private cloud. The team had implemented a proof of concept of vCenter Operations Manager and found its visualizations useful, so they asked us to work with the applications team to determine whether it could ingest this custom performance information.

We started by educating them on the concept of a dynamic learning monitoring system. It had to allow hard thresholds to be set but, more importantly, also determine the spectrum of normal behavior for an application, both as a whole and for each of its individual components, based on data pattern prediction algorithms.

We discussed the benefits of a data analytics system that could take a stream of data and, irrespective of the data source, create a monitored object from it. The data analytics system had to be able to assign the data elements in the stream to metrics, start determining normality, provide a comparison to any hard thresholds, and provide the visualization.
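To make the idea of a "spectrum of normality" more concrete, here is a deliberately simplified sketch. vCenter Operations Manager's analytics are far more sophisticated (and proprietary); this only illustrates the general shape of a dynamic threshold combined with an optional hard threshold.

    # Simplified illustration only -- not the product's actual algorithm.
    from statistics import mean, stdev

    def dynamic_threshold(samples, k=3):
        """Derive a (lower, upper) band of 'normal' values from recent samples."""
        mu, sigma = mean(samples), stdev(samples)
        return mu - k * sigma, mu + k * sigma

    def is_abnormal(value, samples, hard_max=None):
        """Flag a value that breaches either the learned band or a hard threshold."""
        lower, upper = dynamic_threshold(samples)
        return (value < lower or value > upper) or (hard_max is not None and value > hard_max)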

The applications team was keen to investigate and so our proof-of-concept expanded to include the custom performance data from this in-house developed application.

The Outcome

The screenshot below shows VMware vCenter Operations Manager. It shows the Resource Type screen, which allows us to define a custom Resource Type to represent both the application-specific metrics and the application itself.

[Screenshot: Defining a custom Resource Type in vCenter Operations Manager]

To get the data into vCenter Operations Manager, we simply wrote a script that opened the flat file on each of the servers participating in the application, read it, and then posted the information into vCenter Operations Manager using its HTTP Post adapter. This adapter provides the ability to post data from any endpoint that needs to be monitored, which makes vCenter Operations Manager a very flexible tool.
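The original was just a short script; the Python sketch below illustrates the same idea. The servlet path, credentials, and body layout shown are assumptions made for illustration; consult the HTTP Post adapter documentation for the exact format your version expects.

    # Sketch: ship one flat-file sample to vCenter Operations Manager via HTTP POST.
    # The URL path and body layout below are assumptions, not a documented contract.
    import time
    import requests

    VCOPS_URL = "https://vcops.example.com/HttpPostAdapter/OpenAPIServlet"  # assumed path
    AUTH = ("admin", "VMware1!")  # placeholder credentials

    def post_metric(resource_name, metric_name, value):
        # Assumed layout: one line identifying the resource and resource kind,
        # followed by metric,timestamp-in-ms,value lines.
        body = (
            f"{resource_name},Http Post,vbs_vcops_httpost\n"
            f"{metric_name},{int(time.time() * 1000)},{value}\n"
        )
        resp = requests.post(VCOPS_URL, data=body, auth=AUTH, verify=False)
        resp.raise_for_status()

    # Example: forward one commit-latency sample read from the flat file.
    post_metric("GamingApp-Server01", "txn|commit_time_ms", 42.7)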

In this instance we posted into vCenter Operations Manager a combination of application-specific counters and Windows Management Instrumentation (WMI) counters from the Windows operating system platform the application runs on. This is shown in the following screenshot.

[Screenshot: Application-specific and WMI counters for the application in vCenter Operations Manager]

You can see the Resource Kind is something I called vbs_vcops_httpost, which is not a ‘standard’ monitored object in vCenter Operations Manager; the product created it based on the data stream I was pumping into it. I just needed to tell vCenter Operations Manager which metrics it should monitor from the data stream, which you can see in the following screenshot.

[Screenshot: Selecting which metrics from the data stream vCenter Operations Manager should monitor]

For each attribute (metric) we can configure whether hard thresholds are used and whether vCenter Operations Manager should use that metric as an indicator of normality. We refer to these learned ranges of normality as dynamic thresholds.

Once we have identified which metrics we want to track, vCenter Operations Manager can build a spectrum of normality for each one and let it influence the health of the application, which we can then visualize. The screenshot below shows an example of a simple visualization: a round-trip time metric plotted over time for the applications team, alongside a standard Windows WMI performance counter for CPU.

[Screenshot: Round-trip time metric plotted over time alongside a Windows WMI CPU counter]
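For completeness, the standard Windows counters shown alongside the application metrics can be sampled with built-in tooling before being posted through the same adapter. The snippet below is one simple way to do it; the parsing is deliberately naive and assumes typeperf's default CSV output.

    # Sample the CPU counter with Windows' built-in typeperf tool (one sample).
    # Parsing assumes the default CSV output layout; adjust for your locale/format.
    import subprocess

    def sample_cpu_percent():
        out = subprocess.check_output(
            ["typeperf", r"\Processor(_Total)\% Processor Time", "-sc", "1"],
            text=True,
        )
        # Data rows are quoted CSV; the last one holds the sampled value.
        last_row = [line for line in out.splitlines() if line.startswith('"')][-1]
        return float(last_row.split(",")[-1].strip('"'))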

By introducing the capability to monitor custom in-house developed applications using a combination of application-specific custom metrics and standard guest operating system and platform metrics, the DevOps team now has visibility into the health of the whole stack. This enables them to see the impact of code changes on different layers of the stack, comparing the before and after against the spectrum of normality for key metrics.

From a cultural perspective, this capability brought the application development team and the infrastructure team onto the same page; both teams now gain an appreciation of any performance issue through a common view.

In my team we have developed services that enable our customers to adopt and mature a performance and capacity management capability for the hybrid cloud, which, in my view, is one of the most challenging aspects of hybrid cloud adoption.

 


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

vCloud Automation Center Disaster Recovery

By Gary Blake

Prior to the release of vCloud Automation Center (vCAC) 5.2, vCAC had no awareness or understanding of virtual machines protected by vCenter Site Recovery Manager. With the introduction of vCAC 5.2, VMware now provides enhanced integration so vCAC can correctly discover the relationship between the primary and recovery virtual machines.

These enhancements may look like minor modifications, but they are fundamental to ensuring vCenter Site Recovery Manager (SRM) can be successfully implemented to deliver disaster recovery of virtual machines managed by vCAC.

[Diagram: vCloud Automation Center and vCenter Site Recovery Manager integration]

 

So What’s Changed?

When a virtual machine is protected by SRM, a Managed Object Reference ID (MoRefID) is created against the virtual machine record in the vCenter Server database.

Prior to SRM v5.5, a single virtual machine property called “ManagedBy:SRM,placeholderVM” was created on the placeholder virtual machine object in the recovery site vCenter Server database. vCAC did not inspect this value, so it would attempt to add a duplicate entry to its own database. With the introduction of vCAC 5.2, when a data collection is run, vCAC ignores virtual machines with this value set, avoiding the duplicate entry.

In addition, SRM v5.5 introduced a second managed-by property value, “ManagedBy:SRM,testVM,” that is placed on the virtual machine's vCenter Server database record. When a test recovery is performed and a data collection is run at the recovery site, vCAC inspects this value and ignores virtual machines that have it set, again avoiding a duplicate entry in the vCAC database.
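Conceptually, the data collection behavior described above amounts to filtering on those managed-by values. The sketch below is purely an illustration of that logic, not vCAC's actual implementation.

    # Conceptual illustration only -- not vCAC code. Skip vCenter VM records whose
    # managed-by property marks them as SRM placeholder or test-recovery objects.
    SRM_MANAGED_BY_VALUES = {
        "ManagedBy:SRM,placeholderVM",  # placeholder VM at the recovery site
        "ManagedBy:SRM,testVM",         # VM instantiated during a test recovery
    }

    def vms_to_import(discovered_vms):
        """Keep only the VM records a data collection should add to the database."""
        return [
            vm for vm in discovered_vms
            if vm.get("managedBy") not in SRM_MANAGED_BY_VALUES
        ]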

With the changes highlighted above, SRM v5.5 and later—and vCAC 5.2 and later—can now be implemented in tandem with full awareness of each other. However, one limitation still remains when moving a virtual machine into recovery or re-protect mode: vCAC does not properly recognize the move. To successfully perform these machine operations and continue managing the machine lifecycle, you must use the Change Reservation operation – which is still a manual task.

Introducing the CloudClient

While investigating the enhancements between SRM and vCAC just described, and after uncovering the need for a manual change of reservation, I spent some time with our Cloud Solution Engineering team discussing how to automate this step. They were already developing a tool called CloudClient, which is essentially a wrapper around our application programming interfaces that allows simple command line-driven steps to be performed, and they suggested it could be extended to support this use case.

Conclusion

In order to achieve fully functioning integration between vCloud Automation Center (5.2 or later) and vCenter Site Recovery Manager, adhere to the following design decisions:

  • Configure vCloud Automation Center with endpoints for both the protected and recovery sites.
  • Perform a manual or automated change of reservation following a vCenter Site Recovery Manager planned migration or disaster recovery.

[Diagram: vCloud Automation Center and vCenter Site Recovery Manager disaster recovery design]

 

Frequently Asked Questions

Q. When I fail over my virtual machines from the protected site to the recovery site, what happens if I request the built-in vCAC machine operations?

A. Once you have performed a Planned Migration or a Disaster Recovery process, as long as you have changed the reservation within the vCAC Admin UI for the virtual machine, machine operations will be performed in the normal way on the recovered virtual machine.

Q. What happens if I do not perform the Change Reservation step for a virtual machine once I’ve completed a Planned Migration or Disaster Recovery process, and I then attempt to perform the built-in vCAC machine operations on that virtual machine?

A. Depending on which tasks you perform, some operations are blocked by vCAC and you see an error message in the log such as “The method is disabled by ‘com.vmware.vcDR’”; other actions look like they are being processed, but nothing happens. A few actions are processed regardless of the virtual machine’s failover state: Change Lease and Expiration Reminder.

Q. What happens if I perform a re-provision action on a virtual machine that is currently in a Planned Migration or Disaster Recovery state?

A. vCAC will re-provision the virtual machine in the normal manner, and the hostname and IP address (if assigned through vCAC) will be maintained. However, the SRM recovery plan will now fail if you attempt to re-protect the virtual machine back to the protected site, as the original managed object has been replaced. It’s recommended that, for blueprints where SRM protection is a requirement, you disable the ‘Re-provision’ machine operation.


Gary Blake is a VMware Staff Solutions Architect & CTO Ambassador