
Category Archives: vCenter Server

Significant Performance Improvements Come to the vSphere Web Client 5.5 Update 3

Over the past few years we’ve seen steady improvement in the vSphere Web Client. VMware has been listening to the feedback coming in from our field, partners, and customers, and the feedback is that the vSphere Web Client in vCenter Server 6.0 and 6.0 Update 1 has been a really great step forward in terms of User Experience (UX). With that in mind, many of the improvements of the 6.0 vSphere Web Client have been “backported” to the vSphere Web Client in vCenter Server 5.5 Update 3. The primary scope of the backported functionality was to greatly improve performance while maintaining the consistency of the 5.5 User Interface (UI). So, while vSphere Web Client performance has drastically improved with 5.5 U3, the UI elements have stayed the same, which makes it easier for administrators to continue using the 5.5 Web Client.

Throughout this blog post I’ll highlight some of the enhancements that have been brought to the vSphere Web Client in 5.5 Update 3. This is especially important as we see customers continue to leverage the legacy vSphere Client (also referred to as the legacy C# client). Our goal is to make the Web Client everyone’s primary management tool for vCenter Server & vSphere, and continuing to improve performance is an essential requirement in doing that.

Continue reading

Virtualizing Big Data at VMware IT – Starting Out at Small Scale

The Hadoop-based system running on vSphere that is described here was architected by Rajit Saha (who provided the material for this blog) and a team from VMware’s IT department.

This article describes the technical infrastructure for a VMware internal IT project that was built and deployed in 2015 for analyzing VMware’s own business data. Details of the business applications used in the system are not within the scope of this article. The virtualized Hadoop environment and modern analytics project was implemented entirely on the vSphere 6 platform.

The key lesson that we learned from this implementation is that you can start at a small scale with virtualizing big data/Hadoop and then scale the system up over time. You don’t need to wait for a large amount of hardware to become available to get started.

Continue reading

Configuring NSX-v 6.2 as a Load Balancer for the vSphere Platform Services Controller

VMware released NSX-v (NSX for vSphere) 6.2 back on August 20, 2015. With its release, the NSX team introduced support for using NSX-v as a load balancer for the vSphere Platform Services Controller (PSC) in highly available deployments (Release Notes). This is a key new feature that enables customers to further leverage existing NSX-v deployments to simplify their vSphere infrastructure while providing additional HA capabilities for the PSC. This can be a fairly straightforward undertaking when there is an existing vCenter being used for management (e.g. a management cluster).

There is a second scenario, however, that requires some consideration. What if you’re deploying a new vSphere and NSX-v environment where a management vCenter does not already exist? Romain Decker, a Solution Architect in VMware’s Software-Defined Datacenter (SDDC) Professional Services Engineering team, has put together a great post on the VMware Consulting Blog that walks through that exact scenario and provides step-by-step instructions on how to work around this chicken-and-egg scenario using the ability to easily repoint a vCenter Server to an alternate PSC in vSphere 6.0 Update 1.

To learn more about configuring NSX-v as a load balancer for the vSphere Platform Services Controller, read Romain’s full blog post at:

Configuring NSX-v Load Balancer for use with vSphere Platform Services Controller (PSC) 6.0

What is vCenter Server Watchdog?

If you’ve done any research into the high-availability options available for vCenter Server 6.0, hopefully you have had a chance to read the VMware vCenter Server 6.0 Availability Guide, written in collaboration with Technical Marketing and Global Support Services, as well as KB 1024051. You might have noticed particular sections that refer to the vCenter Server Watchdog. But what exactly is the vCenter Server Watchdog?

Enabled “out of the box” in 6.0, the vCenter Server Watchdog provides better availability by periodically verifying the status of vCenter Server. It does this in two ways:

  1. The PID Watchdog monitors the processes running on vCenter Server.
  2. The API Watchdog uses the vSphere API to monitor the functionality of vCenter Server.

If any services fail, the Watchdog attempts to restart them. If it cannot restart the service because of a host failure, vSphere HA restarts the virtual machine running the service on a new host.
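
To make the API Watchdog concept concrete, here is a minimal sketch of that style of liveness probe written with pyVmomi. This is only an illustration of the idea, not VMware’s actual Watchdog implementation, and the hostname and credentials are placeholders:

```python
# Illustrative liveness probe in the spirit of the API Watchdog.
# NOT VMware's implementation; host/user/pwd below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

def vcenter_api_alive(host, user, pwd):
    """Return True if vCenter answers a trivial vSphere API call."""
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    try:
        si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
        si.CurrentTime()  # cheap round trip through the whole API stack
        Disconnect(si)
        return True
    except Exception:
        return False

if __name__ == "__main__":
    print(vcenter_api_alive("vcenter.example.local",
                            "administrator@vsphere.local", "changeme"))
```

A real watchdog would run a check like this on an interval and trigger a service restart after repeated failures, which is essentially what the API Watchdog automates for you.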

That sounds slick, right? Well, let’s dive in and take a look at each of these watchdogs in detail. Continue reading

Architecting Virtual SAP HANA Using VMware Virtual Volumes And Hitachi Storage

VMworld Recap: SAP HANA and VMware Virtual Volumes

This is a follow-up to my earlier VMworld blog, “Virtualizing SAP HANA Databases Greater Than 1TB on vSphere 5.5”, where I discussed SAP Multi-Temperature Data Management strategies and techniques that can significantly reduce the size and cost associated with SAP HANA’s in-memory footprint. This blog will focus on Software-Defined Storage and the need for VMware Virtual Volumes when deploying mission-critical applications/databases like SAP HANA, as discussed in my VMworld session.

Multi-Temperature Data Management Is By Definition Software-Defined Storage

SAP and VMware customers who plan on leveraging multi-temperature strategies, where data is classified by frequency of access as hot, warm, or cold, are practicing the essence of Software-Defined Storage. This can also be equated to EMC’s Information Lifecycle Management, which examines the value of data to the business over time. To bring the concept of the Software-Defined Data Center, and more precisely Software-Defined Storage, to reality, see Table 1. This table depicts the various storage options for SAP HANA so customers can create an architecture that aligns with the business and its applications’ demands.

Table 1: Multi-Temperature Storage Options with SAP HANA


Planning Your Journey To Software-Defined Storage

As we get into the various storage options for SAP HANA, VMware has made it very easy to create and deploy software-defined storage in the form of Virtual Volumes. However, I want to stress that defining how the storage should be abstracted is a collaborative task: at a minimum you must involve the storage team, VI admins, application owners, and DBAs in order to create an optimized virtual architecture. This should not be a siloed task.

In my previous post I discussed the storage requirements for SAP HANA In-Memory, Dynamic Tiering, Near-Line Storage, and the archiving components; one last option I did not cover in Table 1 is Data Aging, which is specific to SAP Business Suite. Under normal operations SAP HANA does not preload data into memory; data is loaded upon first access, so the first time you access data it is always read from disk.

With Data Aging you can essentially mark data so it is never loaded into memory and always resides on disk. This is not available on all modules for Business Suite, so please check with SAP for availability and roadmap with respect to Data Aging.

Essentially this is another SAP HANA feature that enables customers to reduce and manage their memory footprint more efficiently and effectively. The use of Data Aging can change the design requirements of your Software-Defined Storage: if Data Aging becomes more prevalent in your SAP landscape, VMware Virtual Volumes can be used to address the changing storage requirements of the application by seamlessly migrating data between different classes of software-defined storage or VMDKs.

VMware Virtual Volumes Transform Storage By Aligning With SAP HANA’s Requirements

Now let’s get into Virtual Volumes and the problems they solve. With Virtual Volumes, the fundamental model is centered around provisioning storage based on the application’s needs rather than on the underlying infrastructure. When deploying SAP HANA using the Tailored Data Center Integration model, the storage KPIs can be quite complex: how do customers translate latency and throughput for reads, writes, and updates, at various block sizes, to the storage layer?

And how does a customer address the storage requirements for SAP HANA’s entire data life cycle, whether or not they plan on using Dynamic Tiering, with or without Near-Line Storage, and what are the storage requirements of the archiving strategy as well? Some of the storage requirements also tie back to the compute layer; as an example, with Dynamic Tiering, if you plan on using Row Level Versioning there is a compute-to-memory relationship for storage that comes into play when sizing.

Addressing and achieving these design goals using an infrastructure-centric model can be quite difficult because you are tied to physical LUNs, and trust me, with mission-critical databases you will always have database administrators fighting over the LUNs with the lowest numbers because of concerns around radial density. This leads to tremendous waste when provisioning storage using an infrastructure-centric model.

VMware Virtual Volumes significantly reduce the storage design complexity by using an application-centric model. Because you are not dealing with storage at the LUN level, vSphere admins instead use policies to express the application requirements to the storage array, and the storage array then maps storage containers to those application requirements.

What are VMware Virtual Volumes?

At a high level I’ll go over the architecture and components of Virtual Volumes. This blog is not intended to be a deep dive into Virtual Volumes; instead, my goal is to convey that mission-critical use cases for VVols and software-defined storage are real. For an excellent white paper on Virtual Volumes see “VMware vSphere Virtual Volumes Getting Started Guide”.

As shown in Figure 1, Virtual Volumes are a new type of virtual machine object, created and stored natively on the storage array. The Vendor Provider, also known as the VASA Provider, implements the vSphere APIs for Storage Awareness (VASA); it provides the storage awareness services and mediates out-of-the-box communication between vCenter Server and ESXi hosts on one side and the storage system on the other.

Storage containers are pools of raw storage that a storage system can provide to virtual volumes; unlike LUNs and NFS, they do not require pre-configured volumes on the storage side. And with virtual volumes you still have the functionality you would expect when using native VMDKs.

A Virtual Datastore represents a storage container in a vCenter Server instance, so it is a 1:1 mapping to the storage system’s storage container. The ESXi hosts have no direct access to the virtual volumes on the storage side, so they use a logical I/O proxy called a protocol endpoint, and, as you would expect, VVols are compatible with industry-standard protocols: iSCSI, NFS, FC, and FCoE.

The published storage capabilities will vary by storage vendor depending on which capabilities have been exposed and implemented. In this blog we will be looking at the capabilities exposed by Hitachi Data Systems, such as latency, throughput, RAID level, drive type/speed, IOPS, and snapshot frequency, to mention a few.
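
Putting the pieces above together, here is a minimal pyVmomi sketch that lists the datastores in an inventory backed by VVol storage containers (the 1:1 mapping described above). The hostname and credentials are placeholders, and this is our illustration, not anything prescribed by the VASA specification:

```python
# List VVol-backed datastores (1:1 with storage containers) via pyVmomi.
# Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # summary.type is "VVOL" when the datastore fronts a storage container
        if ds.summary.type == "VVOL":
            print(ds.name, ds.summary.capacity)
    view.DestroyView()
finally:
    Disconnect(si)
```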

Figure 1: vSphere Virtual Volumes Architecture and Components


VMware HDS: Creating Storage Containers, Virtual Volumes, and Profiles for Virtual SAP HANA

Now, Virtual Volumes are an industry-wide initiative; essentially a who’s who of the storage industry is participating. However, this next section will be representative of the work done with Hitachi Data Systems.

And again, the guidance here is collaboration when architecting software-defined storage for SAP HANA landscapes, and for that matter any mission-critical application or database, because the beauty of software-defined storage is that once it is created and architected correctly you can provision your virtual machines in an automated and consistent manner.

So in the spirit of collaboration, I got together with Hitachi’s SAP alliance team, their storage team, and database architects and we came up with these profiles, policies, and containers to use when deploying SAP HANA landscapes.

We had several goals when designing this architecture. One was to use virtual volumes to address the entire data life cycle of SAP HANA: the in-memory component, Dynamic Tiering, Near-Line Storage, and archiving, or any supported combination of the above when creating a SAP HANA landscape. Second, we wanted to enable rapid provisioning of SAP HANA landscapes, so we created profiles, policies, and containers that could be used to deploy SAP HANA databases whose in-memory component could range from 512GB to 1TB in size.

I’ll review some of the capabilities HDS exposed which were used for this architecture:

  • Interestingly enough, we were able to meet the SAP HANA in-memory KPIs using Hitachi Tier 2 storage, which consisted of 10K SAS drives for both log and data files as well as for the operating system and the SAP HANA shared file system. This also simplified the design. We then used high-density SAS drives for the backup areas.
  • We enabled automatic storage-managed snapshots for the HANA data, log, and OS, and set the snapshot frequency based on classifications of Critical, Important, or Best Effort.
  • Snapshots for the data and log were classified as Critical, the OS was classified as Important, and the backup area was not snapshotted at all (the classifications are summarized in the sketch after this list).
  • We also tagged this storage as certified, capturing the model and serial number, since the SAP HANA in-memory component requires certified storage. We wanted to make sure that when creating HANA VMs you are always pulling from certified storage containers.
  • The Dynamic Tiering and NLS storage had similar requirements, so they could be provisioned from the same containers. Since these are disk-based columnar databases, we selected Tier 1 SSD storage for the data files based on their random read/write patterns.
  • We stuck with SAS drives for the log files, since sequential workloads don’t benefit much from SSDs; because of the disk-based access we selected Tier 2 to satisfy the IOPS and latency requirements.
  • Finally, for the archiving containers we used the lowest-cost, highest-density storage, pretty much just a file system.
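
As mentioned in the list above, it helps to capture these decisions as plain data while the design is being debated, before they become profiles and policies on the array. The sketch below is just our design notes in Python form; the names and values mirror the bullets above and are not output from Hitachi Command Suite:

```python
# Design notes for the SAP HANA storage architecture described above.
# Tier names, drive types, and snapshot classes mirror the bullets;
# they are illustrative, not an HDS product artifact.
HANA_STORAGE_DESIGN = {
    "in_memory": {                       # requires certified storage
        "data_and_log": {"tier": "Tier 2", "drives": "10K SAS",
                         "snapshot": "Critical"},
        "os_and_hana_shared": {"tier": "Tier 2", "drives": "10K SAS",
                               "snapshot": "Important"},
        "backup": {"drives": "high-density SAS", "snapshot": None},
        "certified": True,
    },
    "dynamic_tiering_and_nls": {
        "data": {"tier": "Tier 1", "drives": "SSD"},  # random read/write
        "log": {"tier": "Tier 2", "drives": "SAS"},   # sequential workload
    },
    "archiving": {"storage": "lowest-cost, highest-density file system"},
}
```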

Now, there’s just too much information to cover in this post, but for those of you who are interested, VMware and Hitachi will be publishing a co-logo white paper that takes a much deeper dive into how we architected these landscapes so customers can do this almost out of the box.

Deploying VMware Software-Defined Storage With vSphere and Hitachi Command Suite

Example: SAP HANA Dynamic Tiering and Near-Line Storage tiers. The next couple of screen captures show how simple virtual volumes are to deploy once architected correctly.

Figure 2: Storage Container Creation: SAP HANA DT and NLS Tier


Figure 3: Create Virtual Machine Storage Policies SAP HANA DT/NLS Data/Log File


Figure 4: Create New SAP HANA DT VM Using VVOLS Policies With Hitachi Storage


Addressing Mission Critical Use Cases with VMware Software-Defined Storage

SAP HANA and Multi-Temperature Data Management is the poster child for mission critical software-defined storage use cases. VMware Virtual Volumes solves the complexities and simplifies storage provisioning by using an application centric model rather than an infrastructure centric model.

The SAP HANA in-memory component is not yet certified for production use on vSphere 6.0; however, Virtual Volumes can be used for SAP HANA Dynamic Tiering, Near-Line Storage, and Archiving. So my advice to our customers is to start architecting now: get together with your storage admins, VI admins, application owners, and database administrators to create containers, policies, and profiles correctly, so that when vSphere 6.0 is certified you are ready to “Run SAP HANA Simple”.



Reconfiguring and Repointing Deployment Models in vCenter Server 6.0 Update 1

In my last blog post, we discussed some of the new features and capabilities found in vCenter Server 6.0 Update 1, such as how you can quickly and easily update the vCenter Server Appliance 6.0 to Update 1.

Now, it’s time to focus our attention on two key enhancements found in vCenter Server 6.0 Update 1 – available in both the appliance and Windows-based form factors:

  • Reconfigure – You can now reconfigure an embedded deployment node to an external deployment model, also known as MxN.
  • Repoint – Simplified repointing of a management node in an external deployment model from one external Platform Services Controller to another.

Why is this important?

The reconfiguration enhancement enables you to take an existing embedded deployment and transition it to a more optimal external deployment model – MxN. There is also the simplified ability to repoint a management node to another Platform Services Controller, which enables you to quickly recover from an external Platform Services Controller failure and to distribute load to alternate nodes in the same SSO domain.

Before moving forward with either the reconfigure or repoint operations, there is a key set of requirements that you need to meet.

Reconfiguration Requirements

  • The vCenter Server instance must be an embedded deployment model.
  • The target Platform Services Controller must be a replication partner of the existing embedded Platform Services Controller in the same SSO Domain.

Note: In vCenter Server 6.0 Update 1, we only support a single transition from an embedded deployment to an external deployment (MxN) model per SSO domain. See the Known Issues section of the Release Notes for additional details.

Repointing Requirements

  • The vCenter Server instance must be an external deployment model.
  • The target Platform Services Controller must be a replication partner of the existing external Platform Services Controller in the same SSO Domain.

We’ve introduced an update to cmsso-util in vCenter Server 6.0 Update 1. This utility can be found in:

  • VCSA: /bin/cmsso-util
  • Windows: <Drive>:\Program Files\VMware\vCenter Server\bin\cmsso-util

This utility automates the entire process; you pass it the new namespace (either reconfigure or repoint) and its arguments. For example, with the VCSA, the namespaces would be:

  • VCSA: /bin/cmsso-util reconfigure
  • VCSA: /bin/cmsso-util repoint

Okay, so, how do we do it? Well, let’s see both namespace options in action in the vCenter Server Appliance (VCSA). Note that the cmsso-util namespaces and arguments are the same for a vCenter Server 6.0 Update 1 instance installed on Windows.
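
As a sketch of how this might be scripted, the snippet below wraps both namespaces with Python’s subprocess module. The flag names follow VMware’s documentation for 6.0 Update 1, and the hostnames, SSO domain, and credentials are placeholders to verify against your own environment:

```python
# Hedged sketch of driving cmsso-util on the VCSA from Python.
# Flags follow VMware's 6.0 U1 documentation; values are placeholders.
import subprocess

CMSSO_UTIL = "/bin/cmsso-util"  # see above for the Windows path

def repoint(psc_fqdn):
    """Repoint an external-deployment vCenter node to another PSC."""
    subprocess.run([CMSSO_UTIL, "repoint", "--repoint-psc", psc_fqdn],
                   check=True)

def reconfigure(psc_fqdn, sso_password):
    """Transition an embedded node to the external (MxN) deployment model."""
    subprocess.run([CMSSO_UTIL, "reconfigure",
                    "--repoint-psc", psc_fqdn,
                    "--username", "administrator",
                    "--domain-name", "vsphere.local",
                    "--passwd", sso_password],
                   check=True)

if __name__ == "__main__":
    # Target must be a replication partner in the same SSO domain.
    repoint("psc02.example.local")
```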

Continue reading

Updating vCenter Server Appliance 6.0 to Update 1

Earlier this month, we released vSphere 6.0 Update 1. In this update we introduced some awesome new features for vCenter Server. Let’s take a look at some of these just below:

  • Installation and Upgrade using HTML 5 Installer for VCSA: The following installation and upgrade scenarios are now supported for vCenter Server Appliance using its HTML 5 installer:
    1. An installation using HTML 5 installer with a vCenter Server target is supported.
    2. An upgrade using HTML 5 installer with a vCenter Server target is not supported.
    3. An upgrade using command line with a vCenter Server target is supported.
  • Backup and Restore with External Platform Services Controller: vCenter Server deployments with an external PSC (also called MxN) have support for backup and restoration.
  • Appliance Management User Interface: An all new HTML5-based management interface for the appliance at https://<FQDN-or-IP>:5480. 
  • Platform Services Controller Interface: An all new HTML5-based management interface for the Platform Services Controller at https://<FQDN-or-IP>/psc/.  See my earlier blog on the Platform Services Controller Interface.
  • Interoperability: Virtual SAN and SMP-FT are interoperable.
  • Hybrid Cloud Manager: Hybrid Cloud Manager has been updated for vSphere, and can be accessed directly from the vSphere Web Client.
  • VCSA Authentication for Active Directory: VMware vCenter Server Virtual Appliance has been modified to only support AES256-CTS/AES128-CTS/RC4-HMAC encryption for Kerberos authentication between VCSA and Active Directory.
  • Support for SSLv3: Support for SSLv3 has been disabled by default.
  • Customer Experience Improvement Program: The opt-in Customer Experience Improvement Program (CEIP) provides VMware with information that enables VMware to improve its products and services and to fix problems. When you choose to participate in CEIP, VMware will collect technical information about your use of VMware products and services in CEIP reports on a regular basis. This information does not personally identify you.

One additional feature that we introduced in vCenter Server 6.0 Update 1 is an in-place update process within a major release (e.g. vCenter Server 6.0 to vCenter Server 6.0 Update 1) instead of the migration-based approach that was required in prior VCSA updates (e.g. vCenter Server 5.5 to vCenter Server 5.5 Update 1).

With these new capabilities — and, of course, resolved issues — there’s been a ton of interest in how to update the VCSA to 6.0 Update 1. So, let’s get started and look at the process…

Continue reading

VMware Tools Lifecycle: Why Tools Can Drive You Crazy (and How to Avoid it!)

There has been a lot of buzz around vSphere Lifecycle since VMworld. My last few blog posts on VMware Tools have had a tremendous amount of traffic, so I decided to continue with the theme and give you all what it appears you want more of. So in this post, LET’S TALK TOOLS!

Continue reading

Introducing the Platform Services Controller Interface in vCenter Server 6.0 Update 1

Back in March, we introduced vSphere 6.0 and the new architecture for vCenter Server. With this new architecture you learned about the Platform Services Controller, a new functional component of vCenter that moves beyond just Single Sign-On to include additional platform services such as:

  • Licensing Service
  • Certificate Authority (VMCA)
  • Certificate Store (VECS)
  • Lookup Service for Component Registrations

In the 6.0 release, administration and configuration of the Platform Services Controller was primarily performed through an SSH session, through the vSphere Web Client (by selecting the node in System Configuration), or through the Direct Console User Interface of the appliance.

In vCenter Server 6.0 Update 1, we’re excited to introduce the next stage of administration with the Platform Services Controller Interface, a fully HTML5-based interface for administering and configuring many of the services that run on the PSC.

Using the Platform Services Controller Interface you can perform tasks, such as:

  • Adding and Editing Users and Groups for Single Sign-On
  • Adding Single Sign-On Identity Sources
  • Configuring Single Sign-On Policies (e.g. Password Policies)
  • Adding Certificate Stores
  • Adding and Revoking Certificates

Here is a quick overview of the Platform Services Controller User Interface available in vCenter Server 6.0 Update 1.

Continue reading

Big Data on vSphere with HBase

This article describes a set of performance tests that were conducted on HBase, a popular data management tool that is frequently used with Hadoop, running on VMware vSphere 6 and provisioned by the vSphere Big Data Extensions tool. The work described here was done by Xinhui Li, who is a staff engineer in the Big Data team in VMware’s R&D Labs in Beijing. Xinhui’s biography and background details are given at the end of the article.

What is HBase?

HBase is an Apache project that is designed to handle very large amounts of data on the Hadoop platform. HBase is often described as providing the functionality of a NoSQL database running on top of Hadoop. It combines the scalability of Hadoop, through its use of the Hadoop Distributed File System (HDFS) to store the data, with real-time access to that data. HBase can handle billions of rows of data and very large numbers of columns. Along with Hadoop, HBase runs on clusters of commodity hardware that form a distributed system. The HBase architecture is made up of RegionServers that run on the worker nodes, while the HBase Master Server controls them.
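
For a quick feel for the programming model HBase exposes on top of HDFS, here is a minimal client sketch using the third-party happybase library, which talks to HBase through its Thrift gateway. The host, table name, and column family are hypothetical, and the table is assumed to already exist (for example, created from the HBase shell):

```python
# Minimal HBase read/write via the Thrift gateway using happybase.
# Host, table, and column family are placeholders; the table must exist.
import happybase

connection = happybase.Connection("hbase-thrift.example.local")
table = connection.table("metrics")

# Cells are addressed by row key plus column-family:qualifier.
table.put(b"row-0001", {b"cf:value": b"42"})
print(table.row(b"row-0001")[b"cf:value"])  # -> b'42'

connection.close()
```

Each row key here is served by whichever RegionServer owns that key range, which is how HBase spreads reads and writes across the cluster.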

Continue reading