
Category Archives: vCloud Suite

Architecting Virtual SAP HANA Using VMware Virtual Volumes And Hitachi Storage

VMworld Recap: SAP HANA and VMware Virtual Volumes

This is a follow-up to my earlier VMworld blog, “Virtualizing SAP HANA Databases Greater than 1TB on vSphere 5.5,” where I discussed SAP multi-temperature data management strategies and techniques that can significantly reduce the size and cost associated with SAP HANA’s in-memory footprint. This blog focuses on Software-Defined Storage and the need for VMware Virtual Volumes when deploying mission-critical applications and databases like SAP HANA, as discussed in my VMworld session.

Multi-Temperature Data Management Is By Definition Software-Defined Storage

SAP and VMware customers who plan on leveraging multi-temperature strategies, where data is classified by frequency of access as hot, warm, or cold, are practicing the essence of Software-Defined Storage. This can also be equated to EMC’s Information Lifecycle Management, which examines the value of data to the business over time. To bring the concept of the Software-Defined Data Center, and more precisely Software-Defined Storage, to reality, see Table 1. The table depicts the various storage options for SAP HANA so customers can create an architecture that aligns with the business and its applications’ demands.

Table 1: Multi-Temperature Storage Options with SAP HANA


Planning Your Journey To Software-Defined Storage

As we get into the various storage options for SAP HANA, VMware has made it very easy to create and deploy software-defined storage in the form of Virtual Volumes. However, I want to stress that defining how the storage should be abstracted is a collaborative task: at a minimum you must involve the storage team, VI admins, application owners, and DBAs in order to create an optimized virtual architecture. This should not be a siloed effort.

In my previous post I discussed the storage requirements for the SAP HANA in-memory, Dynamic Tiering, Near-Line Storage, and archiving components. One option I did not cover in Table 1 is Data Aging, which is specific to SAP Business Suite. Under normal operations SAP HANA does not preload data into memory; data is loaded upon first access, so the first time you access data it always comes off disk.

With Data Aging you can essentially mark data so it is never loaded into memory and always resides on disk. This is not available for all Business Suite modules, so please check with SAP for availability and roadmap with respect to Data Aging.

Essentially this is another SAP HANA feature that enables customers to reduce and manage their memory footprint more efficiently and effectively. The use of Data Aging can change the design requirements of your Software-Defined Storage: if Data Aging becomes more prevalent in your SAP landscape, VMware Virtual Volumes can address the changing storage requirements of the application by seamlessly migrating data between different classes of software-defined storage or VMDKs.

VMware Virtual Volumes Transform Storage By Aligning With SAP HANA’s Requirements

Now let’s get into Virtual Volumes and the problems they solve. With Virtual Volumes, the fundamental model is centered on provisioning storage based on the application’s needs rather than on the underlying infrastructure. When deploying SAP HANA using the Tailored Data Center Integration model, the storage KPIs can be quite complex, so how do customers translate latency and throughput for reads, writes, and updates at various block sizes to the storage layer?

How does a customer address the storage requirements for SAP HANA’s entire data life cycle, whether or not they plan on using Dynamic Tiering, with or without Near-Line Storage, and what are the storage requirements of the archiving strategy as well? Some of the storage requirements also tie back to the compute layer; as an example, with Dynamic Tiering, if you plan on using Row-Level Versioning there is a compute-to-memory relationship for storage that comes into play when sizing.

Addressing and achieving these design goals using an infrastructure-centric model can be quite difficult because you are tied to physical LUNs, and trust me, with mission-critical databases you will always have database administrators fighting over the LUNs with the lowest numbers because of concerns around radial density. This leads to tremendous waste when provisioning storage using an infrastructure-centric model.

VMware Virtual Volumes significantly reduce storage design complexity by using an application-centric model. Because you are not dealing with storage at the LUN level, vSphere admins use policies to express the application requirements to the storage array, and the storage array then maps storage containers to those requirements.

What are VMware Virtual Volumes?

At a high level I’ll go over the architecture and components of Virtual Volumes. This blog is not intended to be a deep dive into Virtual Volumes; instead, my goal is to convey that mission-critical use cases for VVols and software-defined storage are real. For an excellent white paper on Virtual Volumes, see “VMware vSphere Virtual Volumes Getting Started Guide.”

As shown in Figure 1, Virtual Volumes are a new type of virtual machine object that is created and stored natively on the storage array. The Vendor Provider, also known as the VASA Provider, implements the vSphere Storage APIs for Storage Awareness (VASA); it provides the storage awareness services and mediates out-of-the-box communication between vCenter Server and ESXi hosts on one side and the storage system on the other.

Storage containers are pools of raw storage that a storage system can provide to virtual volumes, and unlike LUNs and NFS they do not require pre-configured volumes on the storage side. With virtual volumes you still have the functionality you would expect when using native VMDKs.

A virtual datastore represents a storage container in a vCenter Server instance, so it is a 1:1 mapping to the storage system’s storage container. ESXi hosts have no direct access to the virtual volumes on the storage side, so they use a logical I/O proxy called a protocol endpoint, and as you would expect, VVols are compatible with industry-standard protocols: iSCSI, NFS, FC, and FCoE.

The published storage capabilities will vary by storage vendor depending on which capabilities have been exposed and implemented. In this blog we will be looking at the capabilities exposed by Hitachi Data Systems, such as latency, throughput, RAID level, drive type/speed, IOPS, and snapshot frequency, to mention a few.
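For illustration, here is a minimal Python (pyVmomi) sketch of how a vSphere admin could list the requirement-style VM storage policies built from such published capabilities via the Storage Policy Based Management (SPBM) endpoint. The vCenter hostname, credentials, and the relaxed certificate handling are placeholder lab assumptions, not a production recipe, and the exact capability names you will see are specific to the array and its VASA provider.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

VC_HOST = "vc.example.com"                      # placeholder vCenter FQDN
VC_USER = "administrator@vsphere.local"         # placeholder credentials
VC_PASS = "changeme"

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host=VC_HOST, user=VC_USER, pwd=VC_PASS, sslContext=ctx)

# Hand the existing vCenter session over to the PBM (storage policy) endpoint.
VmomiSupport.GetRequestContext()['vcSessionCookie'] = si._stub.cookie.split('"')[1]
pbm_stub = SoapStubAdapter(host=VC_HOST, path="/pbm/sdk",
                           version="pbm.version.version1",
                           poolSize=0, sslContext=ctx)
profile_mgr = pbm.ServiceInstance("ServiceInstance",
                                  pbm_stub).RetrieveContent().profileManager

# "Requirement" profiles are the policies an admin composes from the
# capabilities (latency, IOPS, RAID level, snapshot frequency, ...) that
# the array's VASA provider publishes to vCenter.
res_type = pbm.profile.ResourceType(resourceType="STORAGE")
profile_ids = profile_mgr.PbmQueryProfile(resourceType=res_type,
                                          profileCategory="REQUIREMENT")
for profile in profile_mgr.PbmRetrieveContent(profileIds=profile_ids):
    print(profile.name, "-", profile.description)

Disconnect(si)
```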

Figure 1: vSphere Virtual Volumes Architecture and Components


VMware HDS: Creating Storage Containers, Virtual Volumes, and Profiles for Virtual SAP HANA

Virtual Volumes are an industry-wide initiative; essentially a who’s who of the storage industry is participating. This next section, however, is representative of the work done with Hitachi Data Systems.

Again, the guidance here is collaboration when architecting software-defined storage for SAP HANA landscapes, and for that matter any mission-critical application or database. The beauty of software-defined storage is that once it is created and architected correctly, you can provision your virtual machines in an automated and consistent manner.

So in the spirit of collaboration, I got together with Hitachi’s SAP alliance team, their storage team, and database architects and we came up with these profiles, policies, and containers to use when deploying SAP HANA landscapes.

We had several goals when designing this architecture. One was to use virtual volumes to address the entire data life cycle of SAP HANA: the in-memory component, Dynamic Tiering, Near-Line Storage, and archiving, or any supported combination of the above when creating a SAP HANA landscape. Second, we wanted to enable rapid provisioning of SAP HANA landscapes, so we created profiles, policies, and containers that could be used to deploy SAP HANA databases whose in-memory component ranges from 512GB to 1TB in size.

I’ll review some of the capabilities HDS exposed which were used for this architecture:

  • Interestingly enough, we were able to meet the SAP HANA in-memory KPIs using Hitachi Tier 2 storage, which consisted of 10K SAS drives for both the log and data files as well as for the operating system and the SAP HANA shared file system. This also simplified the design. We then used high-density SAS drives for the backup areas.
  • We enabled automatic storage-managed snapshots for the HANA data, log, and OS volumes, and set the snapshot frequency based on classifications of Critical, Important, or Best Effort.
  • Snapshots for the data and log were classified as Critical, the OS was classified as Important, and the backup area was not snapshotted at all.
  • We also tagged this storage as certified, capturing the model and serial number, since the SAP HANA in-memory component requires certified storage. We wanted to make sure that when creating HANA VMs you are always pulling from certified storage containers.
  • The Dynamic Tiering and NLS storage had similar requirements, so they could be provisioned from the same containers; since these are disk-based columnar databases, we selected Tier 1 SSD storage for the data files based on their random read/write patterns.
  • We stuck with SAS drives for the log files, since sequential workloads don’t benefit much from SSDs; because of the disk-based access we selected Tier 2 to satisfy the IOPS and latency requirements.
  • Finally, for the archiving containers we used the lowest-cost, highest-density storage, essentially just a file system.

There is far too much information from this effort with HDS to cover here, but for those of you who are interested, VMware and Hitachi will be publishing a co-logo white paper that takes a much deeper dive into how we architected these landscapes so customers can do this almost out of the box.

Deploying VMware Software-Defined Storage With vSphere and Hitachi Command Suite

Example: SAP HANA Dynamic Tiering and Near-Line Storage tiers. The next few screen captures show how simple virtual volumes are to deploy once architected correctly.

Figure 2: Storage Container Creation: SAP HANA DT and NLS Tier


Figure 3: Create Virtual Machine Storage Policies SAP HANA DT/NLS Data/Log File


Figure 4: Create New SAP HANA DT VM Using VVOLS Policies With Hitachi Storage


Addressing Mission Critical Use Cases with VMware Software-Defined Storage

SAP HANA with multi-temperature data management is the poster child for mission-critical software-defined storage use cases. VMware Virtual Volumes solve the complexity and simplify storage provisioning by using an application-centric model rather than an infrastructure-centric model.

The SAP HANA in-memory component is not yet certified for production use on vSphere 6.0; however, Virtual Volumes can be used for SAP HANA Dynamic Tiering, Near-Line Storage, and archiving. So my advice to our customers is to start architecting now: get together with your storage admins, VI admins, application owners, and database administrators to create containers, policies, and profiles correctly, so that when vSphere 6.0 is certified you are ready to “Run SAP HANA Simple.”



Updating vCenter Server Appliance 6.0 to Update 1

Earlier this month, we released vSphere 6.0 Update 1. In this update we introduced some awesome new features for vCenter Server. Let’s take a look at some of them:

  • Installation and Upgrade using the HTML5 Installer for the VCSA: The vCenter Server Appliance HTML5 installer now covers the following scenarios:
    1. An installation using the HTML5 installer with a vCenter Server target is supported.
    2. An upgrade using the HTML5 installer with a vCenter Server target is not supported.
    3. An upgrade using the command line with a vCenter Server target is supported.
  • Backup and Restore with External Platform Services Controller: vCenter Server deployments with an external PSC (also called MxN) have support for backup and restoration.
  • Appliance Management User Interface: An all new HTML5-based management interface for the appliance at https://<FQDN-or-IP>:5480. 
  • Platform Services Controller Interface: An all new HTML5-based management interface for the Platform Services Controller at https://<FQDN-or-IP>/psc/.  See my earlier blog on the Platform Services Controller Interface.
  • Interoperability: Virtual SAN and SMP-FT are interoperable.
  • Hybrid Cloud Manager: Hybrid Cloud Manager has been updated for vSphere, and can be accessed directly from the vSphere Web Client.
  • VCSA Authentication for Active Directory: VMware vCenter Server Virtual Appliance has been modified to only support AES256-CTS/AES128-CTS/RC4-HMAC encryption for Kerberos authentication between VCSA and Active Directory.
  • SSLv3: Support for SSLv3 has been disabled by default.
  • Customer Experience Improvement Program: The opt-in Customer Experience Improvement Program (CEIP) provides VMware with information that enables VMware to improve its products and services and to fix problems. When you choose to participate in CEIP, VMware collects technical information about your use of VMware products and services in CEIP reports on a regular basis. This information does not personally identify you.

One additional feature that we introduced in vCenter Server 6.0 Update 1 is an in-place update process within a major release (e.g. vCenter Server 6.0 to vCenter Server 6.0 Update 1), instead of the migration-based approach that was required for prior VCSA updates (e.g. vCenter Server 5.5 to vCenter Server 5.5 Update 1).

With these new capabilities — and, of course, resolved issues — there’s been a ton of interest in how to update the VCSA to 6.0 Update 1. So, let’s get started and look at the process…

Continue reading

Virtualizing SAP HANA Databases Greater than 1TB on vSphere 5.5

VMworld 2015 Session Recap

I’m almost fully recovered from VMworld, which was probably one of the busiest and most enjoyable VMworlds I’ve had in my six-plus years at VMware, thanks to the interaction with attendees, customers, and partners. I’ll be doing a series of post-VMworld blogs focused on my SAP HANA Software-Defined Data Center sessions, but this first blog covers the misconceptions associated with sizing SAP HANA databases on vSphere. There are many good reasons to upgrade to vSphere 6.0; going beyond the 1TB monster virtual machine limit of vSphere 5.5 when deploying SAP HANA databases is not necessarily one of them.

SAP HANA is no longer just an in-memory database; it is now a data management platform. It is NOT confined by the size of available memory, since SAP HANA warm data can be stored on disk in a columnar format and accessed transparently by applications.

What this means is that the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. Multi-terabyte SAP HANA databases can easily be virtualized with vSphere 5.5 using Dynamic Tiering, Near-Line Storage, and the other memory management techniques SAP has introduced to the SAP HANA platform to optimize and reduce HANA’s in-memory footprint.

SAP HANA Dynamic Tiering (DT)

SAP HANA Dynamic Tiering was introduced last year in Support Pack Stack (SPS) 09 for use with BW. Dynamic Tiering allows customers to seamlessly manage their disk-based SAP HANA “warm data” on an Extended Storage Host, essentially placing data that does not need to be in memory on disk. The guidance SAP gives for the SAP HANA Dynamic Tiering option is that with SPS 09 up to 20% of the in-memory data can reside on the Extended Storage (ES) Host, with SPS 10 up to 40% can reside on the ES Host, and in the future up to 70% of the SAP HANA data will be able to reside on the ES Host. So, in the future, the majority of SAP HANA data that was once in memory can reside on disk.
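To put those percentages in perspective, here is a tiny back-of-the-envelope Python sketch. It reads the percentages as the share of the total HANA data set that may live on the ES host, and the 2TB total is purely an illustrative number, not an SAP sizing recommendation.

```python
# Illustrative only: how much of a HANA data set could sit on the Dynamic
# Tiering Extended Storage (ES) host versus in memory, using the SPS 09 /
# SPS 10 / future percentages quoted above.
total_data_tb = 2.0                                   # hypothetical data set size
es_share = {"SPS 09": 0.20, "SPS 10": 0.40, "future": 0.70}

for release, share in es_share.items():
    warm_on_disk = total_data_tb * share
    hot_in_memory = total_data_tb - warm_on_disk
    print(f"{release}: up to {warm_on_disk:.1f} TB warm data on the ES host, "
          f"about {hot_in_memory:.1f} TB hot data left in memory")
```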

Near-Line Storage (NLS)

In addition to the reduction in the SAP HANA in-memory footprint that DT affords customers, Near-Line Storage should be considered as well. With NLS, data is moved out of the SAP HANA database proper to disk and classified as “cold” due to its infrequent access, and it can only be accessed read-only. SAP provides examples showing NLS can reduce the HANA database in-memory requirements by several terabytes (link below).

It is also important to note that neither the DT Extended Storage Host nor the NLS solution requires certified servers or storage, so not only has SAP given customers the ability to run SAP HANA in a reduced memory footprint, customers can run on standard x86 hardware as well.

There is a white paper authored by Priti Mishra, Staff Engineer, Performance Engineering, VMware, which is an excellent read for anyone considering the DT or NLS options: “Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN.”

Importance of the VMware Software Defined Data Center

To its credit, SAP has taken a leadership role with HANA’s in-memory columnar database computing capabilities, and as HANA has evolved, the sizing and hardware requirements have evolved as well. Rapid change and evolving requirements are givens in technology; the VMware Software-Defined Data Center provides a flexible and agile architecture to react effectively to change by recasting compute, network, and storage resources in a centrally managed manner.

As a concrete example of the flexibility VMware’s platform provides, Figure 1 illustrates the evolution of SAP HANA from SPS 07 to SPS 09. Customers who would like to take advantage of SAP HANA’s multi-temperature data management techniques but initially deployed SAP HANA on SPS 07 (all in-memory) can, through virtualization, reclaim and recast memory, storage, and network resources in their virtual HANA landscape to reflect the latest architectural advances and memory management techniques in SPS 10.

Figure 1. SAP HANA Platform: Evolving Hardware Requirements


Since SAP HANA can now run in a reduced memory footprint, customers who licensed HANA to be all in-memory can use virtualization to reclaim memory and deploy additional virtual databases and make HANA pervasive in their landscapes.

As a general rule, in any rapidly changing environment the VMware Software-Defined Data Center provides an agile platform that can accommodate change and also protect against capital hardware investments that may not be necessary in the future (certified vs. standard x86 hardware). For that matter, the cloud is a good option for deploying any rapidly changing application or database, in places like VMware vCloud Air, Virtustream, or Secure-24, to mention just a few.

Virtual SAP HANA Back on Track

After speaking with session attendees, customers, and partners at VMworld about SAP HANA’s multi-temperature management capabilities, I was happy to hear they will not be delaying their virtual HANA deployments because of the vSphere 6.0 certification roadmap timeline. As I said earlier, the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. It really is a worthwhile exercise to take a closer look at the temperature of your data, the age of your data, and your access requirements in order to take full advantage of all the tools and features SAP provides its customers.

I was also encouraged to hear from many session attendees that my presentation at VMworld brought the SDDC from concept closer to reality by demonstrating actual mission-critical database and application use cases. My future post-VMworld blogs will focus on how I deconstructed the SAP HANA Network Requirements document and transformed it into a virtual network design using VMware NSX from my desktop. I’ll also cover Software-Defined Storage, essentially translating SAP’s multi-temperature storage options into VMware Virtual Volumes and storage containers.

“SAP HANA SPS10 - SAP HANA Dynamic Tiering,” SAP Product Management

“Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN,” Priti Mishra, Performance Engineering, VMware

Blog: Bob Goldsand, “SAP HANA Dynamic Tiering and the VMware Software Defined Data Center”





Big Data Virtualization: Talks and Related Events at VMworld 2015

Here is a list of the Big Data technical talks and events at VMworld 2015 for your conference planning. The big data team at VMware will be delighted to see you at some or all of these events during the conference. Please register for them through the schedule builder on the VMworld website.

Sunday, 30th August

4:00pm VAPP6442-QT Quick Talk on VMware and Big Data

Monday, 31st August

12:30pm VAPP4567 – Big Data Partnering – Cloudera and VMware Work Together

3:00pm EXPERTS – Meet the Big Data Experts

Tuesday, 1st September

11:00am Book Signing at DigitalGuru Bookshop in Moscone Lobby

12:00pm Theater Talk and Book Signing at the VMware Booth in Exhibition Hall

1:00pm VAPP6428GD – Group Discussion on Big Data

2:00pm INF4566 – A Customer Deployment with Hadoop on vSphere

4:00pm VAPP4588 – Virtualizing Big Data – a Customer Panel

Wednesday, 2nd September

8:30am CNA4725 – Scalable Cloud-Native Apps

3:30pm INF4551 – Customer Case Study: Skyscape’s Hadoop-in-the-Cloud Deployment

Big Data Hands-on-Lab HOL-SDC-1609 – available on each day of the show

Big Data Extensions/Hadoop Demos – at the vSphere/VSOM pod on the VMware booth

VMworld 2015: Extreme Performance Series

Who loves virtual Performance? Who wants to learn more about it?

Everybody of course!

I’m very excited about this year’s Extreme Performance Series mini-track being hosted at VMworld San Francisco and Barcelona. These sessions are created and presented by VMware’s best and most distinguished performance engineers, architects, and gurus. I’ve tried to provide my personal thoughts on each session, but these few words will never do them justice. Hope to see you all there!

Continue reading

Big Data Extensions Version 2.2 – What’s New? A summary of the new features.

The new vSphere Big Data Extensions Version 2.2 shipped on June 5, 2015!

Here is a quick summary of the new features that appear in the 2.2 release. This is an exciting and much-awaited release. As always, refer to the technical documents and the release notes to get more detail on these subjects. 

• Support for the Latest Hadoop Distributions. BDE 2.2 supports the latest versions from the major Hadoop distribution vendors, including Bigtop 0.8, Cloudera CDH 5.4, Hortonworks HDP 2.2, MapR 4.1, and Pivotal PHD 3.0.

• Better Fully Qualified Domain Name (FQDN) Management. We found that some users had difficulty generating FQDNs within their network for newly cloned virtual machines. BDE can now generate and propagate meaningful host names in FQDN form for the new virtual machines that host the Hadoop nodes. The new FQDNs will be registered with a DNS server if you are using a Dynamic DNS server.

• Shrink clusters. You can now easily reduce (as well as expand) the number of worker virtual machines that belong to a running Hadoop cluster. The virtual machines targeted for shrinking are quiesced, withdrawn from the Hadoop cluster, and then deleted to completely release any resources that they used.

• Active Directory/Lightweight Directory Access Protocol (AD/LDAP) integration. You can use an AD/LDAP server to manage the accounts generated by BDE within the Hadoop nodes. You can specify the accounts to be Hadoop user accounts and/or service accounts in an AD/LDAP server.

• vSphere 6.0 Instant Clone. BDE will, at the user’s request, use Instant Clone technology to spin up new Hadoop VMs. This feature reduces both the time needed to spin up Hadoop VMs and the runtime footprint. It is optional: you can still choose the older “full clone” method if you prefer. We recommend that you start by using this new type of cloning for your test and development workloads.

• Centralised logging. You can configure BDE to direct logging information to an external syslog server, including Log Insight.

• Quiesce the BDE management server. You can quiesce the BDE management server with a command so that you can safely back up the management server’s data for your clusters.

• Automatic GUI installation. The BDE GUI is automatically registered with vCenter Server after BDE is deployed.

• Support for the Latest Partner Hadoop Management Tools. BDE 2.2 supports Cloudera Manager 5.3 and Ambari 1.7. You have more flexibility to deploy Hadoop clusters, including a compute-only cluster, an HBase-only cluster, a data-compute separated cluster, etc., even when using a partner Hadoop management tool.

• Support for the Latest Isilon Version. A fully automated process to deploy and manage compute-only clusters on OneFS 7.2.

• Big Data Extensions Upgrade. You can upgrade Big Data Extensions 2.1 to the current version, Big Data Extensions 2.2, and preserve all the data for the Hadoop clusters that were created using Big Data Extensions 2.1. All of your existing clusters can be managed by Big Data Extensions once the upgrade completes.

• Localization. BDE is localized to 6 languages including DE, FR, ZH_CN, ZH_TW, KO, and JA.

Confessions of an Energy Consciousness Mind

I have a confession. 

My data center kit has been using too much energy.

Having kit available at my disposal is great, but I have been wasting this resource when it’s not required by my workloads. And if there’s one thing I try to be conscious of, it’s energy consumption. Just ask my kids, whom I chase from room to room turning off lights, screens, and the lot when they aren’t using them.

But why not in the data center? Did you know that hosts typically use 60%+ of their peak power when idle?

Until recently, I had overlooked configuring my kit to use the vSphere Distributed Power Management (“DPM”) feature to manage power consumption and save energy.

With the release of vSphere 6.0, it’s a good time to review and take a deeper look into the capabilities and benefits of this feature.

What is VMware vSphere Distributed Power Management?

VMware vSphere Distributed Power Management is a feature included with the vSphere Enterprise and Enterprise Plus editions that dynamically optimizes cluster power consumption based on workload demands. When host CPU and memory resources are lightly used, DPM recommends evacuating workloads and powering off ESXi hosts. When workload CPU or memory utilization increases, or additional host resources are required, DPM brings the required set of hosts back online to meet the demands of HA or other workload-specific constraints by executing vSphere Distributed Resource Scheduler (“DRS”) in a “what-if” mode. DRS ensures that host power recommendations are consistent with the cluster constraints and the resources being managed by the cluster.
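As a rough illustration, here is a minimal Python (pyVmomi) sketch of enabling DPM in automated mode on an existing DRS cluster. The vCenter name, credentials, cluster name, and the relaxed certificate handling are placeholder lab assumptions, not a production recipe.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only: skip certificate checks
si = SmartConnect(host="vc.example.com",          # placeholder vCenter and credentials
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

# Find the target cluster by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")
view.Destroy()

# DPM settings live in the cluster configuration, alongside DRS.
spec = vim.cluster.ConfigSpecEx()
spec.dpmConfig = vim.cluster.DpmConfigInfo(
    enabled=True,
    defaultDpmBehavior=vim.cluster.DpmBehavior.automated,  # act on recommendations automatically
    hostPowerActionRate=3)                                  # mid-range power-action threshold (1-5)

task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# A production script would wait for 'task' to finish before disconnecting.
Disconnect(si)
```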

Beneath the covers there are key challenges that DPM addresses to enable effective power-savings capabilities:

  • Accurately Assessing Workload Resource Demand
  • Avoiding Frequent Power-on/Power-off of Host and Excessive vMotion Operations
  • Rapid Response to Workload Demand and Performance Requirements
  • Appropriate Host Selection for Power-on/Power-Off within Tolerable Host Utilization Ratios
  • Intelligent Redistribution of Workloads After Host Power-on/Power-Off

Once DPM determines the number of hosts needed to satisfy all workloads and relevant constraints, and DRS has distributed virtual machines across hosts to maintain resource allocation constraints and objectives, each powered-on host is free to handle its own host-level power management.

Hosts Entering and Exiting Standby

When a host is powered off by DPM, it is marked in vCenter Server as being in “standby” mode, indicating that it is powered off but available to be powered on when required. The host icon is updated with a crescent-moon overlay symbolizing a “sleeping” state for the host.

DPM can awaken hosts from the standby mode using one of three power management options:

  1. Intelligent Platform Management Interface (IPMI)
  2. Hewlett Packard Integrated Lights-Out (iLO), or
  3. Wake-On-LAN (WOL).

Each protocol requires its own hardware support and configuration. If a host does not support any of these protocols it cannot be put into standby by DPM. If a host supports multiple protocols, they are used in the following order: IPMI, iLO, WOL. This article is focused on the use of the first two.
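As a hedged follow-on sketch in the same vein, this is how a host’s BMC details might be registered through the vSphere API so DPM can wake it via IPMI. The host name, BMC IP/MAC, and credentials are placeholders, and it reuses the ‘si’ connection from the sketch above.

```python
from pyVmomi import vim

# Placeholder host and BMC details; substitute the values for your own environment.
host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                            vmSearch=False)
ipmi = vim.host.IpmiInfo(bmcIpAddress="192.0.2.10",
                         bmcMacAddress="00:50:56:00:00:01",
                         login="ipmi-admin",
                         password="changeme")
host.UpdateIpmi(ipmi)   # once set, DPM can use IPMI to bring this host out of standby
```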

Continue reading

Help us improve vSphere!

Are you a vSphere user? If so, we want to hear from you. Attached is our new survey. Help us build a better product and make sure our features are aligned with your business needs.



Enhancing User Experience: Customization of vRealize Automation 6.2.x Email Notifications

User Experience (“UX”) focuses on the intimate understanding of your users. What is it that they need or desire, what do they value, what are their abilities, as well as their limitations?

As you embark upon the journey to the software-defined data center (SDDC), think and architect in terms of the user experience in addition to “boxes and arrows.”

  • What are the desired UX outcomes for those consuming the service(s)?
  • Have you considered the UX in terms of its usefulness, usability, desirability, accessibility, credibility, and its value?

In addition to fundamental tenant and business group designs, and entitlement and service catalog designs, one such area for UX consideration is the messages provided to those consuming the services of the software-defined data center.

For a moment, imagine you are providing automated infrastructure delivery to multiple business segments of a large media and entertainment organization, each with its own distinct brand. The segments are built upon their individual brand and identity.

  • Do you centrally brand the service that you offer or do you tailor the service to each tenant business segment?
  • How would this change if instead the services were used to provide automated infrastructure delivery only to your IT Operations team and not direct end users?

The messages that appear in the user’s inbox are part of the experience. VMware vRealize Automation can send automatic notifications for several types of events, such as the successful completion of a catalog request or a required approval workflow. System administrators can configure the global email servers, senders, and recipients that process email notifications.

Tenant Administrators can override those defaults, or add their own servers, senders and recipients if no global attributes are specified. They may even select which events will cause notifications to be sent to their users. Each component, such as the service catalog or infrastructure-as-a-service, can define events that can trigger notifications.


Additionally, each user can choose if they wish to receive notifications. Users either receive all notifications configured by the Tenant Administrator or no notifications.

Notifications may also have links that allow the user to respond interactively. For example, a notification about a request that requires approval can have one link for approving the request and one for rejecting it. When a user clicks one of the links, a new email opens with automatically generated content. The user can send the email to complete the approval.

Messages can be easily and beautifully customized using a simple, powerful template engine. These may be customized per-locale, per-tenant, and per-notification scenario. You have the ability to define and craft the desired user experience for any notification.

Continue reading

vCenter Server 6 Deployment Topologies and High Availability

Architectural changes to vSphere 6:

vCenter Server 6 has some fundamental architectural changes compared to vCenter Server 5.5. The multitude of components that existed in vCenter Server 5.x has been consolidated in vCenter Server 6 into only two components: the vCenter Management Server and the Platform Services Controller, formerly vCenter Single Sign-On.

The Platform Services Controller (PSC) provides a set of common infrastructure services encompassing:

  • Single Sign-On (SSO)
  • Licensing
  • Certificate Authority

The vCenter Management Server consolidates all the other components, such as the Inventory Service and Web Client services, along with its traditional management components. The vCenter Server components can be deployed with either an embedded or an external PSC. Care should be taken to understand the critical differences between the two deployment models: once deployed, you cannot move from one mode to the other in this version.

Continue reading