
Category Archives: ESXi

SIOC: I/O Distribution with Reservations & Limits – Part 2

Part 1 of this series explained the reservation capabilities of mClock, the new ESXi storage scheduler in vSphere 6.0, and how to calculate the number of entitled IOPS during times of contention.  This article expands on that topic with a couple of new scenarios.  The previous article assumed that all of the VMs were consuming storage resources evenly at the same time; in the real world, some VMs will be consuming resources while others are idle.  These scenarios should help explain how IOPS are distributed when there are idle VMs in the environment.

Scenario 3
In this scenario the third VM is idle while the other three VMs are consuming storage IOPS.  For the sake of this example, assume that VM3 is consuming only 10 IOPS.

[Figure: four VMs across two hosts sharing a datastore that provides 8000 IOPS]

Unlike memory reservations, the storage scheduler will allow the unused resources to be consumed by other VMs.

The first step is to determine what percentage of the resources each host will receive. There are 5000 shares in total across both hosts: Host1 has 3500/5000 (70%) of the shares, and Host2 has 1500/5000 (30%).  This results in the following entitled IOPS for each host.

Host1: 70% * 8000 IOPS = 5600 IOPS
Host2: 30% * 8000 IOPS = 2400 IOPS

Once the I/O distribution for the hosts is calculated, each VM's entitlement is calculated using the share distribution within its host.

VM1: (1000/3500) * 5600 = 1600 IOPS
VM2: (2500/3500) * 5600 = 4000 IOPS
VM3: (500/1500) * 2400 = 800 IOPS (Only using 10 IOPS)
VM4: (1000/1500) * 2400 = 1600 IOPS

Since VM3 is only using 10 IOPS, the 790 unused IOPS are distributed to the remaining VMs on the host, entitling VM4 to 2390 IOPS.  However, VM4 has a limit of 2000 IOPS, leaving 390 IOPS that can still be distributed.  Those 390 IOPS are then distributed across the VMs on Host1.

In the end, this is how the IOPS allocation would be distributed:

VM1: 1600 + ((1000/3500) * 390) = 1711 IOPS
VM2: 4000 + ((2500/3500) * 390) = 4279 IOPS
VM3: 10 IOPS
VM4: 2000 IOPS (Due to limit)

Scenario 4
Now let's take the same environment but calculate the effective IOPS if VM1 were the idle VM. Again, for the sake of this example, the idle VM will be consuming 10 IOPS.

[Figure: the same configuration as Scenario 3 – a datastore providing 8000 IOPS]

The first thing to do is calculate the percentage of the resources each host will receive. There are again 5000 total shares across both hosts, and since the environment has not changed, the entitled IOPS per host are unchanged from the previous example.

Host1: 70% * 8000 IOPS = 5600 IOPS
Host2: 30% * 8000 IOPS = 2400 IOPS

Once the I/O distribution for the hosts is calculated, each VM's entitlement is calculated using the share distribution within its host.

VM1: (1000/3500) * 5600 = 1600 IOPS (Only using 10 IOPS)
VM2: (2500/3500) * 5600 = 4000 IOPS
VM3: (500/1500) * 2400 = 800 IOPS
VM4: (1000/1500) * 2400 = 1600 IOPS

Since VM1 is only using 10 IOPS, the 1590 unused IOPS are distributed to the remaining VMs on the host, entitling VM2 to 5590 IOPS.  However, VM2 has a limit of 5000 IOPS, leaving 590 IOPS that can still be distributed.  Those 590 IOPS are then distributed across the VMs on Host2.

In the end, this is how the IOPS allocation would be distributed:

VM1: 10 IOPS
VM2: 5000 IOPS (Due to limit)
VM3: 800 + ((500/1500) * 590) = 997 IOPS
VM4: 1600 + ((1000/1500) * 590) = 1993 IOPS
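
To make both scenarios concrete, below is a minimal Python sketch of the arithmetic described above. It models only the math in these examples, not the actual mClock implementation; the VM demand values and the redistribution loop are assumptions made for illustration.

```python
# A minimal sketch of the two-level share math walked through above --
# an illustration of the arithmetic, not the actual mClock scheduler.

TOTAL_IOPS = 8000

vms = {
    # Scenario 4: VM1 is the idle VM, consuming only 10 IOPS.
    "VM1": {"host": "Host1", "shares": 1000, "limit": None, "demand": 10},
    "VM2": {"host": "Host1", "shares": 2500, "limit": 5000, "demand": 10000},
    "VM3": {"host": "Host2", "shares": 500,  "limit": None, "demand": 10000},
    "VM4": {"host": "Host2", "shares": 1000, "limit": 2000, "demand": 10000},
}

def entitlements(vms, total_iops):
    """Split datastore IOPS across hosts by aggregate shares, then split
    each host's IOPS across its VMs by individual shares."""
    total_shares = sum(v["shares"] for v in vms.values())
    ent = {}
    for name, v in vms.items():
        host_shares = sum(w["shares"] for w in vms.values()
                          if w["host"] == v["host"])
        host_iops = total_iops * host_shares / total_shares
        ent[name] = host_iops * v["shares"] / host_shares
    return ent

def redistribute(ent, vms):
    """Hand IOPS a VM cannot use (idle demand or configured limit) to VMs
    with headroom, in share proportion -- same host first, then the rest."""
    cap = {n: min(v["demand"], v["limit"] or float("inf"))
           for n, v in vms.items()}
    for _ in range(len(vms)):                  # enough passes to settle
        spare_found = False
        for name, v in vms.items():
            spare = ent[name] - cap[name]
            if spare <= 1e-9:
                continue
            spare_found = True
            ent[name] = cap[name]
            same_host = [n for n in vms if n != name
                         and vms[n]["host"] == v["host"] and ent[n] < cap[n]]
            pool = same_host or [n for n in vms
                                 if n != name and ent[n] < cap[n]]
            if not pool:                       # everyone is capped
                continue
            pool_shares = sum(vms[n]["shares"] for n in pool)
            for n in pool:                     # spread spare by shares
                ent[n] += spare * vms[n]["shares"] / pool_shares
        if not spare_found:
            break
    return ent

final = redistribute(entitlements(vms, TOTAL_IOPS), vms)
print({n: round(iops) for n, iops in final.items()})
# -> {'VM1': 10, 'VM2': 5000, 'VM3': 997, 'VM4': 1993}
```

Swapping the 10-IOPS demand from VM1 to VM3 reproduces Scenario 3: roughly 1711, 4279, 10, and 2000 IOPS.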

Hopefully this helps explain how entitled IOPS are calculated and distributed by the mClock storage scheduler in vSphere 6.0.  The important thing to take away is that unused IOPS are not held back and wasted; they are automatically distributed across the environment, providing the most efficient use of your resources.

VMware Tools Lifecycle: Why Tools Can Drive You Crazy (and How to Avoid it!)

There has been a lot of buzz around vSphere Lifecycle since VMworld. My last few blog posts on VMware Tools have had a tremendous amount of traffic, so I decided to continue the theme and give you all more of what it appears you want. So in this post, LET'S TALK TOOLS!


Big Data on vSphere with HBase

This article describes a set of performance tests that were conducted on HBase, a popular data management tool that is frequently used with Hadoop, running on VMware vSphere 6 and provisioned by the vSphere Big Data Extensions tool. The work described here was done by Xinhui Li, who is a staff engineer in the Big Data team in VMware’s R&D Labs in Beijing. Xinhui’s biography and background details are given at the end of the article.

What is HBase?

HBase is an Apache project that is designed to handle very large amounts of data on the Hadoop platform. HBase is often described as providing the functionality of a NoSQL database running on top of Hadoop. It combines the scalability of Hadoop, through its use of the Hadoop Distributed File System (HDFS) for storage, with real-time access to the data. HBase can handle billions of rows of data and very large numbers of columns. Along with Hadoop, HBase runs on clusters of commodity hardware that form a distributed system. The HBase architecture is made up of RegionServers that run on the worker nodes, controlled by the HBase Master Server.


Virtualizing SAP HANA Databases Greater than 1TB on vSphere 5.5

VMworld 2015 Session Recap

I'm almost fully recovered from VMworld, which was probably one of the busiest and most enjoyable VMworlds of my six-plus years at VMware, thanks to the interaction with attendees, customers, and partners.  I'll be doing a series of post-VMworld blogs focused on my SAP HANA Software-Defined Data Center sessions, but my first blog covers the misconceptions associated with sizing SAP HANA databases on vSphere. There are many good reasons to upgrade to vSphere 6.0; going beyond the 1TB monster virtual machine limit of vSphere 5.5 when deploying SAP HANA databases is not necessarily one of them.

SAP HANA is no longer just an in-memory database; it is now a data management platform.  It is NOT confined by the size of available memory, since SAP HANA warm data can be stored on disk in a columnar format and accessed transparently by applications.

What this means is that the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. Multi-terabyte SAP HANA databases can easily be virtualized with vSphere 5.5 using Dynamic Tiering, Near-Line Storage, and the other memory management techniques SAP has introduced to the SAP HANA Platform to optimize and reduce HANA's in-memory footprint.

SAP HANA Dynamic Tiering (DT)

SAP HANA Dynamic Tiering was introduced last year in Support Pack Stack (SPS) 09 for use with BW. Dynamic Tiering allows customers to seamlessly manage their disk-based SAP HANA "Warm Data" on an Extended Storage (ES) Host, essentially placing data that does not need to be in memory on disk. SAP's guidance for the Dynamic Tiering option is that in SPS 09 up to 20% of in-memory data can reside on the ES Host, in SPS 10 up to 40%, and in the future up to 70% of SAP HANA data will be able to reside on the ES Host. So, in the future, the majority of SAP HANA data that was once in-memory can reside on disk.

Near-Line Storage (NLS)

In addition to the reduction of the SAP HANA in-memory footprint that DT affords customers, Near-Line Storage should be considered as well. With NLS, data is moved out of the SAP HANA database proper to disk and classified as "Cold" because it is infrequently accessed; it is available read-only. SAP provides examples showing NLS can reduce HANA's in-memory requirements by several terabytes (link below).

It is also important to note that neither the DT Extended Storage Host nor the NLS solution requires certified servers or storage. So not only has SAP given customers the ability to run SAP HANA in a reduced memory footprint, customers can run it on standard x86 hardware as well.

There is a white paper authored by Priti Mishra, Staff Engineer, Performance Engineering, VMware, which is an excellent read for anyone considering the DT or NLS options: "Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN" (link below).

Importance of the VMware Software Defined Data Center

To their credit, SAP has taken a leadership role with HANA's in-memory columnar database computing capabilities, and as HANA has evolved, the sizing and hardware requirements have evolved as well. Rapid change and evolving requirements are givens in technology; the VMware Software-Defined Data Center provides a flexible and agile architecture to react effectively to change by recasting compute, network, and storage resources in a centrally managed manner.

As a concrete example of the flexibility the VMware platform provides, Figure 1 illustrates the evolution of SAP HANA from SPS 07 to SPS 09. Customers who would like to take advantage of SAP HANA's multi-temperature data management techniques but initially deployed SAP HANA on SPS 07 (all in-memory) can, through virtualization, reclaim and recast memory, storage, and network resources in their virtual HANA landscape to reflect the latest architectural advances and memory management techniques in SPS 10.

Figure 1. SAP HANA Platform: Evolving Hardware Requirements


Since SAP HANA can now run in a reduced memory footprint, customers who licensed HANA to be all in-memory can use virtualization to reclaim memory, deploy additional virtual databases, and make HANA pervasive in their landscapes.

As a general rule, in any rapidly changing environment the VMware Software-Defined Data Center provides an agile platform that can accommodate change while protecting against capital hardware investments that may not be necessary in the future (certified vs. standard x86 hardware). For that matter, the cloud is a good option for deploying any rapidly changing application or database, in places like VMware vCloud Air, Virtustream, or Secure-24, to mention a few.

Virtual SAP HANA: Back on Track

After speaking at VMworld with session attendees, customers, and partners about SAP HANA's multi-temperature management capabilities, I was happy to hear they will not be delaying their virtual HANA deployments due to the vSphere 6.0 certification roadmap timeline. As I said earlier, the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. It really is a worthwhile exercise to take a closer look at the temperature of your data, the age of your data, and your access requirements in order to take full advantage of all the tools and features SAP provides its customers.

I was also encouraged to hear from many session attendees that my presentation at VMworld brought the SDDC from concept closer to reality by demonstrating actual mission-critical database and application use cases. My future post-VMworld blogs will focus on how I deconstructed the SAP HANA network requirements document and transformed it into a virtual network design using VMware NSX from my desktop. I'll also cover software-defined storage, essentially translating SAP's multi-temperature storage options into VMware Virtual Volumes and storage containers.

"SAP HANA SPS 10 – SAP HANA Dynamic Tiering"; SAP Product Management

"Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN"; Priti Mishra, Performance Engineering, VMware

"SAP HANA Dynamic Tiering and the VMware Software Defined Data Center"; Bob Goldsand, VMware blog


Open-VM-Tools (OVT): The Future of VMware Tools for Linux


For those of you who attended my VMworld sessions with Salil Suri, we dropped a hint that there are things happening with Open-VM-Tools (OVT). We at VMware know that vSphere lifecycle is a difficult task to take on and that updating VMware Tools across hundreds or thousands of virtual machines is an ever-increasing burden. There have been some initiatives inside of VMware to help mitigate the amount of work needed to orchestrate this task, and I think you all will find them very interesting and exciting.

VMware Tools 10.0.0 Released

An announcement we made during our repeat session "INF5123 – Managing vSphere Deployments and Upgrades – Part 2" at VMworld last week was that VMware Tools 10.0.0 has been released and is available for download in MyVMware.

New Capabilities in Project Photon OS Technical Preview 2 (TP2)

Project Photon OS, the small-footprint container runtime from VMware that was first announced back in April, is making great progress. Several new enhancements to this open source initiative are especially interesting to vSphere administrators and those responsible for deployment and administration.

PXE Boot and Network Installation

Operating system ISO images may be the lingua franca of install media due to their portability and ease of use across a wide range of environments, but a proper PXE boot infrastructure can be a very valuable enhancement to both lab test beds and production environments. Those who have invested the effort in PXE will be pleased to know that Photon OS TP2 can easily be booted from the network for quick installation. And by quick, we mean really quick! Photon OS is purpose-built for containers and does not include the extraneous packages found in general-purpose distributions. Administrators can expect an interactive installation to take less than a minute, and the majority of that time will likely be spent keying in a complex root password twice.

The source of the network installation is also flexible, ranging from an internal HTTP server to a public Internet-based repository for environments that want to keep their on-site infrastructure minimal.

Scripted Installation

Manually installing guests in vSphere is fine for one-off efforts, troubleshooting, or other experiments, but to really operationalize any process, automation is necessary. Photon OS TP2 now supports scripted installation, which can be used with either the network or ISO installation options.

While it accomplishes the same goal as a traditional kickstart, the Photon OS scripted install differs somewhat in implementation. The first and most obvious difference is the configuration file format: instead of a plain text file with simple directives, Photon OS leverages JSON. This is easy enough to edit by hand but also opens up possibilities for programmatic manipulation, if desired. Another major difference is the range of directives – Photon OS is streamlined by nature and does not offer exhaustive control over aspects such as disk partition layout. There is, however, a means of running an arbitrary script at the end of the installation that should satisfy the great majority of customization requirements.
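
As a sketch of that programmatic angle, the short Python example below stamps out per-host install configs as JSON. The key names shown (hostname, disk, postinstall) are hypothetical stand-ins, not the documented Photon OS schema; consult the project documentation for the directives the installer actually accepts.

```python
import json

def make_install_config(hostname, disk="/dev/sda"):
    # Key names here are illustrative placeholders, not a documented
    # Photon OS schema -- check the installer docs for real directives.
    return {
        "hostname": hostname,
        "disk": disk,
        "postinstall": [          # arbitrary script run after installation
            "#!/bin/sh",
            "systemctl enable docker",
        ],
    }

# Generate one config per container host -- the kind of programmatic
# manipulation a plain-text kickstart file makes awkward.
for i in range(1, 4):
    with open(f"ks-node-{i:02d}.json", "w") as f:
        json.dump(make_install_config(f"photon-node-{i:02d}"), f, indent=2)
```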

Guest OS Customization

In a vSphere environment, automated installation is great, but it is typical to deploy new VMs from a template or the Content Library – one of the new features of vSphere 6. Photon OS TP2 now has the necessary internals to support the guest OS customization that must occur after cloning a VM template. This is the procedure by which unique settings such as the hostname and network configuration are properly assigned. In TP2, all of the typical naming and addressing options are supported.


RPM-OSTree

A new approach to OS deployment known as RPM-OSTree debuts in Photon OS TP2. This is an open source mechanism that combines aspects of image-based and package-based OS configuration, aimed at improving the consistency of deployed systems. Instead of updating packages on farms of individual servers through some means of configuration management, updates are made to a central reference system that is subsequently synchronized to clients.

While this approach may seem restrictive, it is actually very well aligned with a container runtime instance that needs just a small number of packages installed. With advantages in areas such as stability and security, server instances become largely immutable and are not subject to the configuration drift found in a handcrafted environment.

Photons Everywhere You Look!

Photon OS is a great open source Linux container runtime, but it is also an important ingredient in other VMware cloud-native infrastructure stacks. For instance, vSphere Integrated Containers uses a “pico” edition of the Photon OS Linux kernel for the parent VM that is repeatedly forked with Instant Clone to run containers. This “pico” edition is smaller than is practical for many Photon OS environments, but when used as an embedded component of vSphere Integrated Containers, the image can be very slim. Photon OS is also present as a container runtime in the distributed control plane that makes up Photon Controller, part of Photon Platform, the new VMware infrastructure optimized for running cloud native apps at extreme scale.

For developers, Photon OS is included in the VMware AppCatalyst product as well as through HashiCorp Atlas in the form of a Vagrant box. Speaking of Vagrant, another important new feature of Photon OS TP2 is full support for shared folders (HGFS) when used with VMware desktop hypervisors.

Getting Photon OS TP2

Photon OS continues to be offered as an open source project available on GitHub, but for the most part that venue is geared toward developers from VMware and other collaborators working on the actual code. vSphere administrators will primarily be interested in the binary ISO release, which now comes in two sizes, optimized for minimal or full installations.

Take a look at Project Photon OS Technical Preview 2 and explore containers on your trusted vSphere infrastructure today!

How To Choose The Best Infrastructure Stack For Your Cloud-Native Applications

Cloud-native applications are gaining mindshare, especially containerized apps that align well with the requirements of DevOps workflows, microservices, and immutable infrastructure trends. Developers and infrastructure experts must soon identify the platform for their next-generation workloads. Wouldn’t it be great if existing investments in skills, infrastructure, and technology ecosystem continued to offer the best environment to run all applications — including containerized apps?

Acknowledging that a single architecture may not satisfy the sometimes mutually exclusive requirements for traditional and third platform applications, VMware is gearing up for two new approaches in support of containerized apps.

Whether integrating with existing vSphere infrastructure to run alongside other workloads, or building an entirely new footprint optimized for high scale and churn, VMware has all of the bases covered!

vSphere Integrated Containers – Technology Preview

For those customers needing to support developers that are in the initial stages of deconstructing monolithic enterprise applications through microservices, Agile development, and DevOps workflows, the vSphere Integrated Containers (VIC) approach will serve them well.

VIC takes the basic constructs specified by the Open Container Initiative and maps them to the vSphere environment, exposing a virtual container host that is compatible with standard Docker client tools but backed by a flexible pool of resources to accommodate apps of many sizes. In this model, VMs essentially become containers, and other aspects, such as storage and network, are mapped to corresponding elements of the vSphere platform. A tiny variant of Photon OS forms the basis of the container runtime in VIC. Performance and density are optimized through the use of Instant Clone – a feature of vSphere 6 that enables a running VM to be rapidly forked so that child VMs consume only the resources that differ from the parent base image.

Based on Project Bonneville technology, this is the most seamless way to provide a Docker container runtime environment with several advantages over bare-metal Linux container architectures. Hardware-level isolation of individual containers paves the way for capabilities in VIC that cannot be matched through a shared Linux kernel model.

Inherent benefits of the vSphere platform such as administrator tool choices — from the rich Web Client GUI to the productivity-boosting PowerCLI – are further extended by comprehensive application management and monitoring capabilities in vSphere and vRealize. These resource management features deliver enhanced abilities to meet enterprise SLAs for compute, network, and storage.

Photon Platform – Technology Preview

For those customers with new initiatives that have advanced cloud-native requirements, VMware is introducing the Photon Platform.  The platform is a collection of technologies that provide infrastructure with just the features needed to securely run containerized applications, controlled by a massively-scalable distributed management plane with an API-first design approach. Photon Platform benefits from the solid heritage of the VMware ESXi hypervisor but favors scale and speed over the rich management features offered by vSphere.

Photon Platform consists of the following components:

  • Photon Machine
    • Secure ESX Microvisor based on the proven core of VMware ESXi and optimized for container-based workloads
    • Photon OS – the lightweight Linux container runtime designed to integrate with VMware infrastructure
  • Photon Controller
    • Distributed management plane provides massive scale and resiliency
    • API/CLI for flexible integration with DevOps workflows

Photon Platform will also provide an extensible provisioning capability that allows administrators to quickly instantiate popular consumption surfaces for containerized applications such as Cloud Foundry, Kubernetes, or Mesos.

Scale, Speed, and Churn

For developers on the cutting edge of application architecture, a pattern is emerging that favors re-deployment over painstaking configuration management approaches often found in the traditional datacenter. This trend, sometimes called immutable infrastructure, forces deployments to be described programmatically and helps eliminate human bottlenecks and errors. Configuration changes can require many new VMs or containers to be deployed while old ones are rapidly destroyed, even further amplified when multiple development and test environments must also be delivered. These frequent deployments are automated, essentially eliminating the need for rich graphical interfaces and comprehensive wizards. Photon Platform foregoes full-featured centralized management tools, as they do not add the same value here that they do in traditional datacenter environments.

How to Choose

While VIC will quickly launch a container VM on demand, the magnitude would typically be in the tens, or possibly hundreds, at a time for an application. Photon Platform, on the other hand, is designed for environments where thousands or tens of thousands of containers are needed in a very short time – imagine how pleased your developers will be to learn that they can have a new Kubernetes endpoint with 1,000 nodes available for use within minutes — and another one a few minutes later!

Regardless of your cloud-native infrastructure needs, VMware will continue to be your trusted partner extending a strong record of innovation. Think of vSphere Integrated Containers as the enterprise-grade onramp to containerized applications, leveraging existing investments in technology and skillsets. Imagine Photon Platform as the next-generation infrastructure to support future initiatives that require incredible scale and churn for a range of popular container-centric consumption surfaces.

Both vSphere Integrated Containers and Photon Platform are currently Technology Previews. Please contact your VMware account team for more information or to learn about potential opportunities to participate in private betas.

Technology Preview: Enriching vSphere with hybrid capabilities


Today VMware is revealing a Technology Preview of Project SkyScraper, a new set of hybrid cloud capabilities for VMware vSphere that will enable customers to confidently extend their data center to the public cloud, and vice versa, seamlessly operating across boundaries while providing enterprise-level security and business continuity.

At VMworld, we will demonstrate live workload migration with Cross-Cloud vMotion and Content Sync between on-premises and vCloud Air.  These features will complement VMware vCloud® Air™ Hybrid Cloud Manager™ – a free, downloadable solution for vSphere Web Client users, with optional fee-based capabilities. Hybrid Cloud Manager consolidates various capabilities such as workload migration, network extension and improved hybrid management features into one easy-to-use solution for managing workloads in vCloud Air from the vSphere Web Client.

Cross-Cloud vMotion is a new technology based on vSphere vMotion that allows customers to seamlessly migrate running virtual machines between their on-premises environments and vCloud Air. Cross-Cloud vMotion can be used via the vSphere Web Client, enabling rapid adoption with minimal training. The flexibility provided by this technology gives customers the ability to securely migrate virtual machines bi-directionally without compromising uptime; all vMotion guarantees are maintained.

Content Sync will allow customers to subscribe to an on-premises Content Library and seamlessly synchronize VM templates, vApps, ISOs, and scripts with their content catalog in vCloud Air with a single click. This feature will ensure consistency of content between on-premises environments and the cloud, eliminating an error-prone manual sync process.

Learn more about these two capabilities under Project SkyScraper by visiting us at the VMware booth at VMworld 2015.

VMworld US 2015 Spotlight Session: Project Capstone, a Collaboration between VMW, HP & IBM

No Application Left behind

This year at VMworld 2015 US in San Francisco, over 40 sessions focused on Business Critical Applications and databases will be delivered by a broad cast of VMware experts. These experts include VMware product specialists, partners, customers, and end users (developers and data scientists).

One specific session that we would like to shine the spotlight on is VAPP6952-S, "VMware Project Capstone", in which VMware, HP, and IBM will announce a collaborative effort to virtualize the most demanding applications. As a result of this partnership, we can now, more than ever, confidently claim that all applications and databases are candidates for virtualized infrastructure.  This joint effort, which utilizes an HP Superdome X and an IBM FlashSystem with massive 120-vCPU VMs on vSphere 6 running Oracle 12c, constitutes the most significant advancement in the virtualization of Business Critical Applications in many years.

The session takes place Monday, August 31st, at 5 PM. Join us to learn about this game-changing initiative.


VMware Project Capstone, a Collaboration of VMware, HP and IBM, Driving Oracle to Soar Beyond the Clouds Using vSphere 6, an HP Superdome X and an IBM FlashSystem®

Abstract: When three of the most historically significant and iconic technology companies join forces, even the sky is not the limit.  VMware, HP and IBM have collaborated on a project whose scope both eradicates the long-accepted boundaries of virtualization for extreme high performance and establishes a new approach to cooperative solution building.

The Superdome X is HP's first Xeon-based Superdome, and when combined with an IBM FlashSystem® and virtualized with vSphere 6, the raw capabilities of this stack challenge the imagination and dispel previously held notions of performance limitations in virtualized environments.  The Superdome X and the FlashSystem comprise a unique stack for all Business Critical Applications and databases. The most demanding environments can now be virtualized. It is no longer necessary for VMware to qualify its claim that 99.9% of all applications and databases are candidates for virtualized infrastructure, as that number is now 100%.  This spotlight session features senior executive management from VMware, HP, and IBM and introduces the test results of this unprecedented collaborative effort.

Key Takeaways:

  1. The methodologies that are being used to drive the Superdome X and the IBM FlashSystem ® to the far edges of known performance.
  2. The reasons behind the joint effort of these three renowned companies as well as the aspirations for this collaboration.
  3. An understanding of how this new landmark architecture can affect the industry and benefit customers who have extreme but broad performance requirements.