
Category Archives: Performance

SIOC: I/O Distribution with Reservations & Limits – Part 2

Part 1 of this series explains the new reservation capabilities of mClock, the ESXi storage scheduler in vSphere 6.0, and shows how to calculate the number of entitled IOPS during times of contention.  This article expands on that topic with a couple of new scenarios.  The previous article assumed that all the VMs were consuming storage resources evenly at the same time.  In the real world, though, some VMs will be consuming resources while others sit idle.  These scenarios should help explain how IOPS are distributed when there are idle VMs in the environment.

Scenario 3
In this scenario the third VM is idle, while the other three VMs are consuming storage IOPS.  For the sake of this example, assume that VM3 consumes only 10 IOPS.

[Figure: two ESXi hosts and four VMs, with their shares and limits, sharing a datastore that provides 8000 IOPS]

Unlike memory reservations, the storage scheduler allows unused resources to be consumed by other VMs.

The first step is to determine the percentage of the resources each host will receive, based on how many of the total shares are assigned to its VMs. In this example there are a total of 5000 shares across all hosts: Host1 has 3500/5000 (70%) of the shares, and Host2 has 1500/5000 (30%).  This results in the following entitled IOPS for each host.

Host1: 70% * 8000 IOPS = 5600 IOPS
Host2: 30% * 8000 IOPS = 2400 IOPS

Once the I/O distribution for the hosts is calculated, each VM's entitled resources are calculated using the share distribution within its host.

VM1: (1000/3500) * 5600 = 1600 IOPS
VM2: (2500/3500) * 5600 = 4000 IOPS
VM3: (500/1500) * 2400 = 800 IOPS (only using 10 IOPS)
VM4: (1000/1500) * 2400 = 1600 IOPS

Since VM3 is only using 10 IOPS, the 790 unused IOPS would be distributed to the remaining VMs on its host.  In this case, VM4 would be entitled to 2390 IOPS.  However, VM4 has a limit of 2000 IOPS, leaving 390 IOPS that can still be distributed.  Those 390 IOPS are then distributed across the VMs on Host1, in proportion to their shares.

In the end, this is how the IOPS allocation would be distributed:

VM1: 1600 + ((1000/3500) * 390) = 1711 IOPS
VM2: 4000 + ((2500/3500) * 390) = 4279 IOPS
VM3: 10 IOPS
VM4: 2000 IOPS (Due to limit)
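The two-level distribution above (shares split across hosts, then across each host's VMs, with unused or over-limit IOPS spilling over to VMs that can still absorb them) can be sketched in Python. This is an illustrative model of the arithmetic in these examples, not the actual mClock implementation; the `demand` value for the idle VM is the assumed 10 IOPS.

```python
import math

def water_fill(pool, members):
    """Distribute `pool` IOPS proportionally by shares among members,
    respecting each member's cap; returns whatever cannot be placed."""
    while pool > 1e-9:
        open_vms = [m for m in members if m["alloc"] + 1e-9 < m["cap"]]
        if not open_vms:
            return pool                      # everyone is capped
        total = sum(m["shares"] for m in open_vms)
        leftover = 0.0
        for m in open_vms:
            grant = pool * m["shares"] / total
            overflow = max(0.0, m["alloc"] + grant - m["cap"])
            m["alloc"] = min(m["alloc"] + grant, m["cap"])
            leftover += overflow             # over-limit IOPS go back in the pool
        pool = leftover
    return 0.0

def distribute(total_iops, vms):
    """vms: name -> dict(host, shares, limit=None, demand=None).
    Returns name -> entitled IOPS."""
    for v in vms.values():
        v["alloc"] = 0.0
        # A VM can absorb no more than its demand or its limit.
        v["cap"] = min(v.get("limit") or math.inf, v.get("demand") or math.inf)
    hosts = {}
    for v in vms.values():
        hosts.setdefault(v["host"], []).append(v)
    total_shares = sum(v["shares"] for v in vms.values())
    spill = 0.0
    # Step 1: split the datastore's IOPS across hosts by aggregate shares,
    # then across each host's VMs by their shares.
    for members in hosts.values():
        host_pool = total_iops * sum(m["shares"] for m in members) / total_shares
        spill += water_fill(host_pool, members)
    # Step 2: IOPS a host could not place (idle or limited VMs) spill over
    # to any VM, on any host, that can still absorb them.
    water_fill(spill, list(vms.values()))
    return {name: round(v["alloc"]) for name, v in vms.items()}

# Scenario 3: VM3 idle at 10 IOPS, 8000 IOPS total.
vms = {
    "VM1": {"host": "Host1", "shares": 1000},
    "VM2": {"host": "Host1", "shares": 2500, "limit": 5000},
    "VM3": {"host": "Host2", "shares": 500, "demand": 10},
    "VM4": {"host": "Host2", "shares": 1000, "limit": 2000},
}
print(distribute(8000, vms))
# → {'VM1': 1711, 'VM2': 4279, 'VM3': 10, 'VM4': 2000}
```

Running this with the scenario's values reproduces the article's final allocation, including the spillover of VM4's over-limit 390 IOPS back to Host1.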

Scenario 4
Now let’s take the same environment, but calculate the effective IOPS if VM1 were the idle VM. Again, for the sake of this example, the idle VM will be consuming 10 IOPS.

[Figure: the same two hosts and four VMs sharing a datastore that provides 8000 IOPS, now with VM1 idle]

The first step is again to calculate the percentage of the resources each host will receive. There are still a total of 5000 shares across all hosts; since the environment has not changed, the entitled IOPS per host are unchanged from the previous scenario.

Host1: 70% * 8000 IOPS = 5600 IOPS
Host2: 30% * 8000 IOPS = 2400 IOPS

Once the I/O distribution for the hosts is calculated, each VM's entitled resources are calculated using the share distribution within its host.

VM1: (1000/3500) * 5600 = 1600 IOPS (Only using 10 IOPS)
VM2: (2500/3500) * 5600 = 4000 IOPS
VM3: (500/1500) * 2400 = 800 IOPS
VM4: (1000/1500) * 2400 = 1600 IOPS

Since VM1 is only using 10 IOPS, the 1590 unused IOPS would be distributed to the remaining VMs on its host.  In this case, VM2 would be entitled to 5590 IOPS.  However, VM2 has a limit of 5000 IOPS, leaving 590 IOPS that can still be distributed.  Those 590 IOPS are then distributed across the VMs on Host2, in proportion to their shares.

In the end, this is how the IOPS allocation would be distributed:

VM1: 10 IOPS
VM2: 5000 IOPS (Due to limit)
VM3: 800 + ((500/1500) * 590) = 997 IOPS
VM4: 1600 + ((1000/1500) * 590) = 1993 IOPS
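The arithmetic for this scenario can be checked directly, step by step. This short sketch assumes, as the example does, that the idle VM1 demands only 10 IOPS:

```python
total = 8000
host1 = 0.70 * total                         # 5600 IOPS entitled to Host1
host2 = 0.30 * total                         # 2400 IOPS entitled to Host2

vm1 = 10                                     # idle, consuming only 10 IOPS
unused = (1000 / 3500) * host1 - vm1         # 1590 IOPS freed on Host1
vm2 = min((2500 / 3500) * host1 + unused, 5000)   # VM2 capped by its 5000 limit
spill = (2500 / 3500) * host1 + unused - vm2      # 590 IOPS spill over to Host2

vm3 = (500 / 1500) * host2 + (500 / 1500) * spill     # ≈ 997 IOPS
vm4 = (1000 / 1500) * host2 + (1000 / 1500) * spill   # ≈ 1993 IOPS
print(round(vm3), round(vm4))                # 997 1993
```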

Hopefully this helps explain how entitled IOPS are calculated and distributed by the mClock storage scheduler in vSphere 6.0.  The important takeaway is that unused IOPS are not held and wasted; they are automatically distributed across the environment, providing the most efficient use of your resources.

VMware Tools Lifecycle: Why Tools Can Drive You Crazy (and How to Avoid it!)

There has been a lot of buzz around vSphere Lifecycle since VMworld. My last few blog posts on VMware Tools have had a tremendous amount of traffic, so I decided to continue with the theme and give you all what it appears you want more of. So in this post, LET’S TALK TOOLS!


Big Data on vSphere with HBase

This article describes a set of performance tests that were conducted on HBase, a popular data management tool that is frequently used with Hadoop, running on VMware vSphere 6 and provisioned by the vSphere Big Data Extensions tool. The work described here was done by Xinhui Li, who is a staff engineer in the Big Data team in VMware’s R&D Labs in Beijing. Xinhui’s biography and background details are given at the end of the article.

What is HBase?

HBase is an Apache project that is designed to handle very large amounts of data on the Hadoop platform. HBase is often described as providing the functionality of a NoSQL database running on top of Hadoop. It combines the scalability of Hadoop, through its use of the Hadoop Distributed File System (HDFS) for storage, with real-time access to the data. HBase can handle billions of rows of data and very large numbers of columns. Along with Hadoop, HBase runs on clusters of commodity hardware that form a distributed system. The HBase architecture is made up of RegionServers that run on the worker nodes, controlled by the HBase Master Server.


Virtualizing SAP HANA Databases Greater than 1TB on vSphere 5.5

VMworld 2015 Session Recap

I’m almost fully recovered from VMworld, which was probably one of the busiest and most enjoyable VMworlds I’ve had in my six-plus years at VMware, thanks to the interaction with attendees, customers, and partners.  I’ll be doing a series of post-VMworld blogs focused on my SAP HANA Software-Defined Data Center sessions, but this first blog will cover the misconceptions associated with sizing SAP HANA databases on vSphere. There are many good reasons to upgrade to vSphere 6.0; going beyond the 1TB monster virtual machine limit in vSphere 5.5 when deploying SAP HANA databases is not necessarily one of them.

SAP HANA is no longer just an in-memory database; it is now a data management platform.  It is NOT confined by the size of available memory, since SAP HANA warm data can be stored on disk in a columnar format and accessed transparently by applications.

What this means is that the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. Multi-terabyte SAP HANA databases can easily be virtualized with vSphere 5.5 using Dynamic Tiering, Near-Line Storage, and the other memory management techniques SAP has introduced to the SAP HANA platform to optimize and reduce HANA’s in-memory footprint.

SAP HANA Dynamic Tiering (DT)

SAP HANA Dynamic Tiering was introduced last year in Support Pack Stack (SPS) 09 for use with BW. Dynamic Tiering allows customers to seamlessly manage their disk-based SAP HANA “warm data” on an Extended Storage Host, essentially placing data that does not need to be in-memory on disk. SAP’s guidance for the Dynamic Tiering option is that in SPS 09 up to 20% of in-memory data can reside on the Extended Storage (ES) Host; in SPS 10, up to 40%; and in the future, up to 70% of the SAP HANA data. So in the future the majority of SAP HANA data that was once in-memory can reside on disk.

Near-Line Storage (NLS)

In addition to the reduction of the SAP HANA in-memory footprint that DT affords customers, Near-Line Storage should be considered as well. With NLS, data is moved out of the SAP HANA database proper to disk and classified as “cold” due to its infrequent access; it can only be accessed read-only. SAP provides examples showing NLS can reduce the HANA database in-memory requirements by several terabytes (link below).

It is also important to note that neither the DT Extended Storage Host nor the NLS solution requires certified servers or storage. So not only has SAP given customers the ability to run SAP HANA in a reduced memory footprint, customers can run it on standard x86 hardware as well.

There is a white paper authored by Priti Mishra, Staff Engineer, Performance Engineering, VMware, which is an excellent read for anyone considering the DT or NLS options: “Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN”.

Importance of the VMware Software Defined Data Center

To its credit, SAP has taken a leadership role with HANA’s in-memory columnar database computing capabilities, and as HANA has evolved, the sizing and hardware requirements have evolved as well. Rapid change and evolving requirements are givens in technology; the VMware Software-Defined Data Center provides a flexible and agile architecture to react effectively to change by recasting compute, network, and storage resources in a centrally managed manner.

As a concrete example of the flexibility VMware’s platform provides, Figure 1 illustrates the evolution of SAP HANA from SPS 07 to SPS 09. For customers who would like to take advantage of SAP HANA’s multi-temperature data management techniques but initially deployed SAP HANA on SPS 07 (all in-memory), virtualization lets them reclaim and recast memory, storage, and network resources in their virtual HANA landscape to reflect the latest architectural advances and memory management techniques in SPS 10.

Figure 1. SAP HANA Platform: Evolving Hardware Requirements


Since SAP HANA can now run in a reduced memory footprint, customers who licensed HANA to be all in-memory can use virtualization to reclaim memory, deploy additional virtual databases, and make HANA pervasive in their landscapes.

As a general rule, in any rapidly changing environment the VMware Software-Defined Data Center provides an agile platform that can accommodate change while protecting against capital hardware investments that may not be necessary in the future (certified vs. standard x86 hardware). For that matter, the cloud is a good option for deploying any rapidly changing application or database, in places like VMware vCloud Air, Virtustream, or Secure-24, just to mention a few.

Virtual SAP HANA Back on Track

After speaking with session attendees, customers, and partners at VMworld about SAP HANA’s multi-temperature management capabilities, I was happy to hear they will not be delaying their virtual HANA deployments due to the vSphere 6.0 certification roadmap timeline. As I said earlier, the 1TB monster virtual machine maximum in vSphere 5.5 is an artificial barrier. It really is a worthwhile exercise to take a closer look at the temperature of your data, the age of your data, and your access requirements in order to take full advantage of all the tools and features SAP provides its customers.

I was also encouraged to hear from many session attendees that my presentation at VMworld brought the SDDC from concept closer to reality by demonstrating actual mission-critical database and application use cases. My future post-VMworld blogs will focus on how I deconstructed the SAP HANA network requirements document and transformed it into a virtual network design using VMware NSX from my desktop. I’ll also cover Software-Defined Storage, essentially translating SAP’s multi-temperature storage options into VMware Virtual Volumes and storage containers.

“SAP HANA SPS10 – SAP HANA Dynamic Tiering”; SAP Product Management

“Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN”; Priti Mishra, Performance Engineering, VMware

Blog: Bob Goldsand; “SAP HANA Dynamic Tiering and the VMware Software Defined Data Center”

VMworld US 2015 Spotlight Session: Project Capstone, a Collaboration between VMW, HP & IBM

No Application Left Behind

This year at VMworld 2015 US in San Francisco, over 40 sessions focused on Business Critical Applications and databases will be delivered by a broad cast of VMware experts. These experts include VMware product specialists, partners, customers, and end users (developers and data scientists).

One specific session that we would like to shine the spotlight on is VAPP6952-S, “VMware Project Capstone”, in which VMware, HP, and IBM will announce a collaborative effort to virtualize the most demanding applications. As a result of this partnership, we can now, more than ever, confidently claim that all applications and databases are candidates for virtualized infrastructure.  This joint effort, which runs massive 120-vCPU VMs with Oracle 12c on vSphere 6, using an HP Superdome X and an IBM FlashSystem, constitutes the most significant advancement in the virtualization of Business Critical Applications in many years.

The session takes place Monday, August 31st at 5PM. Join us for this session to learn about this game changing initiative.


VMware Project Capstone, a Collaboration of VMware, HP and IBM, driving Oracle to Soar Beyond the Clouds using vSphere 6, an HP Superdome X and an IBM FlashSystem®

Abstract: When three of the most historically significant and iconic technology companies join forces, even the sky is not the limit.  VMware, HP and IBM have collaborated on a project whose scope both eradicates the long-accepted boundaries of virtualization for extreme high performance and establishes a new approach to cooperative solution building.

The Superdome X is HP’s first Xeon-based Superdome, and when it is combined with an IBM FlashSystem® and virtualized with vSphere 6, the raw capabilities of this stack challenge the imagination and dispel previously held notions of performance limitations in virtualized environments.  The Superdome X and the FlashSystem comprise a unique stack for all Business Critical Applications and databases. The most demanding environments can now be virtualized. It is no longer obligatory for VMware to claim that 99.9% of all applications and databases are candidates for virtualized infrastructure, as that number is now 100%.  This spotlight session features senior executive management from VMware, HP, and IBM and an introduction of the test results of this unprecedented collaborative effort.

Key Takeaways:

  1. The methodologies that are being used to drive the Superdome X and the IBM FlashSystem® to the far edges of known performance.
  2. The reasons behind the joint effort of these three renowned companies as well as the aspirations for this collaboration.
  3. An understanding of how this new landmark architecture can affect the industry and benefit customers who have extreme but broad performance requirements.

VMworld Topic: Virtual Volumes (VVOLS), a game changer for running Tier 1 Business Critical Databases

One of the major components released with vSphere 6 this year was the support for Virtual Volumes (VVOLS). VVOLS has been gaining momentum with storage vendors, who are enabling its capabilities in their arrays.

When virtualizing business databases there are many critical concerns that need to be addressed, including:

  1. Database performance to meet strict SLAs
  2. Daily operations, e.g. backup & recovery, completing in a set window
  3. Cutting down the time to clone/refresh databases from production
  4. Meeting different I/O characteristics and capabilities based on criticality
  5. The never-ending debate with DBAs: file systems vs. raw devices (VMFS vs. RDM)

VVOLS can offer solutions to mitigate these concerns that impact the decision to virtualize business critical databases. VVOLS can help with the following:

1. Reduced backup windows for databases
2. The ability to take database-consistent backups
3. Reduced cloning times for multi-terabyte databases
4. Storage Policy-Based Management capabilities

Details on the solutions available with VVOLS, and their impact on virtualized Tier 1 Business Critical Databases, will be discussed in depth at VMworld 2015 in session STO4452:

STO4452 – Virtual Volumes (VVOLS) a game changer for running Tier 1 Business Critical Databases
Session Date/Time: 08/31/2015 03:30 PM – 04:30 PM

What’s New in VMware vSphere 6 Performance

Not to be outdone by all the new and amazing vSphere 6 features, there are a number of scale and performance enhancements within the vSphere 6 platform that should be highlighted.

VMworld 2015: Extreme Performance Series

Who loves virtual Performance? Who wants to learn more about it?

Everybody of course!

I’m very excited about this year’s Extreme Performance Series mini-track being hosted at VMworld San Francisco and Barcelona. These sessions are created and presented by VMware’s best and most distinguished performance engineers, architects, and gurus. I’ve tried to provide my personal thoughts on each session, but these few words will never do them justice. Hope to see you all there!


Oracle on vSphere book – Tech Target Interview of Authors

Tech Target has completed and published an interview with the authors (Don Sullivan and Kannan Mani) of the Oracle on vSphere VMware Press book.  The published interview is linked below:

The official VMware press book and the definitive authority on the subject of Oracle on vSphere: http://www.amazon.com/Virtualizing-Oracle-Databases-vSphere-Technology/dp/0133570185 “Serious Databases Require Serious Virtualization”

Putting Oracle databases on a virtualized infrastructure – http://searchvmware.techtarget.com/feature/Putting-Oracle-databases-on-a-virtualized-infrastructure

The perks to virtualizing Oracle on vSphere 6 – http://searchvmware.techtarget.com/feature/The-perks-to-virtualizing-Oracle-on-vSphere-6

SIOC: I/O Distribution with Reservations & Limits – Part 1

The mClock scheduler was introduced with vSphere 5.5 Storage I/O Control (SIOC) and laid the foundation for new capabilities for scheduling storage resources.  vSphere 6.0 expands upon these capabilities and adds the ability to reserve IOPS, providing even more flexibility and control when delivering storage services to virtual machines.  However, this new capability introduces new questions about how resources are managed and allocated during periods of storage contention.
