
Tag Archives: Storage I/O Control

Attention Storage DRS & Storage I/O Control Users…

…please help us shape the future direction of these products.

The new Product Manager for Storage DRS and Storage I/O Control has asked me to reach out to the community and ask for your input on what we should do next in these product areas.

I know many of you actively use these technologies, so you are certainly the best folks to highlight what works well, what doesn’t work well and what additional features you would like to see added.

Please take the quick survey on Storage DRS & Storage I/O Control by clicking here.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

 

SIOC considerations with mixed HBA environments

I’ve been involved in a few conversations recently related to device queue depth sizes. This all came about as we discovered that the default device queue depth for QLogic Host Bus Adapters was increased from 32 to 64 in vSphere 5.0. I must admit, this caught a few of us by surprise as we didn’t have this change documented anywhere. Anyway, various Knowledge Base articles have now been updated with this information. Immediately, folks wanted to know about the device queue depth for Emulex. Well, this hasn’t changed and continues to remain at 32 (although in reality it is 30 for I/O, as two slots on the Emulex HBAs are reserved). But are there other concerns?
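If you are not sure whether your environment mixes QLogic and Emulex HBAs in the first place, the HBA inventory is visible through the vSphere API. The following is just a minimal pyVmomi sketch (the vCenter address and credentials are placeholders, not anything from this post) that lists the Fibre Channel HBA device, driver and model on each host so you can spot mixed configurations:

```python
# Minimal pyVmomi sketch: list Fibre Channel HBA device/driver/model per host.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        # Only report Fibre Channel HBAs (e.g. QLogic or Emulex adapters)
        if isinstance(hba, vim.host.FibreChannelHba):
            print(host.name, hba.device, hba.driver, hba.model)
view.DestroyView()
Disconnect(si)
```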

Continue reading

vSphere 5.1 New Storage Features

vSphere 5.1 is upon us. The following is a list of the major storage enhancements introduced with the vSphere 5.1 release.

VMFS File Sharing Limits

In previous versions of vSphere, the maximum number of hosts which could share a read-only file on a VMFS volume was 8. The primary use case for multiple hosts sharing read-only files is of course linked clones, where linked clones located on separate hosts all share the same base disk image. In vSphere 5.1, with the introduction of a new locking mechanism, the number of hosts which can share a read-only file on a VMFS volume has been increased to 32. This makes VMFS as scalable as NFS for VDI deployments & vCloud Director deployments which use linked clones.

Space Efficient Sparse Virtual Disks

A new Space Efficient Sparse Virtual Disk aims to address certain limitations with Virtual Disks. The first of these is the ability to reclaim stale or stranded data in the Guest OS filesystem/database; SE Sparse Disks introduce an automated mechanism for reclaiming this stranded space. The other feature is a dynamic block allocation unit size: SE Sparse Disks have a new configurable block allocation size which can be tuned to the recommendations of the storage array vendor, or indeed the applications running inside of the Guest OS. VMware View is the only product that will use the new SE Sparse Disk in vSphere 5.1.

Continue reading

What could be writing to a VMFS when no Virtual Machines are running?

[Updated with vSphere HA clarifications]

This was an interesting question that came my way recently. One of our storage partners wanted to ensure that a VMFS volume was completely quiesced (no activity) and was interested to know what could possibly be the cause of writes to the VMFS volume when all Virtual Machines were powered off.

There are quite a few vSphere features which could be updating a volume, and after a bit of research, I decided it might be a good idea to share the list with you.

  1. If you have a Distributed Virtual Switch in your virtual infrastructure, changes to the network configuration would result in updates to the .dvsdata configuration file which sits on a VMFS volume. 
  2. If you have implemented a vSphere HA cluster, then there may be updates going to the vSphere HA 5.0 heartbeat datastores and related files. First, what are these heartbeat datastores used for? Well, to have some control over the HA cluster in the event of a network failure, when nodes can no longer communicate over the network, vSphere HA introduced heartbeat datastores. Through the use of these HB datastores & special files on other datastores, a master can determine which slave hosts are still alive, and also determine whether there has been a network partition rather than network isolation (the behaviour differs depending on which). Note that we don't write to the HB file; it is opened so that the "metadata HB" on the VMFS volume is updated. Other vSphere HA files, which reside in special folders on all datastores in the cluster, are also written to.
  3. Another possibility, of course, is that writes are coming from the VMFS metadata heartbeat updates. These are essentially pulses from an ESXi host to inform other hosts (which might be looking to update a file) that this host still has a lock on the file in question.
  4. An ESXi host can be deployed with a designated scratch partition or the scratch partition could be placed as a folder on a VMFS datastore if no suitable partition exists. If an ESXi scratch partition has been located on a VMFS datastore, then it may be that the scratch partition is being regularly updated with host information (e.g. tmp files, log updates, etc). This could be the source of spurious writes to the VMFS.
  5. Storage I/O Control could be enabled on the datastore. If this is the case, each host that uses the datastore writes metrics to special files on the datastore. These files are used to determine the datastore wide latency value across all hosts to the datastore. If this exceeds the defined latency value (default 30ms), this is an indicator to SIOC to start throttling. The last update I've seen on this suggests that these files are updated by all hosts every 4 seconds.
  6. Finally, the VMFS volume could be part of a Storage DRS datastore cluster. If load balancing based on I/O metrics is enabled, then Storage DRS may be using Storage I/O Control to measure the datastore latency values as mentioned in number 5. (A quick way to check items 5 and 6 programmatically is sketched just after this list.)
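Both of the settings behind items 5 and 6 are visible through the vSphere API. Below is a minimal pyVmomi sketch (connection details are placeholders) that reports, for each datastore, whether Storage I/O Control is enabled (and its congestion threshold) and whether the datastore sits in a Storage DRS datastore cluster, i.e. whether its parent object is a StoragePod:

```python
# Minimal pyVmomi sketch: report SIOC status and datastore-cluster membership
# for every datastore. The vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    iorm = ds.iormConfiguration          # StorageIORMInfo; may be unset on some datastores
    sioc_enabled = bool(iorm and iorm.enabled)
    threshold_ms = iorm.congestionThreshold if iorm else None
    in_pod = isinstance(ds.parent, vim.StoragePod)   # member of a datastore cluster?
    print("%s  SIOC=%s (threshold=%sms)  datastore-cluster=%s"
          % (ds.name, sioc_enabled, threshold_ms, in_pod))
view.DestroyView()
Disconnect(si)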

So as you can see, simply shutting down VMs on a datastore is not enough to ensure that the datastore is quiesced. A number of other vSphere features could be writing to the datastore (I may have even missed some in this list).

If you need a datastore to be completely quiesced for whatever reason, I'd recommend using esxtop to ensure that there is no I/O activity after you have shut down your VMs.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

Storage DRS and Storage Array Feature Interoperability

We've had a number of queries recently about how Storage DRS works with certain array based features. The purpose of this post is to try to clarify how Storage DRS will behave when some of these features are enable on the array.

The first thing to keep in mind is that Storage DRS is not going to recommend a Storage vMotion unless something is wrong on the datastore; either it is running out of space, or its performance is degrading.

Let's now look at the interoperability:

1. Thin Provisioned LUNs

If the array presents a Thin Provisioned LUN of 2TB which is backed by only 300GB of physical capacity, is Storage DRS aware of this when it makes migration decisions? In other words, could we fill up a Thin Provisioned datastore if we choose it as a destination for a Storage vMotion operation, and it is already quite full?

Although Storage DRS is not aware that the LUN is Thin Provisioned, it still should not fill it up. The reason is that in vSphere 5.0, a new set of VAAI features for Thin Provisioning was introduced. One of these features was to surface an alarm in vCenter when a Thin Provisioned datastore became 75% full on the back-end. If a datastore has this alarm surfaced, then Storage DRS will no longer consider it as a destination for Storage vMotion operations. This should prevent a Storage vMotion operation from ever filling up a Thin Provisioned datastore. In this case, if the 2TB Thin Provisioned datastore has 225GB of its 300GB already used, the alarm would be surfaced and Storage DRS would not consider placing any additional VMs on it.
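As a side note, if you want to see whether that thin-provisioning capacity alarm (or any other alarm) is currently raised on a datastore, the triggered alarms are available on the datastore object. Here is a short pyVmomi sketch along those lines; the connection details are placeholders, and the exact alarm name shown will depend on your vCenter's alarm definitions:

```python
# Short pyVmomi sketch: list alarms currently triggered on each datastore.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    for state in ds.triggeredAlarmState or []:
        # Print datastore name, alarm definition name and its status (e.g. yellow/red)
        print(ds.name, state.alarm.info.name, state.overallStatus)
view.DestroyView()
Disconnect(si)
```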

2. Deduplication & Compression

Many storage arrays use deduplication & compression as a space efficiency mechanism. Storage DRS is not dedupe aware, but this shouldn't be a cause for concern. For instance, if a VM is heavily deduped and Storage DRS recommends it for migration, Storage DRS does not know that the VM is deduped, so the amount of space reclaimed from the source datastore will not be the full size of the VM. Also, when the VM is moved to the destination datastore, it will have to be inflated to full size. Later on, when the dedupe process runs (in many cases, this doesn’t run in real-time), the array might be able to reclaim some space from dedupe, but the VM will be temporarily inflated to full size first.

But is this really a concern? Let's take the example of a VM that is 40GB in size, but thanks to dedupe is only consuming 15GB of data on disk. Now when SDRS makes a decision to move this VM, it will find a datastore that can take 40GB (the inflated size of the VM). So that's not too much of an issue. What about the fact that SDRS is only going to gain 15GB of free space on the source datastore as opposed to the 40GB that it thought it was going to get? Well, that's not a concern either, because if this datastore is still exceeding the space usage threshold after the VM is migrated, SDRS will migrate another VM from the datastore on the next run, and so on until the datastore space usage is below the threshold. So yes, it may take a few more iterations to handle deduped datastores, but it will still work just fine.
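To make the iterative behaviour concrete, here is a small back-of-the-envelope Python model (not VMware code; the 80% figure is simply the Storage DRS default space utilization threshold) showing that a datastore full of deduped VMs just takes a few extra moves to get back under the threshold:

```python
# Toy model of the iterative behaviour described above: only the deduped on-disk
# footprint is reclaimed when a VM leaves the source datastore, so it may take
# more than one pass to drop below the space threshold.
def migrations_needed(capacity_gb, used_gb, vm_ondisk_sizes_gb, threshold=0.80):
    """Return (number of VM moves, resulting used GB) to get below the threshold."""
    moves = 0
    for on_disk in vm_ondisk_sizes_gb:
        if used_gb / capacity_gb <= threshold:
            break
        used_gb -= on_disk   # only the deduped on-disk footprint is reclaimed
        moves += 1
    return moves, used_gb

# Example: 1TB datastore at 85% full; each VM is 40GB logical but ~15GB on disk.
print(migrations_needed(1000, 850, [15, 15, 15, 15]))   # -> (4, 790)
```

In this toy example it takes four 15GB-on-disk moves to get under 80%, where full-size (40GB) reclamation would have needed only two; the end result is the same, just reached over more SDRS passes.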

And yes, it would be nice if Storage DRS understood that datastores were deduped/compressed, and this is something we are looking at going forward.

3. Tiered Storage

The issue here is that the Storage I/O Control (SIOC) injector (the utility which profiles the capabilities of the datastores for Storage DRS) might not understand the capabilities of tiered storage, i.e. if the injector hits the SSD tier, it might conclude that this is a very high performance datastore, but if it hits the SATA tier, it might conclude that this is a lower performance datastore. At this point in time, we are recommending that SDRS be used for initial placement of VMs and load balancing of VMs based on space usage only, and that the I/O metrics feature is disabled. We are looking into ways of determining the profile of a LUN built on tiered storage going forward, and allowing I/O metrics to be enabled.
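For anyone who wants to apply that recommendation programmatically rather than through the vSphere Client, the sketch below shows one way to do it with pyVmomi, assuming its vim.storageDrs wrappers and using a placeholder datastore-cluster name and connection details. It keeps Storage DRS itself enabled (so initial placement and space-based balancing still work) but turns off I/O-metric-based load balancing:

```python
# Minimal pyVmomi sketch: disable I/O-metric load balancing on a datastore cluster
# while leaving Storage DRS enabled. Names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "TieredStorage-Cluster")  # placeholder name

spec = vim.storageDrs.ConfigSpec()
spec.podConfigSpec = vim.storageDrs.PodConfigSpec()
spec.podConfigSpec.enabled = True                # keep Storage DRS on
spec.podConfigSpec.ioLoadBalanceEnabled = False  # space-only balancing

# modify=True leaves any settings not specified above unchanged
task = si.content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True)
view.DestroyView()
Disconnect(si)
```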

I hope this gives you some appreciation of how Storage DRS can happily co-exist with various storage array features, and how in many ways the technologies are complementary. While we would agree that some of the behaviour is sub-optimal, and it would be better if Storage DRS was aware of these array based features in its decision process, there is nothing that prevents Storage DRS from working with these features. Going forward, we do hope to add even more intelligence to Storage DRS so that it can understand these features and include them in its decision making algorithms.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage