

vSphere 5.1 New Storage Features

vSphere 5.1 is upon us. The following is a list of the major storage enhancements introduced with the vSphere 5.1 release.

VMFS File Sharing Limits

In previous versions of vSphere, the maximum number of hosts that could share a read-only file on a VMFS volume was 8. The primary use case for multiple hosts sharing read-only files is, of course, linked clones, where linked clones located on separate hosts all share the same base disk image. In vSphere 5.1, with the introduction of a new locking mechanism, the number of hosts that can share a read-only file on a VMFS volume has been increased to 32. This makes VMFS as scalable as NFS for VDI deployments & vCloud Director deployments which use linked clones.

Space Efficient Sparse Virtual Disks

A new Space Efficient Sparse (SE Sparse) Virtual Disk aims to address certain limitations of virtual disks. The first of these is the ability to reclaim stale or stranded data in the Guest OS filesystem/database: SE Sparse disks introduce an automated mechanism for reclaiming stranded space. The other feature is a dynamic block allocation unit size: SE Sparse disks have a new configurable block allocation size which can be tuned to the recommendations of the storage array vendor, or indeed the applications running inside the Guest OS. VMware View is the only product that will use the new SE Sparse disk in vSphere 5.1.

vSphere Storage APIs – Array Integration

vSphere 5.0 introduced the offloading of snapshots to the storage array for VMware View via the VAAI NAS primitive ‘Fast File Clone’. vSphere 5.1 will allow VAAI NAS based snapshots to be used for vCloud Director in addition to being used for VMware View, enabling the use of hardware/native snapshots for linked clones.

5 Node MSCS Cluster

Historically, VMware only ever supported 2 Node MSCS Clusters. With vSphere 5.1, we are extending this to 5 nodes.

All Paths Down Enhancements

In vSphere 5.1, the objective is to handle the next set of APD (All Paths Down) use cases involving more complex transient APD conditions. This involves timing out I/O on devices that enter an APD state. When the timer expires, any I/O sent to the device is immediately ‘fast failed’, meaning that we do not tie up hostd waiting for I/O. Another enhancement introduces PDL (Permanent Device Loss) handling for some of those iSCSI arrays that present one LUN per target. This was problematic in the past since an APD removed the target as well as the LUN. We are now addressing this scenario.
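The APD timeout behaviour is exposed as host advanced settings. A minimal sketch, assuming the 5.1 option names `/Misc/APDHandlingEnable` and `/Misc/APDTimeout` (verify against your build before changing anything):

```shell
# Inspect the current APD handling settings on an ESXi 5.1 host
esxcli system settings advanced list -o /Misc/APDHandlingEnable
esxcli system settings advanced list -o /Misc/APDTimeout

# Ensure fast-fail of I/O once the APD timer expires (1 = enabled)
esxcli system settings advanced set -o /Misc/APDHandlingEnable -i 1

# Raise the timeout from the 140-second default to 300 seconds
esxcli system settings advanced set -o /Misc/APDTimeout -i 300
```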

Storage Protocol Enhancements

FCoE: The Boot from Software FCoE feature is very similar to Boot from Software iSCSI feature which VMware introduced in ESXi 4.1. It allows an ESXi 5.1 host to boot from an FCoE LUN using a NIC with special FCoE offload capabilities and VMware’s software FCoE driver.
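The software FCoE adapter itself is managed from the CLI. A hedged sketch of activating it on a capable NIC (`vmnic2` is a placeholder for your FCoE-capable uplink):

```shell
# List NICs with FCoE offload capability
esxcli fcoe nic list

# Activate software FCoE on a capable NIC
esxcli fcoe nic discover -n vmnic2

# Confirm the resulting software FCoE adapter
esxcli fcoe adapter list
```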

iSCSI: We are adding jumbo frame support for all iSCSI adapters in vSphere 5.1, complete with UI support.
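Jumbo frames must be enabled end to end: on the vSwitch, the VMkernel port used for iSCSI, and the physical switch. A sketch of the host side, assuming `vSwitch0` and `vmk1` as placeholders for your iSCSI networking:

```shell
# Raise the MTU on the vSwitch carrying iSCSI traffic
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Raise the MTU on the iSCSI VMkernel interface
esxcli network ip interface set -i vmk1 -m 9000

# Verify the MTU settings took effect
esxcli network ip interface list
```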

Fibre Channel: VMware introduced support for 16Gb FC HBAs with vSphere 5.0. However, the 16Gb HBA had to be set to work at 8Gb. vSphere 5.1 introduces support for 16Gb FC HBAs running at 16Gb.

Advanced IO Device Management (IODM) & SSD Monitoring

IODM introduces new esxcli commands to help administrators troubleshoot issues with I/O devices and fabrics. This covers Fibre Channel, FCoE, iSCSI and SAS protocol statistics, as well as SMART attributes. For SSD monitoring, a new smartd module in ESXi 5.1 provides wear leveling and other SMART details for SAS and SATA SSDs. Disk vendors also have the ability to install their own SSD plugins to display vendor-specific SSD information.
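As a rough illustration of the new command surface (the device identifier is a placeholder; namespaces as introduced in 5.1, so check `esxcli storage san` on your host for the exact syntax):

```shell
# IODM: Fibre Channel adapter details and per-adapter statistics
esxcli storage san fc list
esxcli storage san fc stats get

# SMART attributes (wear leveling, temperature, reserved sectors)
# for a local SSD, via the new smartd module; replace the device
# name with one from 'esxcli storage core device list'
esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx
```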

Storage I/O Control Enhancements

The latency threshold for SIOC can now be set automatically. The benefit is that SIOC now figures out the best latency threshold for a datastore, as opposed to using a default or user-selected threshold. SIOC is now also turned on automatically in ‘stats only mode’. It doesn’t enforce throttling, but it does gather more granular statistics about the datastore. Storage DRS can leverage this, as it will now have statistics in advance of a datastore being added to a datastore cluster.

Storage DRS Enhancements

vCloud Director will use Storage DRS for the initial placement of linked clones during Fast Provisioning, and for managing space utilization and I/O load balancing. Storage DRS also introduces a new datastore correlation detector: if the source and destination datastores are backed by the same physical spindles, Storage DRS won’t consider a migration between them. Storage DRS also has a new metric (VMobservedLatency) for I/O latency, which will be used for more granular I/O load balancing.

Storage vMotion Enhancements

In vSphere 5.1 Storage vMotion performs up to 4 parallel disk migrations per Storage vMotion operation.

That completes the list of storage enhancements in vSphere 5.1. Obviously this is only a brief overview of each of the new features. I will be elaborating on all of these new features over the coming weeks and months.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

21 thoughts on “vSphere 5.1 New Storage Features”

  1. Andy

    Good stuff! However I’m disappointed to see the 2TB file size limit is still in effect for VMFS 5, especially as the new Hyper-V has removed its 2TB limit. I still have a ton of VMs using guest OS initiators or RDMs; 2TB is not the huge amount of data today that it was years ago.
    Any idea when we could expect support for larger than 2TB VMDKs?

    1. Cormac Hogan Post author

      Hi Satinder,
      Unfortunately it is not available with View 5.1. We will have to await the next View release as there are a number of integration points that need to be implemented in View Composer for SE Sparse Disks.
      Regards
      Cormac

  2. Pingback: Storage Changes in VMware vSphere 5.1 – @SFoskett – Stephen Foskett, Pack Rat

  3. Shree

    Where can I find more information on “SSD Detection and Enablement – SSD automatically detected, tagged and enabled by ESXi 5.0”? Is this the same as SSD monitoring?

    1. Cormac Hogan Post author

      We will have a lot more information when the 5.1 documentation set is published. It is not the same as the detect & tag mechanism in 5.0. In 5.1, we have the ability to monitor SSD attributes via a new smartd daemon in ESXi. It will be able to display wear leveling stats, temperature and reserved sector count information among other things. I’ll be doing a follow-up post with more detail in the coming weeks.

  4. Justin Other

    Are there any limitations when upgrading from v5 to v5.1 versus a new install? For example, do any of the new VMFS features require that the datastores be created using v5.1?

  5. Pingback: Welcome to vSphere-land! » vSphere 5.1 Link-O-Rama

  6. Pingback: Complete guide to all the essential vSphere 5.1 links | Wintual

  7. Jan Soska

    Hello, is the new APD handling system able to handle a full storage outage due to a power failure? How big is the timeout you mentioned in the article? We had a very bad outage due to a power failure in one of our server rooms last week (we run ESXi 4.1u3 and 5.0u1).

    Thanks
    Jan

      1. Jan Soska

        Hello,
        great news and great improvements.
        We just set up an internal 5.1 test host and it works great. We did 2 types of test:
        1) simulation of LUN failure – like disconnecting a LUN by mistake. The result is no management outage; the failed path logged “APD Notify PERM LOSS” only 5! seconds after APD. vCenter even showed information about the VM’s vmdks being offline. It seems the array was still available, ESX got some sense codes and the response was really great.
        2) simulation of a whole array outage by disabling SAN zones between host and array. Results – behavior was slightly different. There was management loss for, it seems, exactly 140s (as the default parameter is), then management was back. Strange is that VMs running from the failed array were in running state (generally true) as they run from ESX memory, and there was no notification about the vmdks missing.
        STILL, this is absolutely great compared to previous versions. We are considering running production with ESXi 5.1

        Jan

  8. Pingback: Making Scalable Midsized VDI Deployment a Simple, Affordable Reality | VMware End-User Computing Blog - VMware Blogs

  9. Patrick

    As an IT services provider we were asked a lot about the storage features in vSphere 5.1 – which we truly appreciate. Unfortunately, most whitepapers and other content on the official VMware sites were only provided in English. We therefore decided to fill this gap, so that our mostly German-speaking SME audience also stays informed about the latest news in this field. Perhaps others will follow, as VMware provides very good arguments for customers to update to vSphere 5.1, which is also a very interesting product for ‘German speaking SMEs’ :-) http://www.ventoo.ch/was-ist-neu-bei-vmware-vsphere-5-1/

  10. Pingback: VCAP-CID Study Notes: Objective 2.4 - VMice
