Home > Blogs > VMware vSphere Blog


What’s New in vSphere 5.5 Storage

As with any vSphere launch, there are a bunch of new storage-related features announced. This year is a little different, as there are some considerable updates that I wanted to share with you.

Virtual SAN (VSAN)
VMware has made a public beta announcement of VSAN. While it is not yet GA, it is almost there, and vSphere 5.5 will enable you to try it out first hand. VMware is taking a software-defined approach with VSAN, providing a product that is radically simpler, more scalable and agile, and lower-cost than traditional monolithic SAN or NAS storage. The storage is flexible and elastic, in that virtual storage can live anywhere across the pooled resource. It is inherently a fault-replace model; any failure is handled without downtime. And the entire system is tightly integrated with, and automated by, vCenter; it is specifically integrated into the application provisioning workflow to maximize cloud application deployment agility. There will be a lot more on VSAN over the coming weeks and months – stay tuned.

vSphere Flash Read Cache
vSphere Flash Read Cache (vFRC) is a hypervisor-based, software-defined flash storage tier solution. It aggregates local flash devices to provide a clustered flash resource for consumption by both virtual machines and ESXi hosts. ESXi hosts can consume the flash via the Virtual Flash Host Swap Cache (what used to be known as swap to SSD in previous vSphere releases).

Virtual machines are allocated a portion of the flash resource via the web client; the flash is allocated on a per-VMDK basis. vFRC is a write-through caching mechanism in this initial release: writes go to persistent storage as well as into the cache, so the block is immediately available for subsequent reads to be accelerated.
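To make the write-through behavior concrete, here is a minimal Python sketch (purely illustrative – this is not the vFRC implementation): every write lands on persistent storage *and* in the cache, so a subsequent read of the same block is served from flash.

```python
# Minimal write-through cache sketch (illustrative only; not the vFRC code).
class WriteThroughCache:
    def __init__(self, backing):
        self.backing = backing   # persistent storage (dict of block -> data)
        self.cache = {}          # flash cache tier

    def write(self, block, data):
        # Write-through: the write goes to persistent storage AND the cache,
        # so the block is immediately available for accelerated reads.
        self.backing[block] = data
        self.cache[block] = data

    def read(self, block):
        if block in self.cache:          # cache hit: served from flash
            return self.cache[block]
        data = self.backing[block]       # cache miss: read from the array
        self.cache[block] = data         # populate the cache for next time
        return data

disk = {}
c = WriteThroughCache(disk)
c.write(0, b"hello")
assert disk[0] == b"hello"       # the write reached persistent storage
assert c.read(0) == b"hello"     # the subsequent read is a cache hit
```

The key property of write-through (as opposed to write-back) is that the cache never holds data the array does not, so losing the flash device never loses data.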

62TB VMDKs and vRDMs
Yes, we finally have VMDKs that can grow larger than 2TB – 512 bytes. We also have 62TB virtual mode RDMs (non pass-through). This is great to see, and something I know a lot of you have been waiting for. Why 62TB and not 64TB, I hear you ask? Well, 64TB is still the maximum size of a VMFS-5 datastore, so if you filled the datastore with a single VMDK, there would be no space left for certain management tasks, like taking a snapshot. This was problematic in the past, when we had 2TB VMFS extents and 2TB VMDKs. So, lesson learned, we have left some overhead.
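The arithmetic behind the limits, spelled out (sizes here assume binary terabytes, i.e. 2^40 bytes):

```python
TB = 2**40

old_vmdk_max = 2 * TB - 512   # the pre-5.5 VMDK limit: 2TB minus 512 bytes
vmfs5_max = 64 * TB           # maximum VMFS-5 datastore size
new_vmdk_max = 62 * TB        # new VMDK / virtual-mode RDM limit in vSphere 5.5

# Headroom left on a full datastore for snapshots, metadata and other
# management tasks:
headroom = vmfs5_max - new_vmdk_max
print(headroom // TB)         # -> 2 (TB of overhead)
```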

There are some considerations when using 62TB VMDKs as you might well imagine, but I will cover those in a future post.

16Gbit FC E2E Support
In vSphere 5.0, VMware introduced support for 16Gb FC HBAs. However, these HBAs had to be throttled down to work at 8Gb. In 5.1, we supported these HBAs running at 16Gb, but there was no support for full end-to-end 16Gb connectivity from host to array: the host-to-switch link could run at 16Gb, but the switch-to-array link had to run at 8Gb. With the release of vSphere 5.5, VMware now supports 16Gb E2E (end-to-end) Fibre Channel connectivity.

Microsoft Cluster Services Enhancements
In 5.1, we introduced support for 5-node clusters running on vSphere. In 5.5, we have a number of additional changes. First off, VMware now supports the FCoE and iSCSI protocols for shared disk access; up until now, this was supported on Fibre Channel only. We have also introduced support for the round-robin path policy on the shared disks. And finally, support for Microsoft Windows Server 2012 has been introduced. A lot of enhancements in this area.

PDL AutoRemove
I’m not going to delve back into the history of All Paths Down (APD) or Permanent Device Loss (PDL). This has a long and checkered history, and has been extensively documented. Suffice to say that this situation occurs on device failures, or when a device is incorrectly removed from the host. PDL is based on SCSI sense codes returned from the array, and a PDL state means that the ESXi host no longer sends I/O to these devices.

PDL AutoRemove in vSphere 5.5 automatically removes a device in a PDL state from the host. A PDL state on a device implies that the device is gone and cannot accept more I/Os, yet it needlessly consumes one of the 256 device slots available per host. PDL AutoRemove removes the device from the ESXi host's perspective.
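A toy Python sketch of why this matters (the device table and state names here are illustrative assumptions, not ESXi internals): a dead device still counts against the 256-devices-per-host limit until it is removed.

```python
# Hypothetical device table illustrating the 256-devices-per-host limit.
MAX_DEVICES = 256

# A host at its device limit, with one device entering a PDL state:
devices = {f"naa.{i:04d}": "on" for i in range(MAX_DEVICES)}
devices["naa.0007"] = "pdl"

# Without AutoRemove, the dead device still occupies a slot:
free_slots = MAX_DEVICES - len(devices)
assert free_slots == 0            # no room for a replacement device

# AutoRemove drops PDL devices, freeing the slot for a new device:
devices = {d: s for d, s in devices.items() if s != "pdl"}
assert MAX_DEVICES - len(devices) == 1
```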

VAAI UNMAP Improvements
vSphere 5.5 introduces a new, simpler VAAI UNMAP/reclaim command:

# esxcli storage vmfs unmap

This is a big improvement over the older vmkfstools -y method. There are some additional enhancements to the reclaim mechanism too. The reclaim size is now specified in blocks rather than as a percentage value, making it more intuitive to calculate. Dead space is also reclaimed in increments rather than all at once; previously, attempting to reclaim in increments would have UNMAP trying to reclaim the same space over and over again. Lastly, with the introduction of 62TB VMDKs, UNMAP can now handle much larger dead-space areas.
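Conceptually, the incremental mechanism walks the dead space in fixed-size chunks of VMFS blocks, issuing one UNMAP per chunk and never revisiting a block. Here is a hedged Python sketch of that idea (the function name, chunking and default reclaim unit are illustrative assumptions, not the actual VMFS internals):

```python
def unmap_in_increments(dead_blocks, reclaim_unit=200):
    """Walk the dead space in chunks of `reclaim_unit` blocks,
    issuing one (simulated) UNMAP per chunk, touching each block once."""
    issued = []
    for i in range(0, len(dead_blocks), reclaim_unit):
        chunk = dead_blocks[i:i + reclaim_unit]
        issued.append(chunk)          # one UNMAP command per chunk
    return issued

batches = unmap_in_increments(list(range(1000)), reclaim_unit=200)
assert len(batches) == 5                         # 1000 blocks / 200 per pass
assert sum(len(b) for b in batches) == 1000      # every block reclaimed once
```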

VMFS Heap Improvements
An issue with the VMFS heap in previous versions meant there were concerns when addressing more than 30TB of open files from a single ESXi host. ESXi 5.0p5 & 5.1U1 introduced a larger heap size to deal with this. vSphere 5.5 introduces a much-improved heap eviction process, so the larger, memory-consuming heap size is no longer needed. With a maximum of 256MB of heap, vSphere 5.5 allows ESXi hosts to access the full address space of a 64TB VMFS.
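The intuition behind eviction is that a small, fixed-size heap can cover an arbitrarily large address span if cold entries are recycled. A minimal Python sketch of that idea, using an LRU policy purely for illustration (the class, policy and names are assumptions, not the actual VMFS heap design):

```python
from collections import OrderedDict

# Illustrative fixed-size heap of cached metadata entries that evicts the
# least-recently-used entry instead of growing without bound.
class EvictingHeap:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # block address -> cached metadata

    def access(self, addr):
        if addr in self.entries:
            self.entries.move_to_end(addr)         # refresh LRU position
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict the LRU entry
            self.entries[addr] = object()

h = EvictingHeap(capacity=4)
for addr in range(10):        # touch far more addresses than the heap holds
    h.access(addr)
assert len(h.entries) == 4    # heap stays bounded regardless of address span
```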

Storage DRS, Storage vMotion & vSphere Replication Interop
Previously, if a VM being replicated by vSphere Replication was migrated to another datastore, it triggered a full sync: because the persistent state files (.psf) were deleted, all of the disks' contents had to be read and checksummed on each side. In vSphere 5.5, the .psf files are now moved with the virtual machine, which retains its current replication state. This means that virtual machines at the production site may now be Storage vMotion'ed and, likewise, participate in Storage DRS datastore clusters, without impacting vSphere Replication's RPO (Recovery Point Objective).

Get notified of these blog posts and more VMware storage information by following me on Twitter: @VMwareStorage

This entry was posted in Storage and vSphere by Cormac Hogan.

About Cormac Hogan

Cormac Hogan is a senior technical marketing architect within the Cloud Infrastructure Product Marketing group at VMware. He is responsible for storage in general, with a focus on core VMware vSphere storage technologies and virtual storage, including the VMware vSphere® Storage Appliance. He has been in VMware since 2005 and in technical marketing since 2011.

18 thoughts on “What’s New in vSphere 5.5 Storage”

  1. Scott Friedman

    This is all great-sounding news. I would really like to hear more about any hidden limitations still lurking in the new VMFS heap design. The blurbs I have been reading generate more questions than answers! :) It’s really nice that I can access 60+TB from a single VM (very nice), but how much storage can I have open at once on a single host? This is the root of the current problem for people with large VMs, or with many small ones that each access lots of storage (like me).

    Thanks for the great Blog!

    1. Peter L

      vVols are not part of vSphere 5.5. While I’m not supposed to discuss VMware’s roadmap, if you look closely at the NetApp vVol demo we shared at VMworld (with permission from VMware, of course ;-)), you can see the engineering build we’re currently using.
      As for NetApp versions, for any NetApp customers following, vVols works with clustered Data ONTAP 8.2, but actual HCL/IMT support will likely be in 8.2.1.

        1. Cormac Hogan

          Thanks for sharing Peter. A number of our storage partners demoed VVOLs at VMworld 2013 in SF. While they are not part of the 5.5 release, we unfortunately cannot categorically state when they will be available. However, we hope it is in the very near future.

    1. Cormac Hogan

      No NFS4 in vSphere 5.5 I’m afraid. Again, like VVOLs, we cannot categorically state when this functionality will be available, but once again we hope it is in the very near future.

  2. Pingback: Les news Quelbazar du moment 08/29/2013 - Quelbazar | Quelbazar

  3. Pingback: VMworld 2013 | virtual insanity

  4. Pingback: VMworld 2013 San Francisco - Recap - viktorious.nl - Virtualization & Cloud Management

  5. Pingback: Les news « stockage » de la semaine (weekly) - Quelbazar | Quelbazar

  6. Pingback: Welcome to vSphere-land! » vSphere 5.5 Link-O-Rama

  7. Pingback: vSphere 5.5 UNMAP Deep Dive » boche.net – VMware vEvangelist

  8. Ben Conrad

    Cormac,

    I’m performing an eval of Hyper-V 2012 R2 vs vSphere 5.5, and when I search for UNMAP issues on Hyper-V 2012 (R1), I find very few hits. VMware is really dragging its feet on re-integrating real-time UNMAP into the stack.

    For example, after a VHDX delete on a CSV Windows 2012 issues a FILE_LEVEL_TRIM within 30 seconds of the delete. No user intervention or scripting/scheduling required.

    For VMware shops that run 100s of VMFS volumes, using the esxcli workaround for UNMAP seems sloppy.

    When is VMware going to re-introduce real-time UNMAP back into the stack?

    Ben

  9. Dachshund-Digital

    Ah, no… 5.1 and Update 1, especially SSO 1.0 (for the sake of discussion), have some real issues, and downplaying the SSO issues does not accurately present them. The fact is, SSO 1.0 did not work in large-scale AD environments; it failed. VMware has acknowledged this to its enterprise customers in no uncertain terms. Moreover, VMware had no choice but to completely rework SSO 2.0 (to fully separate it from 1.0, as noted above). VMware can ill afford, in this competitive environment with Xen, KVM and their variants as well as Hyper-V, not to own up to the fact that the initial version of SSO was just not good. When VMware had to publish over 65 KBs on how to repair, rework, reinstall, uninstall, and redeploy SSO or its communication with the various vCenter services (Inventory, Lookup, etc.), it does not take much consideration to realize that VMware's QA process failed to vet SSO 1.0 sufficiently.

  10. Abhijit Chaudhuri

    Regarding the VAAI UNMAP improvements — is there a command to see how much storage space can be reclaimed?

  11. Yayat

    Hi Cormac,

    I’m sorry that this is not a comment but a question.
    I have a problem similar to what UNMAP addresses, but between the datastore and the SAN. I use vSphere 4.0 with a thin-provisioned disk on each VM, and the FC SAN storage has a thin-provisioned vDisk (volume).
    Recently we ran out of space on the SAN, where space allocation was at 90%. Trying to move some VMs to local disk did not reduce the space allocation on the storage. Deleting the FC datastore from the vSphere host also did not reduce the allocation; it was still 90% even after we deleted a 500GB datastore. The only way to reclaim the space was to delete the vDisk from the SAN and then recreate it. Do you think this problem is related to your post above, or is it something else you know of? Your advice please. Thank you very much.

