
Virtual Volumes (VVOLs) Tech Preview [with video]

Disclaimer: This is a Technology Preview. This functionality is not available at the moment; it simply demonstrates the exciting prospects that the future holds for storage-related operations in the virtual world.

Following on from some of the major storage announcements made at VMworld 2012, I wanted to give you an overview of the Virtual Volumes feature in this post. Virtual Volumes is all about making storage VM-centric – in other words, making the VMDK a first-class citizen in the storage world. Right now, everything is pretty much LUN-centric or volume-centric, especially when it comes to snapshots, clones and replication. We want to change the focus to the VMDK, allowing you to snapshot, clone or replicate on a per-VM basis from the storage array. Historically, storage admins and vSphere admins would need to discuss up front the underlying storage requirements of an application running in a VM. The storage admin would create a storage pool on the array and set features such as RAID level, snapshot capability, replication capability, etc. The storage pool would then be carved up into either LUNs or shares, which would then be presented to the ESXi hosts. Once visible on the host, this storage could then be consumed by the VM and application.

What if the vSphere admin could decide up front what the storage requirements of an application are, and then tell the array to create an appropriate VMDK based on these requirements? Welcome to VVOLs.

My colleague Duncan did a super write-up on the whole VVOL strategy in his post here. In that post he also directs you to the VMworld sessions (both 2011 & 2012) which discuss the topic in greater detail. What I want to show you in this post are the major objects and their respective roles in VVOLs. There are three objects in particular: the storage provider, the protocol endpoint and the storage container. Let’s look at each of these in turn.

Storage Provider: We mentioned that a vSphere admin creates a set of storage requirements for an application/VM. But how does an admin know what an array is capable of offering in terms of performance, availability, features, etc.? This is where the Storage Provider comes in. Out-of-band communication between vCenter and the storage array is achieved via the Storage Provider. Those of you familiar with VASA will be familiar with this concept: it allows capabilities from the underlying storage to be surfaced up into vCenter. VVOLs uses this so that storage container capabilities can be surfaced up. But there is a significant difference with VVOLs – we can now also use the storage provider/VASA to push information down to the array. This means that we can create requirements for our VMs (availability, performance, etc.), push this profile down to the storage layer, and ask it to build out the VMDK (or virtual volume) based on the requirements in the profile. The Storage Provider is created by the storage array vendor, using an API defined by VMware.
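To make the two-way flow concrete, here is a small Python sketch. This is purely illustrative: the class and method names (StorageProvider, MockArray, create_vvol) and the capability dictionary are invented for the sketch and are not the real VASA APIs.

```python
from dataclasses import dataclass

@dataclass
class StorageProfile:
    """Requirements the vSphere admin defines for a VM's disks."""
    raid_level: str
    replication: bool

class MockArray:
    """Stand-in for a vendor array; the real provider is vendor-written code."""
    CAPABILITIES = {"raid_levels": {"RAID1", "RAID5"}, "replication": True}

    def supports(self, profile: StorageProfile) -> bool:
        caps = self.CAPABILITIES
        return (profile.raid_level in caps["raid_levels"]
                and (not profile.replication or caps["replication"]))

    def instantiate(self, profile: StorageProfile, size_gb: int) -> dict:
        # The array, not the host, builds out the virtual volume.
        return {"type": "vvol", "size_gb": size_gb, "profile": profile}

class StorageProvider:
    """Out-of-band bridge between vCenter and the array."""
    def __init__(self, array):
        self.array = array

    def surface_capabilities(self) -> dict:
        # Upward direction: classic VASA, capabilities surfaced into vCenter.
        return self.array.CAPABILITIES

    def create_vvol(self, profile: StorageProfile, size_gb: int) -> dict:
        # Downward direction (new with VVOLs): push the profile to the
        # array and ask it to instantiate a matching virtual volume.
        if not self.array.supports(profile):
            raise ValueError("array cannot satisfy this profile")
        return self.array.instantiate(profile, size_gb)

sp = StorageProvider(MockArray())
profile = StorageProfile(raid_level="RAID1", replication=True)
vm_disk = sp.create_vvol(profile, size_gb=40)
```

The key point the sketch tries to capture is the direction reversal: surface_capabilities flows up to vCenter as VASA does today, while create_vvol pushes a profile down and leaves the provisioning to the array.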

Protocol Endpoint: Since the ESXi host will not have direct visibility of the VVOLs which back the VMDKs, there needs to be an I/O demultiplexer device which can communicate with the VVOLs (VMDKs) on its behalf. This is the purpose of the protocol endpoint devices, which in the case of block storage are LUNs, and in the case of NAS storage are shares or mount points. When a VM does I/O, the I/O is directed to the appropriate virtual volume by the protocol endpoint. This allows us to scale to a very large number of virtual volumes, and the multipathing characteristics of the protocol endpoint device are implicitly inherited by the VVOLs.
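A toy model of the demultiplexing, again with invented names (ProtocolEndpoint, bind, VirtualVolume are placeholders, not the real interfaces). For simplicity this toy routes reads and writes through the PE object purely to illustrate the two-part addressing (PE + VVOL id); it is not a claim about the real data path.

```python
class VirtualVolume:
    """One VMDK, instantiated as an object on the array."""
    def __init__(self, vvol_id: str):
        self.vvol_id = vvol_id
        self.blocks = {}  # LBA -> data

class ProtocolEndpoint:
    """The one device (LUN or mount point) the ESXi host actually sees."""
    def __init__(self):
        self.bound = {}  # VVOL id -> VirtualVolume

    def bind(self, vvol: VirtualVolume) -> None:
        # A VVOL must be bound to a PE before the host can address it.
        self.bound[vvol.vvol_id] = vvol

    def write(self, vvol_id: str, lba: int, data: bytes) -> None:
        # The host addresses PE + VVOL id; the PE selects the right volume.
        self.bound[vvol_id].blocks[lba] = data

    def read(self, vvol_id: str, lba: int) -> bytes:
        return self.bound[vvol_id].blocks[lba]

pe = ProtocolEndpoint()
disk = VirtualVolume("vvol-17")
pe.bind(disk)
pe.write("vvol-17", 0, b"boot sector")
```

One PE can front thousands of VVOLs in this model, which is where the scalability comes from: the host's device count stays flat no matter how many virtual volumes exist behind the endpoint.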

Storage Container: This is your storage pool on the array. Currently, one creates a pool of physical spindles on an array, perhaps builds a RAID configuration across them, and then carves this up into LUNs or shares to be presented to the ESXi hosts. With VVOLs, only the container/pool needs to be created. Once we have the storage provider and protocol endpoints in place, the storage container becomes visible to the ESXi hosts. From then on, as many VVOLs can be created in the container as there is available space, so long as the characteristics defined in the storage profile match the storage container.
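The container's role can be sketched the same way; StorageContainer and its methods are made-up names for illustration. The point is that provisioning reduces to "create a VVOL in the pool", gated only by free space and by whether the profile's required capabilities match the container.

```python
class StorageContainer:
    """A pool of array capacity; no pre-carved LUNs or shares."""
    def __init__(self, capacity_gb: int, capabilities: set):
        self.capacity_gb = capacity_gb
        self.capabilities = capabilities  # e.g. {"snapshot", "replication"}
        self.vvols = []  # sizes (GB) of VVOLs created so far

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.vvols)

    def create_vvol(self, size_gb: int, required: set) -> int:
        # The profile's requirements must match the container's capabilities.
        if not required <= self.capabilities:
            raise ValueError("profile does not match this container")
        if size_gb > self.free_gb():
            raise ValueError("container has insufficient free space")
        self.vvols.append(size_gb)
        return size_gb

pool = StorageContainer(capacity_gb=100,
                        capabilities={"snapshot", "replication"})
pool.create_vvol(40, {"snapshot"})
pool.create_vvol(40, {"snapshot", "replication"})
```

Contrast this with today's workflow: there is no up-front decision about how to slice the pool into LUNs, so a 100 GB pool can hold any mix of VVOL sizes until the space is gone.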

So with that in mind, here is a short five-minute video which ties all of this together:

Now, this is a project that can only be successful if our storage partners engage with us to make it a success. I’m pleased to say that many of our storage partners are already working with us on the first phase of this project, with many more onboarding as we speak. Admittedly, the video above is more about the architecture of VVOLs and doesn’t really show off the coolness of the feature, so I’d urge you to look at the following posts from some of our partners. EMC’s Chad Sakac has a post here about how they are integrating with Virtual Volumes, and HP’s Calvin Zito shows how their 3PAR array is integrated in this post. Interestingly, the title of both posts is about the future of storage. I think VVOLs is definitely going to change the storage landscape.

Get notifications of these blog postings and more VMware Storage information by following me on Twitter: VMwareStorage

Cormac Hogan

About Cormac Hogan

Cormac Hogan is a Senior Staff Engineer in the Office of the CTO in the Storage and Availability Business Unit (SABU) at VMware. He has been with VMware since April 2005 and has previously held roles in VMware’s Technical Marketing and Technical Support organizations. He has written a number of storage-related white papers and has given numerous presentations on storage best practices and vSphere storage features. He is also the co-author of the “Essential Virtual SAN” book published by VMware Press.

49 thoughts on “Virtual Volumes (VVOLs) Tech Preview [with video]”

  1. Pingback: Further Thoughts on VVOLS (Updated) | The Storage Architect

  2. Chris Evans

    If a VVOL container continues to simply be a LUN, then how will the storage array know which LBA ranges of the LUN correspond to a particular VVOL? Without that information, the storage array is none the wiser on the contents of the LUN. Also, Command Tag Queuing allows the I/O to a LUN on a shared storage port to be prioritised; the array will have to know LBA ranges in order to prioritise QoS to deliver guaranteed IOPS/latency as shown in your preview.

    It seems the storage vendors have the most work here and lots to do to support all the features you envisage for VVOLs.

    1. Cormac

      The VVOL container is not a LUN – it is a storage pool. The array is asked to instantiate a VVOL object on the storage container for a VMDK – so the storage array has full visibility into that.

      1. Chris Evans

        OK, thanks for clarifying. So the endpoint – that is a LUN from what your description implies. This means that all I/O for an entire pool goes through one device that is a standard SCSI LUN? If so, then this could become a large bottleneck when supporting large numbers of VVOLs.


        1. Cormac

          Not quite. VVOLs do need to be bound to a PE in order to do I/O. This allows the storage system to discover the id of the VVOL, but after that I/O is direct to the VVOL. I/O doesn’t flow through the PE.

  3. Yaron

So what is the protocol for the VVOL (SCSI protocol)? Is it the T10 object storage protocol (OSD) or some proprietary VMware protocol?

      1. Yaron

        So, this is a new protocol that will be ratified as a new standard at T10? Currently the only T10 protocol that manages Objects (over SCSI) is OSD-2.

        1. Cormac

          Not quite – SAM isn’t new and already allows multiple levels of device addressing (albeit we typically don’t see this in the SCSI world). When we bind a VVOL to a PE, we must now address the PE + VVOL when doing I/O.

          1. Chris Evans

The SCSI protocol allows for dependent logical units, i.e. devices connected to a parent. So presumably the endpoint is a LUN, and the VVOLs are dependent LUNs under the endpoint. Of course, the devil is in the detail here, in terms of performance, scalability and security.

  4. Pingback: Tech Preview – Virtual Volumes | VMware Support Insider - VMware Blogs

  5. Pingback: VMware vVolumes – Tech Preview Videos | ESX Virtualization

  6. Pingback: Virtual Volumes (vVOL) tech preview « TUG Sweden

  7. Pingback: Tech Preview of EMC’s XtremIO Flash Storage Solution |

  8. Pingback: Welcome to vSphere-land! » Storage Links

  9. Pingback: VMware vVolumes: the game changing future for storage is demoed » WoodITWork.com

  10. Pingback: What is Software Defined Storage? A VMware TMM Perspective | VMware vSphere Blog - VMware Blogs

  11. Pingback: VMware vSphere Blog: What is Software Defined Storage? A VMware TMM Perspective | Virtualization

  12. Pingback: vVOLs – A Blast From The Past

  13. Pingback: Zerto's Virtual Replication 2.0 - first looks | www.vExperienced.co.uk

  14. Pingback: An Introduction to Flash Technology |

  15. Pingback: Office of the CTO | 2013 predictions: The year of software-defined storage?

  16. Pingback: Virtualisering… Vad är det som skiljer Microsoft, Citrix och VMware « TUG Sweden

  17. Shriram

    Does this mean VAAI support is not required anymore, since all the operations are offloaded directly to the Storage arrays (SAN or NAS)?

1. Cormac Hogan (post author)

It means that the tasks which you currently offload via VAAI will be integrated with VVOLs. So you can think of VVOLs as building on what other APIs like VAAI currently do.

  18. Pingback: VMware & Virsto — The Lone Sysadmin

  19. Deepak C Shetty

With reference to your comment – “Once we have the storage provider and protocol endpoints in place, the storage container becomes visible to the ESXi hosts.” – I have a few questions.

1) So with VVOLs, the host sees the container (aka pool) and not LUNs? In other words, is the array’s pool seen as a LUN on the host? If not, what do you mean in the above statement, and what exactly does the host see – LUNs or something else?

2) I understand that one must use PE+VVOL addressing, because you want to do I/O to the part of the LUN that hosts your VMDK. I assume the PE is a LUN? Does that mean the LUN is a notional/logical entity, and not something the host sees? This relates to #1 above – what does the host see: a LUN, a pool, or something else?

3) My current understanding is that today a LUN is exposed to the host, and the pool on the array contains LUNs. Will the scenario change to the pool being exposed to the host, with VVOLs being addressed by the host, on the understanding that a VVOL is backed by some LUN – hence the need for the PE+VVOL addressing scheme?

  20. Pingback: Configuring VMware VASA for EMC VNX - VMtoday

  21. Pingback: Homelab with vSphere 5.5 and VSAN | Erik Bussink

  22. Pingback: A closer look at EMC ViPR |

  23. Pingback: A closer look at SolidFire |

  24. Pingback: Software Defined Storage – Trends, Opportunities, Challenges | Sanjay Agrawal's Blog

  25. Pingback: Le Software-Defined Datacenter et les vCloud Suites - VMnerds blog

  26. Pingback: Solidfire: uno storage "di qualità" | Virtual to the Core

  27. Pingback: A closer look at Fusion-io ioControl 3.0 | CormacHogan.com

  28. Pingback: my VVols vendor info landing page | @hansdeleenheer

  29. Pingback: Tech Preview - Virtual Volumes - IT Videos

  30. Pingback: Hypervisor-based QoS: Helps with the symptoms, but by itself it’s not the cure | SolidFire | Blog

  31. Pingback: vSphere 6.0 Storage Features Part 5: Virtual Volumes | CormacHogan.com

  32. Pingback: vSphere 6.0 Storage Features Part 5: Virtual Volumes | Storage CH Blog

  33. Pingback: Architecting IT | VMware’s Virtual and Physical SAN Misdirection
