Tag Archives: Futures

Virtual Volumes (VVOLs) Tech Preview [with video]

Disclaimer: This is a Technology Preview. This functionality is not available at the moment. It simply demonstrates the exciting prospects that the future holds for storage-related operations in the virtual world.

Following on from some of the major storage announcements made at VMworld 2012, I wanted to give you an overview of the Virtual Volumes feature in this post. Virtual Volumes is all about making storage VM-centric – in other words, making the VMDK a first-class citizen in the storage world. Right now, everything is pretty much LUN-centric or volume-centric, especially when it comes to snapshots, clones and replication. We want to change the focus to the VMDK, allowing you to snapshot, clone or replicate on a per-VM basis from the storage array.

Historically, storage admins and vSphere admins would need to discuss up front the underlying storage requirements of an application running in a VM. The storage admin would create a storage pool on the array and set features such as RAID level, snapshot capability and replication capability. The storage pool would then be carved up into either LUNs or shares, which would in turn be presented to the ESXi hosts. Once visible on the host, this storage could be consumed by the VM and application.

What if the vSphere admin could decide up front what the storage requirements of an application are, and then tell the array to create an appropriate VMDK based on these requirements? Welcome to VVOLs.

My colleague Duncan did a super write-up on the whole VVOL strategy in his post here. In it he also directs you to the VMworld sessions (both 2011 & 2012) which discuss the topic in greater detail. What I wish to show you in this post are the major objects and their respective roles in VVOLs. There are three objects in particular: the storage provider, the protocol endpoint and the storage container. Let’s look at each of these in turn.

Storage Provider: We mentioned that a vSphere admin creates a set of storage requirements for an application/VM. But how does an admin know what an array is capable of offering in terms of performance, availability, features, etc.? This is where the Storage Provider comes in. Out-of-band communication between vCenter and the storage array is achieved via the Storage Provider. Those of you familiar with VASA will be familiar with this concept. It allows capabilities from the underlying storage to be surfaced up into vCenter, and VVOLs uses this so that storage container capabilities can be surfaced up. But there is a significant difference in VVOLs – we can now also use the storage provider/VASA to push information down to the array. This means that we can create requirements for our VMs (availability, performance, etc.), push this profile down to the storage layer, and ask it to build out the VMDK (or virtual volume) based on the requirements in the profile. The Storage Provider is created by the storage array vendor, using an API defined by VMware.
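To make the two-way exchange concrete, here is a minimal Python sketch of the idea. All class and field names are hypothetical illustrations of the concept, not the actual VASA interface:

```python
# Hypothetical sketch: capabilities are surfaced up to vCenter, and (new
# with VVOLs) a per-VM requirements profile is pushed down to the array.

class StorageProvider:
    """Models the out-of-band channel between vCenter and the array."""

    def __init__(self, array_capabilities):
        # Capabilities the array advertises, e.g. replication, snapshots.
        self.array_capabilities = array_capabilities

    def surface_capabilities(self):
        # Upward direction: vCenter learns what the array can do.
        return dict(self.array_capabilities)

    def push_profile(self, vm_profile):
        # Downward direction: vCenter hands the array a per-VM profile;
        # the array checks it can satisfy it before building the VVOL.
        unmet = {k: v for k, v in vm_profile.items()
                 if self.array_capabilities.get(k) != v}
        if unmet:
            raise ValueError(f"array cannot satisfy requirements: {unmet}")
        return {"vvol_created": True, "profile": vm_profile}

provider = StorageProvider({"replication": True, "snapshots": True})
print(provider.surface_capabilities())
print(provider.push_profile({"snapshots": True})["vvol_created"])  # True
```

The key point the sketch captures is the direction change: before VVOLs, this channel was read-only from vCenter’s perspective; now requirements flow down as well.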

Protocol Endpoint: Since the ESXi host will not have direct visibility of the VVOLs which back the VMDKs, there needs to be an I/O demultiplexer device which can communicate with the VVOLs (VMDKs) on its behalf. This is the purpose of the protocol endpoint devices, which in the case of block storage is a LUN, and in the case of NAS storage is a share or mount point. When a VM does I/O, the I/O is directed to the appropriate virtual volume by the protocol endpoint. This allows us to scale to a very large number of virtual volumes, and the multipathing characteristics of the protocol endpoint device are implicitly inherited by the VVOLs.
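The demultiplexing role can be sketched in a few lines of Python. This is purely illustrative – the names and the bytearray-backed "volumes" are stand-ins, not a real storage stack:

```python
# Hypothetical sketch: the host addresses one protocol endpoint device,
# which routes each I/O to the backing virtual volume by identifier.

class ProtocolEndpoint:
    def __init__(self):
        self.bound_vvols = {}  # vvol_id -> backing storage (here, a bytearray)

    def bind(self, vvol_id, size):
        # The array binds a VVOL to this endpoint so the host can reach it.
        self.bound_vvols[vvol_id] = bytearray(size)

    def write(self, vvol_id, offset, data):
        # I/O arrives addressed to the endpoint plus a VVOL identifier;
        # the endpoint directs it to the right virtual volume.
        vvol = self.bound_vvols[vvol_id]
        vvol[offset:offset + len(data)] = data

    def read(self, vvol_id, offset, length):
        return bytes(self.bound_vvols[vvol_id][offset:offset + length])

pe = ProtocolEndpoint()
pe.bind("vm1-vmdk", 4096)
pe.write("vm1-vmdk", 0, b"hello")
print(pe.read("vm1-vmdk", 0, 5))  # b'hello'
```

Because every VVOL is reached through the one endpoint, properties configured on that device (such as multipathing) apply to all the volumes behind it – which is exactly the inheritance described above.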

Storage Container: This is your storage pool on the array. Currently, one creates a pool of physical spindles on an array, perhaps builds a RAID configuration across them, and then carves this up into LUNs or shares to be presented to the ESXi hosts. With VVOLs, only the container/pool needs to be created. Once we have the storage provider and protocol endpoints in place, the storage container becomes visible to the ESXi hosts. From then on, as many VVOLs can be created in the container as there is available space, so long as the characteristics defined in the storage profiles match the storage container’s capabilities.
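The two checks that gate VVOL creation – free space and capability match – can be sketched as follows. Again, this is an illustrative model under assumed names, not the real interface:

```python
# Hypothetical sketch: a storage container hands out VVOLs as long as
# there is free space and the requested profile matches its capabilities.

class StorageContainer:
    def __init__(self, capacity_gb, capabilities):
        self.capacity_gb = capacity_gb
        self.used_gb = 0
        self.capabilities = capabilities  # set of supported features
        self.vvols = {}

    def create_vvol(self, name, size_gb, profile):
        # Check 1: enough free space in the container.
        if self.used_gb + size_gb > self.capacity_gb:
            raise RuntimeError("container out of space")
        # Check 2: the profile only asks for capabilities the pool has.
        missing = set(profile) - self.capabilities
        if missing:
            raise RuntimeError(f"profile needs unsupported capabilities: {missing}")
        self.used_gb += size_gb
        self.vvols[name] = size_gb
        return name

container = StorageContainer(100, {"snapshots", "replication"})
container.create_vvol("vm1-disk0", 40, {"snapshots"})
print(container.used_gb)  # 40
```

Note what is absent compared with today’s workflow: no LUN carving step – volumes come straight out of the pool on demand.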

So with that in mind, here is a short 5 minute video which ties all of this together:

Now, this is a project that can only be successful if our storage partners engage with us to make it a success. I’m pleased to say that many of our storage partners are already working with us on the first phase of this project, with many more on-boarding as we speak. Admittedly, the video above is more about the architecture of VVOLs and doesn’t really show off the coolness of the feature, so I’d urge you to look at the following posts from some of our partners. EMC’s Chad Sakac has a post here about how they are integrating with Virtual Volumes, and HP’s Calvin Zito shows how their 3PAR array is integrating in this post. Interestingly, both posts are titled around the future of storage. I think VVOLs is definitely going to change the storage landscape.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

Distributed Storage Tech Preview [with video]

As most followers of this blog will know by now, VMware made some significant announcements around its storage direction at VMworld 2012 in San Francisco last month. One of the announcements related to a new feature called Distributed Storage – basically the ability to take ESXi hosts with just local storage and build a distributed datastore across all hosts in the cluster. There are so many neat features attached to this, such as its scale-out capability (just add a new ESXi node to the cluster), the ability to have compute-only nodes in the cluster (ESXi hosts with no local storage), and the introduction of Storage Policy Based Management (SPBM) to define virtual machine storage requirements such as performance and availability in the form of a profile. This profile is then pushed down to the Distributed Storage layer when the VMDK is being instantiated, and the VMDK is laid out across the distributed datastore in such a way as to meet these requirements.
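As a toy illustration of profile-driven placement, consider an availability requirement expressed as a number of host failures to tolerate: meeting it means putting replicas of the VMDK on distinct hosts. The function below is a hypothetical sketch of that idea, not the real placement algorithm:

```python
# Hypothetical sketch: lay a VMDK out across distinct hosts so that the
# requested number of host failures can be tolerated.

def place_vmdk(hosts_free_gb, size_gb, failures_to_tolerate):
    # Tolerating N host failures requires N + 1 replicas on distinct hosts.
    replicas_needed = failures_to_tolerate + 1
    # Simple heuristic: prefer the hosts with the most free space.
    candidates = sorted(hosts_free_gb, key=hosts_free_gb.get, reverse=True)
    chosen = [h for h in candidates if hosts_free_gb[h] >= size_gb][:replicas_needed]
    if len(chosen) < replicas_needed:
        raise RuntimeError("not enough hosts to meet availability requirement")
    return chosen

hosts = {"esxi-01": 500, "esxi-02": 300, "esxi-03": 450}
print(place_vmdk(hosts, 100, failures_to_tolerate=1))  # ['esxi-01', 'esxi-03']
```

The sketch also shows why scale-out is natural here: adding an ESXi node simply adds another candidate host (and its capacity) to the placement pool.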

There is so much more to Distributed Storage than that, of course. For further information, please read the articles on Distributed Storage posted by my colleagues Massimo Re Ferre here, Duncan Epping here and Christos Karamanolis here. The main reason for this post is to show you a video which was used at VMworld 2012 to demonstrate some of the neat features of this Distributed Storage announcement. It’s pretty short (about 5 minutes), but it gives you an idea as to why we are all so excited about it here at VMware.

For those of you heading to VMworld 2012 in Barcelona in October, INF-STO2192 is a session I highly recommend attending. For those of you who cannot make VMworld, I’d highly recommend watching the recording.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage