Virtual Volumes (vVols)

Is Storage DRS Still Needed With Virtual Volumes?

A question that has come up frequently in my conversations with customers is why Storage DRS is not supported with Virtual Volumes or Virtual SAN. In fact, it came up again on a call just this Thursday morning with a customer who wanted to understand more about the benefits of Virtual Volumes. Taking on board the advice that if you are asked a question more than twice it is worth writing a blog post about it, I decided it was about time I wrote up my thoughts on the topic!

The reason this question is asked so frequently is that Storage DRS has been very effective in helping customers manage the complexity inherent in running traditional storage at scale. If you are not familiar with the technology, you should know that Storage DRS continuously balances storage space usage and storage I/O load across datastores, helping applications avoid resource bottlenecks and meet their desired service levels. So if Storage DRS is so valuable, why isn't it currently supported with Virtual Volumes?

To answer this common question, I think it is important to identify a fundamental difference between a per-VM storage system, such as vVols or Virtual SAN, and a traditional storage system.

Working with Storage DRS and Traditional Storage

With a VMFS-based system, customers manage their arrays by carving out fixed-size LUNs, presenting those LUNs to ESXi hosts, and laying down VMFS on top of them. Let's say I have 160 TB of flash and HDD storage that I want to present to ESXi. With traditional VMFS storage I would carve that capacity into distinct LUNs of some specific size; in this example, let's go with a relatively large 8 TB LUN size, giving 20 LUNs in total. Next, for each of those LUNs I would have to create a corresponding VMFS datastore in vSphere. To ease my management burden, I would then group those 20 datastores into two datastore clusters: one cluster of flash-backed datastores and one of HDD-backed datastores.
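To make the arithmetic concrete, here is a minimal Python sketch of that carve-up. The names and sizes come straight from the example above; nothing here is real tooling output.

```python
TB = 1  # work in whole terabytes to keep the arithmetic obvious

FLASH_CAPACITY = 80 * TB
HDD_CAPACITY = 80 * TB
LUN_SIZE = 8 * TB

# Carve each tier into fixed-size LUNs; each LUN becomes one VMFS datastore.
flash_datastores = [f"flash-ds-{i:02d}" for i in range(FLASH_CAPACITY // LUN_SIZE)]
hdd_datastores = [f"hdd-ds-{i:02d}" for i in range(HDD_CAPACITY // LUN_SIZE)]

# Group datastores with shared characteristics into datastore clusters.
datastore_clusters = {
    "flash-cluster": flash_datastores,  # 10 datastores
    "hdd-cluster": hdd_datastores,      # 10 more
}

total = sum(len(members) for members in datastore_clusters.values())
print(total, "datastores to create, monitor, and balance")  # 20
```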

By using datastore clusters to group storage with shared characteristics together, I can have Storage DRS load balance, based on capacity and I/O, across the LUNs backing each cluster. When Storage DRS identifies an imbalance that needs to be addressed, it uses Storage vMotion to migrate the data to another datastore within the cluster. Changing the tier of storage (from HDD to flash, for example) would, in this example, require an explicit migration between datastore clusters; that second type of migration is an administrative action to explicitly place the workload on a different class of storage.

Two SDRS clusters of 10 datastores each: one containing HDD-backed datastores, the other backed by flash storage.

Storage DRS provides a great solution for managing large numbers of datastores; without it, all of that load balancing by capacity and I/O latency would have to be done manually instead.
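As a rough illustration of the kind of decision Storage DRS automates, here is a simplified Python sketch of a space-based imbalance check within one cluster. This is a conceptual model only, with made-up names and thresholds; the real Storage DRS algorithm also weighs I/O latency, migration cost, and affinity rules.

```python
SPACE_THRESHOLD = 0.80  # utilization level that triggers a rebalance recommendation

def recommend_migration(cluster: dict[str, tuple[float, float]]):
    """cluster maps datastore name -> (used_tb, capacity_tb).

    Returns a (source, destination) pair if any datastore exceeds the
    space threshold and a less-utilized sibling exists, else None.
    """
    utilization = {ds: used / cap for ds, (used, cap) in cluster.items()}
    source = max(utilization, key=utilization.get)
    destination = min(utilization, key=utilization.get)
    if utilization[source] > SPACE_THRESHOLD and destination != source:
        return source, destination  # candidate for a Storage vMotion
    return None

hdd_cluster = {
    "hdd-ds-00": (7.0, 8.0),  # 87% full -- over the threshold
    "hdd-ds-01": (3.0, 8.0),
    "hdd-ds-02": (4.5, 8.0),
}
print(recommend_migration(hdd_cluster))  # ('hdd-ds-00', 'hdd-ds-01')
```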

We can see that Storage DRS addresses some important challenges of managing storage at scale within vSphere. Why then wouldn't we support it with vVols? Don't we need the same capabilities there? The short answer is: not so much. To illustrate, let's look at the same example again, but this time using Virtual Volumes.

Working with Virtual Volumes

I again have 160 TB of flash and HDD storage that I want to present to ESXi. With Virtual Volumes I create a single vVols storage container on the storage system with access to the full pool: 80 TB of flash and 80 TB of HDD. This single storage container is then exposed in vSphere as a single vVols datastore.
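For contrast with the 20-LUN carve-up sketched earlier, here is the same capacity modeled under vVols; the container name is a hypothetical placeholder.

```python
TB = 1  # whole terabytes, as before

# One vVols storage container pools both tiers behind a single datastore.
storage_container = {
    "name": "vvols-container-01",  # hypothetical name for illustration
    "flash_tb": 80 * TB,
    "hdd_tb": 80 * TB,
}

# Exposed in vSphere as exactly one datastore -- nothing to carve or cluster.
datastores = [storage_container["name"]]
print(len(datastores), "datastore backed by",
      storage_container["flash_tb"] + storage_container["hdd_tb"], "TB")
```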


With Virtual Volumes we have entirely removed the need to balance capacity and load across many smaller datastores presented to ESXi, and with it the need for the reactive load balancing of Storage DRS. The capacity of the array is exposed directly as one large pool of storage, rather than a pool we have to simulate with a datastore cluster.

The radical simplicity of Virtual Volumes brings several additional benefits compared to a traditional storage system. With vVols the storage is abstracted behind the vVols datastore, and any load balancing that is required is offloaded to the storage system itself. This means the balancing is done close to the data, and the storage system can take full advantage of its own capabilities to optimize any required data migrations without impacting vSphere's access to that data.

In addition to avoiding Storage vMotions for load balancing, vVols can also offload the work of delivering different tiers of service. Instead of migrating between datastores to reach a new tier, I can apply VM storage policies directly to my workloads. With these policies I can quickly change the requirements placed on a workload and have the storage system itself bring it into compliance, without requiring an explicit storage migration. Similar benefits can be realized with Virtual SAN, which supports the same storage policy-based management used with vVols.
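To illustrate the difference in workflow, here is a small conceptual sketch of policy-driven tiering. The classes and names are hypothetical stand-ins, not the actual SPBM API; the point is that the change is a policy reassignment rather than a migration between datastores.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    tier: str  # capability requested from the array, e.g. "flash" or "hdd"

@dataclass
class Workload:
    name: str
    policy: StoragePolicy

# Changing tiers is a policy reassignment, not a datastore-to-datastore move.
silver = StoragePolicy("silver", tier="hdd")
gold = StoragePolicy("gold", tier="flash")

vm = Workload("db-vm-01", policy=silver)   # hypothetical VM name
vm.policy = gold  # the array brings the VM's vVols into compliance in place
print(f"{vm.name} now requires tier={vm.policy.tier}; no Storage vMotion issued")
```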

Finally, working with larger pools of storage instead of potentially hundreds of smaller datastores makes capacity planning much easier for both storage and virtual admins, removing many of the problems related to stranded capacity.
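A quick back-of-the-envelope example of stranded capacity, with made-up numbers: twenty datastores can hold plenty of free space in aggregate while no single datastore can fit a new disk.

```python
# 20 small datastores, each with some free space left over (illustrative numbers).
free_per_datastore_gb = [300] * 20   # 6,000 GB free in aggregate
new_vmdk_gb = 500

fits_somewhere = any(free >= new_vmdk_gb for free in free_per_datastore_gb)
total_free_gb = sum(free_per_datastore_gb)

print(f"aggregate free: {total_free_gb} GB, but a 500 GB VMDK fits: {fits_somewhere}")
# With one large vVols pool, the same 6,000 GB is a single block of free
# capacity, so the same placement succeeds trivially.
print(total_free_gb >= new_vmdk_gb)  # True
```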

Conclusion

As you can see, for storage presented by a single storage system there is no compelling need to support Storage DRS with Virtual Volumes at this time. There are still use cases where a feature like Storage DRS would be valuable with vVols, such as multi-array or multi-vendor clustering, but the majority of the issues Storage DRS addresses in a traditional storage system are addressed more completely and more effectively by Virtual Volumes.