HCI Economics – Maximize vSAN Thin Provisioning to Maximize Savings

This blog series focuses on how you can lower the cost of storage in your data center by getting the most out of VMware vSAN™ features and functionality.

Thin Provisioning

Thin provisioning was a somewhat controversial feature when it was first used with VMFS. vSAN’s architectural design avoids most of those concerns. Many organizations adopted an “always use Eager Zeroed Thick” policy to deal with the realities of block storage arrays circa 2012. When switching to vSAN, thin provisioning should be revisited as a fast, safe way to drive down storage costs. Let’s walk through the historical objections to thin provisioning and how vSAN addresses each one.

1. Management and monitoring overhead – Thin provisioning on VMFS block storage requires monitoring every datastore for out-of-space conditions, and early versions of VMFS had small capacity limits (2TB), which multiplied the number of datastores to watch. Arrays lowered this overhead by thin provisioning only at the level of a few large pools. vSAN pools all available capacity into a single datastore, mirroring the array behavior of having one pool to monitor (see the first sketch after this list). vSAN 7 U1 additionally introduced an “operational reserve,” which lets the administrator hold back capacity from the datastore to guard against low-capacity conditions.
2. VMFS performance overhead – The performance cost of thin provisioning was widely feared years ago, but times have changed. Historically, allocating new blocks to a thin VMDK on VMFS relied on SCSI reservations, which locked the whole LUN and could cause performance issues. VAAI ATS (atomic test-and-set) mitigated this, but vSAN by design never had the problem: ownership of blocks is handled without the need for a clustered file system. vSAN behaves more like NFS in that it is “sparse” by default, and setting a VMDK as “thick” is a capacity reservation, not an optimization of the write IO path (see the second sketch after this list).
3. Array performance overhead – Some older storage arrays carried a real performance penalty for thin provisioning. Storing the metadata map on disk meant every guest read also required a metadata read, amplifying read IO by 100% and effectively cutting the array’s read performance in half (see the third sketch after this list). vSAN, like modern arrays, stores metadata maps on flash and caches them in DRAM to avoid this issue.
4. Shared VMDK support requirements – Historically, shared VMDKs (such as those required for Oracle RAC) had to be Eager Zeroed Thick. This is no longer a requirement on vSAN.
5. Vendor/application support – This is an area where some vendors still follow old best practices unnecessarily. Most major application vendors support thin provisioning, and some previous holdouts (such as Microsoft Exchange) have changed their guidance to support it.
6. Thin provisioning was not reliable – Anyone who has used thin provisioning for a long time has noticed that the numbers did not always add up; one customer I spoke with found a petabyte-scale discrepancy. Windows would report a drive as using 50GB, the thin VMDK would show 200GB in use, and the array would show 500GB consumed for the VMFS volume. This is the result of NTFS/XFS/EXT4 and similar file systems being thin-unfriendly: they aggressively redirect new writes into free space rather than reusing freed blocks, which over time slowly claws back the thin provisioning savings. TRIM/UNMAP could solve this, but those commands had a history of overwhelming storage controllers, and block reclamation had to be managed at both the virtual machine layer and an additional VMFS layer. vSAN avoids this complexity because reclamation happens at a single layer (virtual machine to VMDK), and it mitigates the performance concerns with a scale-out controller design: controller performance scales out to handle metadata operations, and the de-staging of UNMAP commands is intelligently throttled. Enabling UNMAP support can yield an additional 20-30% of thin provisioning savings (see the final sketch after this list).
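
To make the “one pool to monitor” point concrete, here is a minimal Python sketch using pyVmomi that reports how full the vSAN datastore is. The vCenter address, credentials, and the 70% warning threshold are placeholder assumptions for illustration, not VMware recommendations.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders -- replace with your own environment's values.
VCENTER, USER, PASSWORD = "vcenter.example.com", "administrator@vsphere.local", "secret"
WARN_THRESHOLD = 0.70  # illustrative: warn when the datastore is more than 70% full

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type != "vsan":
            continue  # only the single pooled vSAN datastore matters here
        used = ds.summary.capacity - ds.summary.freeSpace
        pct = used / ds.summary.capacity
        flag = "WARN" if pct > WARN_THRESHOLD else "ok"
        print(f"{flag}  {ds.name}: {pct:.0%} used "
              f"({used / 2**40:.1f} TiB of {ds.summary.capacity / 2**40:.1f} TiB)")
    view.DestroyView()
finally:
    Disconnect(si)
```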
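
The claim in point 2, that “thick” on vSAN is purely a capacity reservation (the Object Space Reservation rule in the storage policy) rather than a change to the write path, can be modeled in a few lines. This is a conceptual sketch with illustrative numbers; it deliberately ignores FTT/RAID overhead.

```python
from dataclasses import dataclass

@dataclass
class Vmdk:
    size_gb: int          # provisioned size of the virtual disk
    written_gb: int       # blocks the guest has actually written
    osr_percent: int = 0  # Object Space Reservation: 0 = thin, 100 = "thick"

    @property
    def reserved_gb(self) -> float:
        # The reservation only affects capacity accounting, not the data path.
        return self.size_gb * self.osr_percent / 100

    @property
    def datastore_charge_gb(self) -> float:
        # The datastore is charged the larger of what's reserved and what's written;
        # either way, blocks are still allocated sparsely as the guest writes.
        return max(self.reserved_gb, self.written_gb)

thin = Vmdk(size_gb=500, written_gb=50, osr_percent=0)
thick = Vmdk(size_gb=500, written_gb=50, osr_percent=100)
print(thin.datastore_charge_gb)   # 50.0  -> only written blocks consume capacity
print(thick.datastore_charge_gb)  # 500.0 -> reservation consumes capacity, same IO path
```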
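
Point 3’s arithmetic is worth spelling out. A back-of-the-envelope calculation, assuming a hypothetical IOPS figure:

```python
def effective_read_iops(backend_iops: int, metadata_reads_per_guest_read: float) -> float:
    """Each guest read costs 1 data read plus N metadata reads on the back end."""
    return backend_iops / (1 + metadata_reads_per_guest_read)

BACKEND_IOPS = 100_000  # hypothetical raw read IOPS of the array's disks

# Old array: the metadata map lives on disk, so every guest read pays for
# two back-end reads -- 100% amplification.
print(effective_read_iops(BACKEND_IOPS, 1.0))  # 50000.0 -> performance halved

# vSAN / modern arrays: metadata is on flash and cached in DRAM, so the
# per-read metadata cost on the capacity disks is effectively zero.
print(effective_read_iops(BACKEND_IOPS, 0.0))  # 100000.0
```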
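
Finally, the customer anecdote in point 6 can be put into numbers. Everything below uses the figures from the example above; the 20-30% range is the savings figure quoted in the text.

```python
# Each layer only grows, because blocks freed by the guest are never returned
# to the layer below without TRIM/UNMAP.
guest_used_gb = 50       # what Windows reports as used
vmdk_allocated_gb = 200  # what the thin VMDK has grown to
array_consumed_gb = 500  # what the array reports consumed for the VMFS LUN

dead_space_vmdk = vmdk_allocated_gb - guest_used_gb       # 150 GB stranded in the VMDK
dead_space_array = array_consumed_gb - vmdk_allocated_gb  # 300 GB stranded below VMFS

print(f"Stranded between guest and VMDK: {dead_space_vmdk} GB")
print(f"Stranded between VMDK and array: {dead_space_array} GB")

# On vSAN there is a single reclamation layer (VM to VMDK), and the text above
# quotes an additional 20-30% thin provisioning savings once UNMAP is enabled.
for pct in (0.20, 0.30):
    print(f"At {pct:.0%}: {vmdk_allocated_gb * pct:.0f} GB of the allocated VMDK comes back")
```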

VMware vSAN is ready not only to provision storage thin but also to keep it thin. These capacity savings of 20-50% add up, driving down the total cost of storage in a safe and consistent manner.