VMware vSAN supports thin provisioning, which lets you start with only the storage capacity currently needed and grow consumption over time as data is written. Using the vSAN thin provisioning feature, you can create virtual disks in a thin format. A thin virtual disk presents its full provisioned size to the guest, but vSAN commits only as much storage capacity as the disk needs for its initial operations. This is the default behavior: the Storage Policy Based Management (SPBM) rule Object Space Reservation (OSR) is set to 0.
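The allocation semantics above can be illustrated with a toy model (this is not a VMware API; the class, block size, and OSR handling are simplified assumptions for illustration). The disk advertises its full provisioned size, but capacity is committed only as blocks are first written; an OSR value above 0 would pre-reserve a percentage up front.

```python
BLOCK = 4096  # assumed block size in bytes, for illustration only

class ThinDisk:
    """Toy model of a thin-provisioned object with an OSR-style reservation."""

    def __init__(self, provisioned_bytes, osr_percent=0):
        self.provisioned = provisioned_bytes
        self.committed = set()  # block numbers that consume real capacity
        # OSR > 0 pre-reserves a percentage of the provisioned capacity
        reserved = int((provisioned_bytes // BLOCK) * osr_percent / 100)
        self.committed.update(range(reserved))

    def write(self, offset, length):
        # Capacity is committed lazily, on first write to each block
        first, last = offset // BLOCK, (offset + length - 1) // BLOCK
        self.committed.update(range(first, last + 1))

    def committed_bytes(self):
        return len(self.committed) * BLOCK

disk = ThinDisk(100 * 2**30)   # 100 GiB provisioned, OSR = 0 (the default)
disk.write(0, 10 * 2**20)      # guest writes 10 MiB
print(disk.provisioned)        # full 100 GiB is advertised
print(disk.committed_bytes())  # only 10 MiB is actually committed
```

With OSR = 0 the committed footprint tracks actual writes, which is what makes the reclamation behavior discussed below matter.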
One challenge with thin provisioning is that the allocated space slowly grows over time even when the guest file system's usage does not. This imbalance arises because many modern guest file systems prefer to write into never-used free blocks. As a result, a guest partition may show 1 GB in use for a small, frequently overwritten log file while consuming 1 TB at the VMDK level. And while VMFS is thin-friendly, deletions or migrations could leave LUNs with unused blocks still allocated at the storage array level. Historically, the means of recovering this space were:
Time-Consuming – Required manual intervention at both the guest and array level and could not easily be automated.
Performance-Intensive – Required virtual machine and LUN migrations, as well as writing zeros to free space.
Short-Lived – In environments where large provisioning actions are automated, or large volumes of writes arrive, the reclaimed space could “evaporate” shortly after you recovered it. This often resulted in a never-ending game of storage capacity Jenga™.
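The growth pattern described above can be sketched with a small simulation (a toy model, not vSAN internals; the block counts are arbitrary). A guest file system rewrites the same small log file into fresh blocks each time; the guest's live usage stays constant while the thin disk's committed space only ever grows, because nothing tells the disk the old blocks were freed.

```python
BLOCK = 4096  # assumed block size, for illustration

class ThinDisk:
    """Toy thin disk: commits a block forever once it is written."""
    def __init__(self):
        self.committed = set()
    def write(self, block_no):
        self.committed.add(block_no)

disk = ThinDisk()
live_blocks = set()  # blocks the guest file system considers in use
next_free = 0
for generation in range(1000):   # 1000 rewrites of a 256-block (1 MiB) log
    live_blocks = set()          # guest frees the old copy logically...
    for _ in range(256):
        disk.write(next_free)    # ...but writes the new copy to fresh blocks
        live_blocks.add(next_free)
        next_free += 1

print(len(live_blocks) * BLOCK)     # guest still sees only ~1 MiB in use
print(len(disk.committed) * BLOCK)  # disk has committed ~1000 MiB
```

Scaled up, this is exactly the 1 GB-in-guest versus 1 TB-at-VMDK disparity described above.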
VMware vSAN 6.7 U1 introduces automated space reclamation with support for SCSI UNMAP and ATA TRIM. These commands let the guest OS or file system notify the back-end storage that a block is no longer in use and may be reclaimed. Because vSAN does not use LUNs or VMFS, reclamation does not require the multiple layers of processing found in traditional storage. UNMAP throughput can be tracked per host in the vSAN performance service. Additional benefits include eliminating writes pending destage for freed data, as well as releasing cache assigned to no-longer-valid data.
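Extending the toy model above shows what TRIM/UNMAP adds (again, a simplified sketch, not vSAN internals): the guest explicitly declares freed blocks, so the backing store can release the committed capacity instead of holding it forever.

```python
BLOCK = 4096  # assumed block size, for illustration

class ThinDisk:
    """Toy thin disk that honors an UNMAP/TRIM-style release operation."""
    def __init__(self):
        self.committed = set()
    def write(self, block_no):
        self.committed.add(block_no)
    def unmap(self, block_no):
        # UNMAP/TRIM: the guest declares the block unused, so its
        # committed capacity can be reclaimed by the backing store
        self.committed.discard(block_no)

disk = ThinDisk()
for b in range(1024):
    disk.write(b)        # guest writes 4 MiB of data
for b in range(512):
    disk.unmap(b)        # guest deletes half of it and issues TRIM/UNMAP

print(len(disk.committed) * BLOCK)  # only the live half remains committed
```

The design point is that reclamation becomes a normal part of the I/O path, rather than a separate zero-fill-and-migrate maintenance cycle.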
This feature must be explicitly enabled, and is enabled globally for the cluster. The largest expected performance impact comes from removals of cold or stale data from the capacity tier, or from file system activities such as formatting that send large volumes of UNMAP operations at one time. Once enabled for the cluster, the feature may be disabled for an individual virtual machine with a custom VMX configuration setting, or by disabling it within the guest operating system. Verify that your operating system, file system, virtual machine hardware version, and guest configuration settings support TRIM or UNMAP.
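As one example of the guest-side verification mentioned above, a Linux guest exposes discard (TRIM/UNMAP) capability through the block device's sysfs queue attributes; a `discard_max_bytes` of 0 means the device does not accept discards. A minimal check, assuming a Linux guest (the device name `sda` is a placeholder):

```python
from pathlib import Path

def discard_supported(device="sda"):
    """Return True if the Linux block device advertises discard support.

    Reads /sys/block/<device>/queue/discard_max_bytes; a value of 0
    means discards (TRIM/UNMAP) are not supported on this device.
    """
    path = Path(f"/sys/block/{device}/queue/discard_max_bytes")
    if not path.exists():
        return False  # no such device, or not a Linux guest
    return int(path.read_text().strip()) > 0

print(discard_supported())  # check the placeholder device "sda"
```

Even when the device advertises support, the guest file system must also issue discards (for example, via periodic or mount-time trim), so check the OS configuration as well.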
For more specific guidance on enabling and configuring guest operating systems for this functionality, see the Space Efficiencies guide on VMware.StorageHub.com.