1) VMFS block alignment with respect to underlying array chunks. Alignment at the VMFS layer has been addressed since VMFS-3 (used in vSphere 3 and 4), which aligns at Logical Block Address (LBA) 128, and VMFS-5 (used in vSphere 5.x), which aligns at LBA 2048 by default. However, this issue is not relevant for Virtual SAN: Virtual SAN does not utilize VMFS, but rather a native object store, so there is no underlying array format to align against.
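As a quick sanity check on those defaults, the starting LBAs translate to the following byte offsets, assuming the standard 512-byte logical sector size (a sketch for illustration, not VMware tooling):

```python
# Convert the default VMFS starting LBAs to byte offsets,
# assuming 512-byte logical sectors (the common case).
SECTOR_BYTES = 512

vmfs3_start = 128 * SECTOR_BYTES    # 65,536 bytes = 64 KiB
vmfs5_start = 2048 * SECTOR_BYTES   # 1,048,576 bytes = 1 MiB

print(vmfs3_start, vmfs5_start)  # → 65536 1048576
```

The 1 MiB boundary used by VMFS-5 is a multiple of virtually every common array chunk size, which is why it is a safe default.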
2) Alignment of guest operating system blocks with VMFS blocks. Older guest operating systems have an issue with block alignment that can cause split IO. This occurs when the guest filesystem partition starts at an unaligned LBA, and as a result guest IOs may cross block boundaries in the underlying VMFS volume or Virtual SAN datastore. Newer operating systems (e.g., Windows 7 and later) do not have this issue because they start the partition at an aligned 1MB LBA within a vmdk. For more background information on guest alignment, see our previous blog post.
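To make split IO concrete, here is a minimal sketch (the function and values are illustrative, not a VMware utility) that checks whether a guest IO straddles an underlying block boundary. A legacy partition starting at LBA 63, as older Windows versions used, causes a 4 KiB guest block to span two underlying 4 KiB blocks, while a 1 MiB-aligned partition does not:

```python
def crosses_block_boundary(offset_bytes: int, io_size: int, block_size: int) -> bool:
    """Return True if the IO [offset, offset + io_size) touches more than
    one underlying block of block_size bytes -- i.e., it is a split IO."""
    first_block = offset_bytes // block_size
    last_block = (offset_bytes + io_size - 1) // block_size
    return first_block != last_block

SECTOR = 512

# Legacy partition start at LBA 63 (byte offset 32,256): a 4 KiB guest
# write at the start of the filesystem straddles two underlying blocks.
print(crosses_block_boundary(63 * SECTOR, 4096, 4096))    # → True

# 1 MiB-aligned partition start (LBA 2048): the same write stays
# within a single underlying block.
print(crosses_block_boundary(2048 * SECTOR, 4096, 4096))  # → False
```

Every such split turns one guest IO into two IOs at the layer below, which is the performance penalty the alignment recommendation avoids.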
The need for guest alignment in older operating systems is still applicable with Virtual SAN. However, the performance impact of split IO caused by guest OS misalignment is less noticeable for guests residing on Virtual SAN than for guests on traditional storage, for the following reasons.
a) All Virtual SAN writes go to the flash acceleration layer and are coalesced before they are de-staged to HDD.
b) Read performance will not be highly impacted by split IOs that span cache lines due to guest OS misalignment, as the Virtual SAN flash acceleration layer can serve much higher levels of IOps than spinning disk.
Because all Virtual SAN writes and the vast majority of reads are served from the flash acceleration layer, the impact of guest OS misalignment is lessened in a Virtual SAN environment when compared to misaligned guests residing on traditional storage.
So, in summary, we still recommend that you align older guest operating systems for optimal performance. If they are not aligned, there will be a performance penalty, but generally (depending on workload characteristics) the penalty is less pronounced on Virtual SAN, due to the use of our flash acceleration layer for write buffering and read caching.