Virtual SAN Hardware Guidance Part 1 – Solid State Drives

With the advent of VMware Virtual SAN, there have been many questions about the type of server hardware that should be used to support and configure a Virtual SAN environment. While this may be as easy as choosing a Ready Node server or following a Ready System recommendation for your Virtual SAN environment (more information on the Virtual SAN Ready program can be found here), many people will choose to build their own configuration from individual components listed in the VMware Compatibility Guide (VCG).

Virtual SAN gives you the flexibility to build your own configuration on a given vendor's hardware platform, and those component choices can greatly differentiate the performance of your Virtual SAN clusters. But with flexibility comes a number of decision points. This series of blog posts provides initial guidance on the hardware design considerations involved in choosing specific components for a Virtual SAN environment.

Virtual SAN and Solid State Drives
Virtual SAN utilizes Solid State Drives (SSDs) as both a write buffer and a read cache to accelerate the performance of the platform. SSDs store persistent data on solid-state NAND flash modules. Virtual SAN supported SSDs can utilize one of three common interfaces: SATA, SAS, or PCIe. So what are the tradeoffs between these interfaces? From a pure interface speed perspective, each provides a different maximum throughput level, as noted below.

  • SAS drives commonly support either 6Gb/s or 12Gb/s (based on the SAS-2 or SAS-3 specification)
  • SATA drives commonly support either 1.5Gb/s, 3Gb/s, or 6Gb/s (based on the SATA 1.0, 2.0 or 3.0 specification)
  • PCIe drives commonly support from 1 to 32 lanes, with throughput dependent on the PCIe generation

Note: SATA 3Gb/s or 6 Gb/s drives may be connected to a SAS interface, but SAS drives cannot be connected to a SATA interface.
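One subtlety behind these figures: the SAS and SATA speeds above are raw line rates, and both interfaces use 8b/10b encoding at these speeds, so usable bandwidth is roughly 80% of the stated rate. The sketch below (a rough estimate that ignores protocol overhead beyond encoding) converts the listed line rates into approximate usable throughput:

```python
# Rough usable-bandwidth estimate for the SAS/SATA line rates above.
# 8b/10b encoding transmits 10 line bits per data byte, so usable
# throughput is ~80% of the raw line rate (other overhead ignored).

def usable_mb_per_s(line_rate_gbps: float) -> float:
    """Convert a raw line rate in Gb/s to approximate usable MB/s."""
    return line_rate_gbps * 1000 / 10  # 10 line bits per data byte

for rate in (1.5, 3.0, 6.0, 12.0):
    print(f"{rate:>4} Gb/s -> ~{usable_mb_per_s(rate):.0f} MB/s")
# 6 Gb/s works out to ~600 MB/s, 12 Gb/s to ~1200 MB/s
```

This is why a "6Gb/s" SATA SSD tops out well under 750 MB/s in practice, even before controller and protocol overhead.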

The interface performance numbers above are straightforward for SAS and SATA (the interface speeds are clearly stated as 3Gb/s, 6Gb/s, or 12Gb/s), but you may be wondering what PCIe generations and lanes mean for interface performance. The first variable is the PCIe generation: each subsequent generation has increased the speed possible per lane, as detailed below.

  • v1.x: 250 MB/s (2.5 GT/s)
  • v2.x: 500 MB/s (5 GT/s)
  • v3.0: 985 MB/s (8 GT/s)
  • v4.0: 1969 MB/s (16 GT/s)

A PCIe device can use from 1 to 32 lanes, meaning a PCIe Gen 2 card with x8 lanes can produce a maximum of 4GB/s (32 Gb/s) of throughput. As a general guideline, PCIe SSD devices will typically outperform SAS/SATA SSDs. But interface performance is only one factor in choosing an SSD. Next, let's look at drive I/O performance.
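The lane arithmetic above can be sketched in a few lines. This uses the per-lane figures from the list to compute maximum one-direction throughput for any generation/lane combination, including the Gen 2 x8 example from the text:

```python
# Per-lane usable throughput (MB/s) for each PCIe generation,
# matching the per-lane figures listed above.
PCIE_MB_PER_LANE = {"1.x": 250, "2.x": 500, "3.0": 985, "4.0": 1969}

def pcie_throughput_gb_s(gen: str, lanes: int) -> float:
    """Maximum one-direction throughput in GB/s for a gen/lane combo."""
    return PCIE_MB_PER_LANE[gen] * lanes / 1000

# The example from the text: a Gen 2 x8 card is 8 * 500 MB/s = 4 GB/s
print(pcie_throughput_gb_s("2.x", 8))   # -> 4.0
# A common Gen 3 x4 SSD slot: 4 * 985 MB/s
print(pcie_throughput_gb_s("3.0", 4))   # -> 3.94
```

Note that a drive's sustained throughput is governed by its NAND and controller, not the bus; the interface figure is only an upper bound.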

SSD IOPS  
When selecting hardware for a Virtual SAN cluster, you may notice that SSDs on the VCG site are categorized into different classes based on write performance. All SSDs are not created equal, so to simplify choosing an SSD, VMware has categorized SSD devices into five performance classes. The class of the SSD you choose can greatly affect the performance of your Virtual SAN cluster. Below are the SSD classes specified within the VMware Compatibility Guide.

  • Class A: 2,500-5,000 writes per second
  • Class B: 5,000-10,000 writes per second
  • Class C: 10,000-20,000 writes per second
  • Class D: 20,000-30,000 writes per second
  • Class E: 30,000+ writes per second
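The class boundaries above amount to a simple threshold lookup. A minimal sketch (the function name and the "unclassified" fallback for drives below the Class A floor are illustrative, not VMware terminology):

```python
def vsan_ssd_class(writes_per_sec: int) -> str:
    """Map a sustained write-IOPS figure to the VCG performance
    class boundaries listed above."""
    thresholds = [
        (30_000, "E"),
        (20_000, "D"),
        (10_000, "C"),
        (5_000,  "B"),
        (2_500,  "A"),
    ]
    for floor, cls in thresholds:
        if writes_per_sec >= floor:
            return f"Class {cls}"
    return "unclassified"  # below the Class A floor

print(vsan_ssd_class(12_000))  # -> Class C
```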

Write performance is a good general gauge of SSD performance; we chose it because writes are much more of a limiting factor for SSDs than either random or sequential reads. To gain further insight into how an SSD may affect workloads in Virtual SAN, however, one should also consider factors such as the queue depth at which the SSD vendor reports its metrics, and the drive's maximum latency numbers.

SLC vs MLC vs eMLC 
As you look at different SSDs, you may notice SSD vendors will categorize their components as either single-level cell (SLC), multi-level cell (MLC) or enterprise multi-level cell (eMLC) NAND. NAND flash modules are the non-volatile storage components that comprise SSDs.

In single-level cell (SLC) NAND, each cell stores a single bit (0 or 1) of information. Multi-level cell (MLC) NAND uses multiple voltage levels per cell to store more bits; an MLC cell typically stores 2 bits, giving four states (00, 01, 10, 11). MLC is typically lower in cost than SLC, but the NAND modules typically have a shorter lifespan, and because MLC uses the same number of transistors as SLC to represent more states, there is potentially a higher risk of errors within each module. eMLC is a middle ground between SLC and MLC in cost and lifespan, and likewise typically stores 2 bits per cell. Because eMLC flash media is rated for more program-erase (P/E) cycles than consumer MLC, it has greater endurance and can tolerate the types of workloads that enterprise applications require.

So does this mean that any SSD utilizing SLC is more reliable than eMLC, which in turn is more reliable than MLC? Not necessarily. The reliability and performance of the individual NAND modules can be bolstered by features within an SSD's controller. Different vendors use different SSD design techniques, such as NAND over-provisioning and SSD controller endurance features, to increase the reliability of their drives and achieve "enterprise class" status. Because of this, VMware does not differentiate in its support of any of these NAND technologies. What matters is that the SSD device meets the minimum performance and reliability metrics defined by VMware within its performance class, regardless of the vendor's design choices (i.e. NAND type vs. SSD controller features) used to reach those metrics.

Note: This graphic displays different SSD design approaches combined with different NAND types to achieve similar SSD endurance levels. Figures here are for illustrative purposes only.

VMware SSD Endurance Requirements

SSD write metrics are the primary measurement used by SSD vendors to gauge SSD reliability. While there is no standard metric across all vendors, most measure SSD reliability in either Drive Writes Per Day (DWPD) or Petabytes Written (PBW). VMware requires that any SSD device within the VCG meet the following minimum endurance metrics over a five-year lifespan.

VMware endurance requirements for SAS and SATA SSDs

  • The drive must support at least 10 full Drive Writes per Day (DWPD), and
  • The drive must support random write endurance up to 3.5 PB on 8KB transfer size per NAND module, or
  • The drive must support random write endurance up to 2.5 PB on 4KB transfer size per NAND module.

VMware Endurance requirements for PCIe SSDs

  • The drive must support at least 10 full Drive Writes per Day (DWPD), or
  • The drive must support random write endurance up to 3.5 PB on 8KB transfer size per NAND module, or
  • The drive must support random write endurance up to 2.5 PB on 4KB transfer size per NAND module.
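To put the DWPD figure in perspective, a back-of-the-envelope sketch of the total data written that 10 DWPD implies over a five-year lifespan (the 400 GB capacity is a hypothetical example, not a VMware requirement):

```python
# Back-of-the-envelope endurance check for a DWPD rating:
# total data written = capacity * writes/day * days in lifespan.

def lifetime_writes_tb(capacity_gb: float, dwpd: float = 10,
                       years: int = 5) -> float:
    """Total data written (TB) implied by a DWPD rating over `years`."""
    return capacity_gb * dwpd * 365 * years / 1000

# A hypothetical 400 GB SSD at 10 DWPD over 5 years:
print(lifetime_writes_tb(400))  # -> 7300.0 TB, i.e. 7.3 PB
```

This shows why DWPD and PBW ratings are roughly interchangeable once drive capacity is fixed: either can be derived from the other.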

While Virtual SAN focuses on enabling performance and operational efficiency for the majority of workloads without the need for complex design decisions, there will always be those who want to customize their Virtual SAN environment as much as possible through the selection of specific hardware components. Hopefully this post has given you some insight into SSD hardware considerations around Virtual SAN. In the next part of this series, we will delve into storage controller considerations.