Datacenters continue to grow as the use of both public and private clouds becomes more prevalent. A comprehensive review of density, power, and performance is increasingly crucial to understanding the tradeoffs when considering new storage technologies as replacements for legacy solutions. Expanding on previous articles comparing storage technologies and the IOPS performance available from flash-based storage, this article compares the density, power, and performance differences between traditional hard disk drives (HDDs) and flash-based storage. As might be expected, we found that the flash-based storage performed very well in comparison to the traditional hard disk drives. This article quantifies our findings.
In addition to VMmark’s previous performance measurement capability, VMmark 2.5 adds the ability to collect power measurements on servers and storage under test. VMmark 2.5 is a multi-host virtualization consolidation benchmark that utilizes a combination of application workloads and infrastructure operations running simultaneously to model the performance of a cluster. For more information on VMmark 2.5, see this overview.
Hypervisor: VMware vSphere 5.1
Servers: Two x Dell PowerEdge R720
BIOS settings: High Performance Profile Enabled
CPU: Two x 2.9GHz Intel Xeon E5-2690
HBAs: Two x 16Gb QLE2672 per system under test
Storage:
– HDD configuration: EMC CX3-80, 120 disks, 8 trays, 1 SPE, 30U
– Flash-based configuration: Violin Memory 6616, 64 VIMMs, 3U
Workload: VMware VMmark 2.5.1
For this experiment we set up a vSphere 5.1 DRS-enabled cluster consisting of two identically configured Dell PowerEdge R720 servers. A series of VMmark 2.5 tests was then conducted on the cluster, with the same VMs moved onto each storage configuration under test and the number of tiles progressively increased until the cluster reached saturation. Saturation was defined as the point at which the cluster was unable to meet the VMmark 2.5 quality-of-service (QoS) requirements. We selected the EMC CX3-80 and the Violin Memory 6616 as representatives of the previous generation of traditional HDD-based and flash-based storage, respectively, and would expect comparable arrays of those generations to show characteristics similar to what we measured in these tests. In addition to the VMmark 2.5 results, esxtop data was collected to provide further statistics. The HDD configuration running a single tile was used as the baseline, and all VMmark 2.5 results in this article (excluding raw watts, %CPU, and latency metrics) were normalized to that result.
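The baseline normalization described above can be sketched as follows. All raw scores here are hypothetical placeholders to illustrate the method, not measured results:

```python
def normalize(raw_scores, baseline_key="hdd_1_tile"):
    """Normalize raw VMmark 2.5 scores to the single-tile HDD baseline."""
    baseline = raw_scores[baseline_key]
    return {cfg: score / baseline for cfg, score in raw_scores.items()}

# Hypothetical raw scores -- not measured data from the article.
raw = {"hdd_1_tile": 0.95, "hdd_7_tiles": 5.60, "flash_9_tiles": 8.10}
norm = normalize(raw)
# The baseline configuration normalizes to 1.0 by construction;
# every other entry expresses its score as a multiple of the baseline.
```

Raw watts, %CPU, and latency were reported as-is rather than being passed through this normalization.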
Average Watts and VMmark 2.5 Performance Per Kilowatt Comparison:
For our comparison of the two technologies, we first evaluated both the average watts drawn by the storage arrays and the corresponding VMmark 2.5 Performance Per Kilowatt (PPKW) score. Note that the HDD configuration reached saturation at 7 tiles, while the flash-based configuration was able to support a total of 9 tiles while still meeting the VMmark 2.5 quality-of-service requirements.
As can be seen from the above graphs, the difference between the two technologies is striking. The flash-based configuration drew nearly 50% less average power than the HDD configuration across all tiles tested, and its PPKW score was on average 3.4 times higher than the HDD configuration's across all runs.
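The PPKW metric divides the VMmark score by the average power in kilowatts, so halving the power while raising the score compounds into a large gain. A minimal sketch, using hypothetical score and wattage values rather than the measured data:

```python
def ppkw(vmmark_score, avg_watts):
    """VMmark 2.5 Performance Per Kilowatt: score / (average watts / 1000)."""
    return vmmark_score / (avg_watts / 1000.0)

# Hypothetical values chosen only to illustrate the shape of the comparison.
hdd_ppkw = ppkw(vmmark_score=5.6, avg_watts=2000.0)    # 2.8 per kW
flash_ppkw = ppkw(vmmark_score=8.1, avg_watts=1000.0)  # 8.1 per kW
# A higher score at roughly half the power multiplies the PPKW advantage.
```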
Application Score Comparison:
Given the very large difference in PPKW, we dug deeper into the potential root causes beyond the discrepancy in power consumed alone. Because the application workloads exhibit random access patterns, in contrast to the largely sequential nature of the infrastructure operations, we focused on the differences in application scores between the two configurations, as this is where we expected the flash-based configuration to provide the majority of its gains.
The difference in application workload scaling is equally clear. Although running the same number of tiles, and thus attempting the same amount of work, the flash-based configuration produced application workload scores that were 1.9 times higher than the HDD configuration's across all 7 tiles.
CPU and Latency Comparison:
After exploring the power consumption and various areas of performance difference, we decided to look into two additional key components behind the performance improvements: CPU utilization and storage latency.
In our final round of data assessment we found that CPU utilization with the flash-based storage was on average 1.53 times higher than with the HDD configuration across all 7 tiles. Higher CPU utilization might appear sub-optimal; however, we determined that the systems were spending less time waiting for I/O to complete and were thus getting more work done. This is especially visible in the storage latencies of the two configurations: the flash-based configuration showed extremely flat latencies that averaged less than one tenth of the HDD configuration's.
Finally, when comparing the physical space requirements of the two configurations, the flash-based storage was effectively 92% denser than the traditional HDD configuration (achieving 9 tiles in 3U versus 7 tiles in 30U). In addition to this density advantage, the flash-based storage allowed a 29% increase in the number of VMs run on the same server hardware while still meeting the VMmark 2.5 QoS requirements.
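The 92% and 29% figures above follow directly from rack-units-per-tile arithmetic on the numbers already given (9 tiles in 3U versus 7 tiles in 30U):

```python
# Rack-space-per-tile arithmetic behind the density figures above.
hdd_u, hdd_tiles = 30, 7       # EMC CX3-80: 7 tiles in 30U
flash_u, flash_tiles = 3, 9    # Violin Memory 6616: 9 tiles in 3U

hdd_u_per_tile = hdd_u / hdd_tiles        # ~4.29U of rack space per tile
flash_u_per_tile = flash_u / flash_tiles  # ~0.33U of rack space per tile

# Reduction in rack space per tile: ~0.92, i.e. "92% denser".
density_gain = 1 - flash_u_per_tile / hdd_u_per_tile
# Increase in tiles (and thus VMs) on the same servers: ~0.29, i.e. "29% more".
vm_increase = flash_tiles / hdd_tiles - 1
```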
The flash-based storage showed wins across the board in power and performance, consuming half the power while achieving over three times the performance. Although the initial cost of flash-based storage can be daunting compared to traditional HDD storage, its reduced power draw, increased density, and superior performance make a strong argument for integrating the technology into future datacenters. VMmark 2.5 gives us the ability to look at the larger picture and make an informed decision across a wide variety of today's concerns.