Virtual SAN is a storage solution that is fully integrated with VMware vSphere. Virtual SAN leverages flash technology to cache data, improving access times to and from the magnetic disks. We used VMware’s VMmark 2.5 benchmark to evaluate the performance of running a variety of tier-1 application workloads together on Virtual SAN 6.0.
VMmark is a multi-host virtualization benchmark that uses varied application workloads and common datacenter operations to model the demands of the datacenter. Each VMmark tile contains a set of virtual machines running diverse application workloads as a unit of load. For more details, see the VMmark 2.5 overview.
VMmark 2.5 requires two datastores for its Storage vMotion workload, but Virtual SAN creates only a single datastore. To provide the secondary datastore, a Red Hat Enterprise Linux 7 virtual machine was created on a separate host to act as an iSCSI target, using Linux-IO Target (LIO).
| Configuration | Details |
|---|---|
| Systems Under Test | 8x Supermicro SuperStorage SSG-2027R-AR24 servers |
| CPUs (per server) | 2x Intel Xeon E5-2670 v2 @ 2.50 GHz |
| Memory (per server) | |
| Hypervisor | VMware vSphere 5.5 U2 and vSphere 6.0 |
| Local Storage (per server) | 3x 400GB Intel SSDSC2BA40 SSDs, 12x 900GB 10,000 RPM WD Xe SAS drives |
| Benchmark | VMware VMmark 2.5.2 |
Storage performance is often measured in IOPS, or I/Os per second. Virtual SAN is a storage technology, so it is worthwhile to look at how many IOPS VMmark is generating. The most disk-intensive workloads within VMmark are DVD Store 2 (also known as DS2), an E-Commerce workload, and the Microsoft Exchange 2007 mail server workload. The graphs below show the I/O profiles for these workloads, which would be identical regardless of storage type.
The DS2 database virtual machine shows a fairly balanced I/O profile of approximately 55% reads and 45% writes.
Microsoft Exchange, on the other hand, has a very write-intensive load, as shown below.
Exchange sees nearly 95% writes, so the main benefit the SSDs provide is to serve as a write buffer.
The remaining application workloads have minimal disk I/Os, but do exert CPU and networking loads on the system.
VMmark measures both the total throughput of each workload as well as the response time. The application workloads consist of Exchange, Olio (a Java workload that simulates Web 2.0 applications and measures their performance), and DVD Store 2. All workloads are driven at a fixed throughput level. Each set of workloads constitutes a tile, and the load is increased by running multiple tiles. With Virtual SAN 6.0, we could run up to 40 tiles with acceptable quality of service (QoS). Let’s look at how each workload performed as the number of tiles increased.
DVD Store

There are three webserver frontends per DVD Store tile in VMmark, each loaded with a different profile. One is a steady-state workload that runs at a set request rate throughout the test, while the other two are bursty in nature and run a 3-minute and a 4-minute load profile every 5 minutes. DVD Store throughput, measured in orders per minute, varies with the load on the server and decreases once the server becomes saturated.
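The mix of steady and bursty drivers can be sketched as follows. This is a minimal model, assuming each burst runs at the start of every 5-minute window and at a flat rate; the request rates themselves are hypothetical, since the source only gives the burst durations and period.

```python
STEADY_RATE = 100  # hypothetical orders/min for the steady-state driver
BURST_RATE = 100   # hypothetical orders/min while a bursty driver is active

def bursty_active(t_min: float, burst_len: int, period: int = 5) -> bool:
    """Assume each burst occupies the first burst_len minutes of every
    period-minute window (an assumption; alignment is not specified)."""
    return (t_min % period) < burst_len

def tile_request_rate(t_min: float) -> float:
    """Aggregate request rate of one tile's three webserver drivers."""
    rate = STEADY_RATE            # driver 1: steady state, always on
    if bursty_active(t_min, 3):   # driver 2: 3-minute bursts every 5 minutes
        rate += BURST_RATE
    if bursty_active(t_min, 4):   # driver 3: 4-minute bursts every 5 minutes
        rate += BURST_RATE
    return rate

print(tile_request_rate(1.0))  # 300: both bursty drivers active
print(tile_request_rate(3.5))  # 200: only the 4-minute driver still bursting
print(tile_request_rate(4.5))  # 100: only the steady driver
```

The overlap of the two burst windows is what makes per-tile load swing over each 5-minute period rather than stay flat.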
For this configuration, maximum throughput was achieved at 34 tiles, as shown by the graph above. As the hosts become saturated, the throughput of each DVD Store tile falls, resulting in a total throughput decrease of 4% at 36 tiles. However, the benchmark still passes QoS at 40 tiles.
Olio and Exchange
Unlike DVD Store, the Olio and Exchange workloads operate at a constant throughput regardless of server load, shown in the table below:
| Workload | Load per Tile |
|---|---|
| Exchange | 320-330 Sendmail actions per minute |
| Olio | 4500-4600 operations per minute |
At 40 tiles, the VMmark clients are sending over 12,000 mail messages per minute and the Olio webservers are serving around 180,000 requests per minute.
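Those cluster-wide figures follow directly from the per-tile rates in the table multiplied by the tile count:

```python
tiles = 40
sendmail_per_tile = (320, 330)   # Exchange: Sendmail actions/min per tile
olio_per_tile = (4500, 4600)     # Olio: operations/min per tile

mail_total = tuple(rate * tiles for rate in sendmail_per_tile)
olio_total = tuple(rate * tiles for rate in olio_per_tile)

print(mail_total)  # (12800, 13200) -> over 12,000 mail messages per minute
print(olio_total)  # (180000, 184000) -> ~180,000 requests per minute
```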
As the load increases, the response time of Exchange and Olio increases, which makes them a good demonstration of the end-user experience at various load levels. A response time of over 500 milliseconds is considered to be an unacceptable user experience.
As we saw with DVD Store, performance begins to dramatically change after 34 tiles as the cluster becomes saturated. This is mostly seen in the Exchange response time. At 40 tiles, the response time is over 300 milliseconds for the mailserver workload, which is still within the 500 millisecond threshold for a good user experience. Olio has a smaller increase in response time, since it is more processor intensive. Exchange has a dependence on both CPU and disk performance.
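The QoS pass/fail rule described above can be expressed as a one-line check: a run passes only if every workload's response time stays under the 500-millisecond threshold. The helper below is a sketch (the function name and the Olio value are illustrative, not from the benchmark harness); the Exchange figure reflects the roughly 300 ms observed at 40 tiles.

```python
QOS_LIMIT_MS = 500  # VMmark response-time threshold for acceptable QoS

def passes_qos(response_times_ms):
    """A run passes only if every workload stays under the limit."""
    return all(rt < QOS_LIMIT_MS for rt in response_times_ms.values())

# Exchange is over 300 ms at 40 tiles but still under the limit
# (the Olio value here is illustrative):
print(passes_qos({"exchange": 310, "olio": 150}))  # True
print(passes_qos({"exchange": 520, "olio": 150}))  # False
```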
Looking at Virtual SAN performance, we can get a picture of how much I/O is served by the storage at these load levels. Reads average around 2,000 I/Os per second:
The Read Cache hit rate is 98-99% on all the hosts, so most of these reads are being serviced by the SSDs. Write performance is a bit more varied.
We see a range of 5,000-10,000 write IOPS per node due to the write-intensive Exchange workload. Storage is nowhere close to saturation at these load levels. The magnetic disks are not seeing much more than 100 I/Os per second, while the SSDs are seeing about 3,000-6,000 I/Os per second. These disks should be able to handle at least 10x this load level. The real bottleneck is in CPU usage.
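The 98-99% read cache hit rate explains why the magnetic disks see so few reads: only the cache misses reach them. A quick back-of-the-envelope check, using the conservative end of the reported hit rate:

```python
read_iops = 2000   # average read IOPS from the charts
hit_rate = 0.98    # conservative end of the reported 98-99% hit rate

ssd_reads = read_iops * hit_rate         # served from the flash read cache
hdd_reads = read_iops * (1 - hit_rate)   # misses that reach magnetic disk

print(round(ssd_reads))  # 1960
print(round(hdd_reads))  # 40
```

Roughly 40 read IOPS reaching the magnetic disks is consistent with the "not much more than 100 I/Os per second" observed there, with the remainder attributable to write destaging from the SSD buffer.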
Looking at the CPU usage of the cluster, we can see that the usage levels out at 36 tiles at about 84% used. There is still some headroom, which explains why the Olio response times are still very acceptable.
As mentioned above, Exchange performance is dependent on both CPU and storage. The additional CPU requirements that Virtual SAN imposes on disk I/O cause Exchange to be more sensitive to server load.
Performance Improvements in Virtual SAN 6.0 (vs. Virtual SAN 5.5)
The Virtual SAN 6.0 release incorporates many improvements to CPU efficiency, as well as other improvements. This translates to increased performance for VMmark.
VMmark performance increased substantially when we ran the tests with Virtual SAN 6.0 as opposed to Virtual SAN 5.5. The Virtual SAN 5.5 tests failed to pass QoS beyond 30 tiles, meaning that at least one workload failed to meet the application latency requirement. During the Virtual SAN 5.5 32-tile tests, one or more Exchange clients would report a Sendmail latency of over 500ms, which constitutes a QoS failure. Virtual SAN 6.0 passed QoS at up to 40 tiles.
Not only did Virtual SAN 6.0 support more virtual machines, but the throughput of the workloads increased as well. Comparing VMmark scores (normalized to the 20-tile Virtual SAN 5.5 result) shows the performance improvement of Virtual SAN 6.0.
Virtual SAN 6.0 achieved a performance improvement of 24% while supporting 33% more virtual machines.
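Both headline numbers fall out of simple ratios. The VM figure follows from the tile counts (each tile contains a fixed set of VMs, so 40 tiles versus 30 means a third more VMs); the score calculation is shown with a hypothetical pair of scores, since the source reports only the resulting 24% improvement:

```python
# 40 tiles passing QoS on 6.0 vs 30 tiles on 5.5:
tiles_60, tiles_55 = 40, 30
more_vms = (tiles_60 / tiles_55 - 1) * 100
print(round(more_vms))  # 33 -> "33% more virtual machines"

# Score normalization (these scores are hypothetical illustrations):
baseline_55 = 1.00  # 20-tile Virtual SAN 5.5 result, the normalization point
score_60 = 1.24     # hypothetical peak Virtual SAN 6.0 score
improvement = (score_60 / baseline_55 - 1) * 100
print(round(improvement))  # 24
```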
Using VMmark, we are able to run a variety of workloads to simulate applications in a production environment. We were able to demonstrate that Virtual SAN is capable of achieving good performance running heterogeneous real-world applications. The cluster of 8 hosts presented here shows good performance in VMmark through 40 tiles: over 12,000 mail messages per minute sent through Exchange, around 180,000 requests per minute served by the Olio webservers, and over 200,000 orders per minute processed on the DVD Store database. Additionally, we were able to measure substantial performance improvements in Virtual SAN 6.0 over Virtual SAN 5.5.