
VSAN vs Nutanix Head-to-Head Performance Testing — Part 2

In a previous post, we began to present our case for why we believe VMware Virtual SAN offers significantly higher performance than an identically configured Nutanix cluster. We thought IT professionals would find it very useful to see a side-by-side comparison across a wide range of synthetic workloads.

The complete details of the test beds, testing methodology and workloads can be found in the previous post. We have also published our findings regarding Nutanix vs. VSAN pricing.

Towards the end of our testing, however, Nutanix changed their EULA to read as follows:

“You must not disclose the results of testing, benchmarking or other performance or evaluation information related to the Software or the product to any third party without the prior written consent of Nutanix”

So we sent along the details of our test methodologies (plus our Nutanix results) to request written permission as per their EULA.

Nutanix declined to consent to our publishing the head-to-head results. In the spirit of transparency, a sampling of the VSAN-only results appears below.

Interested parties are invited to download VDbench and conduct their own testing.  Also, you may be interested in our previous performance paper.
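For readers who would like to construct a similar test, a VDbench run is driven by a plain-text parameter file. The sketch below only illustrates what one of the mixed random workloads might look like; the device path, thread count, and the use of range= to approximate a working set are our assumptions for illustration, not the actual parameter files used in this testing.

* Illustrative VDbench parameter file -- not the actual test definition
* sd = storage definition, wd = workload definition, rd = run definition
sd=sd1,lun=/dev/sdb,threads=2
* 35% reads, 60% random, 50/50 mix of 4K and 8K transfers;
* range= restricts IO to the first 20% of the device to approximate a 20% working set
wd=wd1,sd=sd1,rdpct=35,seekpct=60,xfersize=(4k,50,8k,50),range=(0,20)
* run at the maximum achievable IO rate for 10 minutes, reporting every 5 seconds
rd=run1,wd=wd1,iorate=max,elapsed=600,interval=5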

Explanation of terms:

IOPS = input/output operations per second — a basic measure of storage work done
MB/s = megabytes per second — a basic measure of data transferred
OIO = outstanding IOs queued per VMDK. As OIOs increase, IOPS increase but latency increases as well. As OIOs decrease, IOPS decrease but latency improves. Moderate values are usually best (see the sketch after this list).
Read latency = how long it takes an average read to be serviced, expressed in milliseconds.  Bigger blocks usually take longer.
Write latency = how long it takes an average write to be serviced, expressed in milliseconds.  Bigger blocks usually take longer.
WS = working set, a measure of cache friendliness that is useful when evaluating hybrid storage architectures.
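The OIO/IOPS/latency trade-off described above is essentially Little's Law: the number of IOs in flight equals IOPS multiplied by average latency. A minimal sketch with purely illustrative numbers (not measured results):

# Little's Law for storage queues: outstanding IOs = IOPS x average latency.
# Illustrative numbers only -- not measured results.

def iops_from_oio(total_oio, avg_latency_sec):
    # IOPS a system can sustain at a given queue depth and average latency
    return total_oio / avg_latency_sec

# 32 VMs x 10 VMDKs x OIO=2 per VMDK = 640 IOs in flight cluster-wide
total_oio = 32 * 10 * 2

print(iops_from_oio(total_oio, 0.008))   # ~80,000 IOPS at 8 ms average latency
print(iops_from_oio(total_oio, 0.002))   # ~320,000 IOPS if average latency drops to 2 ms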

Example: in our 32 VM test, each VM has 10 VMDKs, each 10 GB, so 3.2 TB total. However, it is extremely unlikely that every application is consistently accessing every byte of data, so we express a “working set” (WS) as a percentage of the total data set size to indicate how cache friendly a workload might be. A WS of 10% is considered cache friendly, as only 320 GB would be consistently accessed. A WS of 100% is considered extremely cache unfriendly, as the entire 3.2 TB would be consistently accessed.
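Spelled out as a quick calculation (our own restatement of the figures above):

# Working-set arithmetic for the 32 VM configuration described above.
vms, vmdks_per_vm, gb_per_vmdk = 32, 10, 10

total_gb = vms * vmdks_per_vm * gb_per_vmdk   # 3200 GB = 3.2 TB total data set
print(total_gb * 0.10)                        # 10% WS  -> 320 GB (cache friendly)
print(total_gb * 1.00)                        # 100% WS -> 3.2 TB (cache unfriendly)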

The vast majority of data center workloads are very cache friendly, but not all are. Hybrid architectures (flash for caching, disks for capacity) are a good fit for cache-friendly workloads; very cache-unfriendly workloads are strong candidates for all-flash configurations.

Editorial note: keep in mind that these are intentionally demanding workloads, designed to show how performance holds up under stress.

32 VMs, 10 VMDKs each, 10 GB per VMDK, OIO=2
60% random, 50% 4K / 50% 8K (VSAN results)

                    IOPS      MB/s    Read latency (msec)    Write latency (msec)
35% read, 20% WS    84439     495     2                      10
35% read, 50% WS    38516     226     1                      25
65% read, 20% WS    127898    749     2                      9
65% read, 50% WS    65420     383     1                      25
64 VMs, 10 VMDKs each, 10 GB per VMDK, OIO=2
60% random, 50% 4K / 50% 8K (VSAN results)

                    IOPS      MB/s    Read latency (msec)    Write latency (msec)
35% read, 25% WS    36650     215     1                      53
65% read, 10% WS    131307    769     4                      19
65% read, 25% WS    65242     382     1                      53
32 VMs, 10 VMDKs each, 10 GB per VMDK, OIO=2 (VSAN results)

                                        IOPS      MB/s    Read latency (msec)    Write latency (msec)
100% random, 100% write, 4K, 50% WS     31250     122     0                      20
100% random, 100% write, 4K, 100% WS    19941     78      0                      32
100% random, 67% read, 8K, 50% WS       56770     444     1                      31
100% random, 100% read, 4K, 50% WS      443334    1732    1                      n/a
100% seq, 100% write, 32K, 50% WS       11269     352     0                      56
100% seq, 100% write, 32K, 100% WS      11539     361     0                      55
100% seq, 100% read, 32K, 50% WS        92223     2882    6                      n/a
100% seq, 100% read, 32K, 100% WS       51940     1623    12                     n/a

(n/a = not applicable; the 100% read workloads issue no writes)
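As a quick consistency check on the tables above (our own arithmetic, not part of the original test output), throughput is simply IOPS multiplied by the average transfer size; the published MB/s figures line up if megabytes are counted as 1024 x 1024 bytes:

# Consistency check: MB/s ~= IOPS * average block size.
# IOPS values are taken from the tables above; the check itself is our own arithmetic.
MIB = 1024 * 1024   # the tables appear to report MB/s in binary megabytes

def mbps(iops, avg_block_bytes):
    return iops * avg_block_bytes / MIB

print(round(mbps(84439, 6 * 1024)))    # 50/50 mix of 4K and 8K -> 6 KiB average; ~495 MB/s
print(round(mbps(443334, 4 * 1024)))   # 100% random read, 4K, 50% WS: ~1732 MB/s
print(round(mbps(92223, 32 * 1024)))   # 100% seq read, 32K, 50% WS: ~2882 MB/s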