Many factors go into an informed IT purchasing decision. Priorities vary from person to person, but relative performance matters greatly to many IT buyers weighing alternatives. If the differences are significant and meaningful, they can certainly tilt the scale.
The usual baseline is the core question: is the performance fast enough for what I want to do?
But answering that question is not as easy as it sounds. Real-world workloads can be notoriously difficult to characterize and size reliably. And it’s good to have a comfortable margin of performance headroom in case you’ve guessed wrong, or, more likely, the workloads have changed. Nobody likes to hit a wall.
But there’s a deeper level as well. A poorly performing product can require more hardware, licenses and environmentals to match the results of a much better-performing product. While not everyone may care about absolute performance, almost everyone cares about having to pay much more to get equivalent work done.
Unfortunately, there is a paucity of good head-to-head performance testing in the marketplace today. Ideally, that would be done by an independent third party, but while we’re waiting for that, we’ve done our own.
To be clear, this isn’t about vendors beating each other over the head with benchmarks; it’s about helping IT professionals make informed choices. More data is better.
We’ve already published extensive performance testing for VSAN, but nothing in terms of a direct comparison. We wanted to correct that situation, and here’s what we’re attempting to do.
The Head-to-Head Test Bed
We wanted to configure two identical four-node clusters — one running Nutanix, and one running native vSphere with VSAN enabled. Both are popular hyperconverged solutions in the marketplace today.
Our assumption going in was that VSAN would have a significant performance and resource efficiency advantage due to its architecture: it’s built into the hypervisor rather than layered on top. But it’s one thing to claim something, and another thing entirely to show verifiable evidence.
We took pains to configure both test beds to be precisely identical: exact same hardware, exact same drivers and firmware, and so on. We constructed a set of synthetic tests using VDbench to drive the exact same workloads, under the exact same conditions. We did our best to pick synthetic workload profiles that matched what we see in many customer environments.
Our only wrinkle is that we had to use vSphere 5.5U2 on the Nutanix test bed, as vSphere 6.0 support had not yet been announced. Our discussions with VMware engineers led to the conclusion that this wouldn’t result in meaningful differences in the results. And, of course, when vSphere 6.0 support is announced, we’ll switch over.
We ran and re-ran the tests, and came up with some pretty impressive results that should be easily reproducible by anyone with the time and resources. We think it’s the kind of useful, actionable data that most IT pros would want to see.
Details, Details …
The test bed is as follows:
4 x Dell XC630-10 for Nutanix
4 x Dell R630 for vSphere with VSAN enabled
2 x Intel E5-2690 v3 per server
12 cores @ 2.6GHz per CPU
256 GB of RAM per server
Dual 10Gb Ethernet per server
1 x PERC H730 Mini per server
2 x 400GB Intel S3700 SSDs per server
6 x 1TB 7200 RPM NL-SAS drives (Seagate) per server
Test software:
VDbench 5.04.01
Nutanix 4.1.2.1 on VMware vSphere 5.5U2 (Nutanix cluster)
vSphere 6.0 with Virtual SAN enabled (VSAN cluster)
Ubuntu Linux VMs, paravirtualization enabled, deadline scheduler (default)
All driver and firmware versions as supplied by their respective vendors as of May 15, 2015
Nutanix CVMs were reconfigured from 8 vCPUs and 16 GB RAM to 10 vCPUs and 32 GB RAM, per vendor recommendations, to avoid the 100% saturation observed in initial testing. All other configuration parameters were left at vendor-supplied defaults.
Test layouts:
32 or 64 concurrent VMs depending on test
10 VMDKs per VM, 10 GB per VMDK, 2 outstanding IOs per VMDK
30 minutes warmup time, 60 minutes of steady-state measured results (see the parameter-file sketch below)
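To make that layout concrete, here is a minimal sketch of a per-VM VDbench parameter file consistent with the settings above. It is illustrative rather than a copy of our actual test files: the definition names and device paths are hypothetical, and the workload line shows just one of the profiles listed in the next section.

* Per-VM VDbench parameter file (sketch; names and device paths are hypothetical)
* One storage definition per 10 GB VMDK, with 2 outstanding I/Os (threads) each
sd=sd1,lun=/dev/sdb,size=10g,threads=2,openflags=o_direct
sd=sd2,lun=/dev/sdc,size=10g,threads=2,openflags=o_direct
* (sd3 through sd9 defined the same way, one per VMDK)
sd=sd10,lun=/dev/sdk,size=10g,threads=2,openflags=o_direct
* Example workload: 60% random, 50/50 mix of 4K and 8K transfers, 35% read,
* confined to the first 20% of each VMDK (the 20% working set case)
wd=wd1,sd=sd*,seekpct=60,xfersize=(4k,50,8k,50),rdpct=35,range=(0,20)
* 90 minutes total: the first 30 minutes are excluded as warmup, leaving
* 60 minutes of measured steady state
rd=rd1,wd=wd1,iorate=max,elapsed=5400,warmup=1800,interval=30

Each of the 32 (or 64) VMs ran a file like this concurrently, driving all 10 of its VMDKs at once.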
Tests run:
Note: “working set” is the percentage of the total data set being actively accessed. It’s a measure of cache-friendliness, with lower values being more cache-friendly. The vast majority of data center workloads are cache-friendly, but a few are not. For example, in the 64 VM test, each VM has 10 VMDKs of 10GB each, for 6.4 TB of data in total; a working set of 20% means that 1.28 TB of that data is being actively accessed. (One way to express a working set in VDbench is shown in the sketch after the list.)
32 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 35% read, 65% write, 20% working set
32 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 35% read, 65% write, 50% working set
32 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 35% read, 65% write, 100% working set
32 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 65% read, 35% write, 20% working set
32 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 65% read, 35% write, 50% working set
32 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 65% read, 35% write, 100% working set
64 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 35% read, 65% write, 20% working set
64 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 35% read, 65% write, 50% working set
64 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 35% read, 65% write, 100% working set
64 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 65% read, 35% write, 20% working set
64 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 65% read, 35% write, 50% working set
64 VMs, 60% random, 50% 4K blocks, 50% 8K blocks, 65% read, 35% write, 100% working set
32 VMs, 100% random, 4K blocks, 100% write, 20% working set
32 VMs, 100% random, 4K blocks, 100% write, 50% working set
32 VMs, 100% random, 4K blocks, 100% write, 100% working set
32 VMs, 100% random, 8K blocks, 67% reads, 33% writes, 20% working set
32 VMs, 100% random, 8K blocks, 67% reads, 33% writes, 50% working set
32 VMs, 100% random, 8K blocks, 67% reads, 33% writes, 100% working set
32 VMs, 100% sequential, 32K blocks, 100% writes, 20% working set
32 VMs, 100% sequential, 32K blocks, 100% writes, 50% working set
32 VMs, 100% sequential, 32K blocks, 100% writes, 100% working set
32 VMs, 100% sequential, 256K blocks, 70% reads, 30% writes, 20% working set
32 VMs, 100% sequential, 256K blocks, 70% reads, 30% writes, 50% working set
32 VMs, 100% sequential, 256K blocks, 70% reads, 30% writes, 100% working set
32 VMs, 100% sequential, 32K blocks, 100% reads, 20% working set
32 VMs, 100% sequential, 32K blocks, 100% reads, 50% working set
32 VMs, 100% sequential, 32K blocks, 100% reads, 100% working set
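For readers who want to map the rows above directly onto VDbench parameters: aside from the VM count, each row varies only the randomness (seekpct), the block size (xfersize), the read percentage (rdpct), and the working set (range). As one more hypothetical example, the 256K sequential, 70% read, 50% working set row might be expressed like this:

* Sketch only: 100% sequential, 256K transfers, 70% reads, 50% working set
wd=seq256k,sd=sd*,seekpct=0,xfersize=256k,rdpct=70,range=(0,50)
rd=seq256k_ws50,wd=seq256k,iorate=max,elapsed=5400,warmup=1800,interval=30

Sweeping the range value across 20, 50, and 100 yields the three working-set variants of each row.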
All in all, a very useful set of head-to-head synthetic tests — something for everyone.
But There Was A Small Problem
When we started testing, we reviewed the Nutanix EULA to see if there were any restrictions against publishing performance testing results. There were none, so we proceeded with testing, reasonably confident that we could publish our results.
On or about May 15th, Nutanix changed their EULA to read as follows:
“You must not disclose the results of testing, benchmarking or other performance or evaluation information related to the Software or the product to any third party without the prior written consent of Nutanix”
In all fairness, VMware has a similar, but softer, provision in its vSphere EULA:
“You may use the Software to conduct internal performance testing and benchmarking studies. You may only publish or otherwise distribute the results of such studies to third parties as follows: (a) if with respect to VMware’s Workstation or Fusion products, only if You provide a copy of Your study to [email protected] prior to distribution; (b) if with respect to any other Software, only if VMware has reviewed and approved of the methodology, assumptions and other parameters of the study (please contact VMware at [email protected] to request such review and approval) prior to such publication and distribution.”
The next logical step?
We’ve tried to address a gap in useful comparison information by investing our own resources in two identical test beds. We’ve run a wide range of synthetic performance tests to characterize relative performance. We think many people will find these results extremely useful in helping them to make informed buying choices.
Now we’ve reached out to Nutanix and requested permission to publish our results. We hope they agree.
Stay tuned, folks!