Reality intrudes, however. Getting a bunch of storage vendors to loan you their expensive arrays — all at the same time — is almost impossible, unless you’ve got a very big transaction to leverage.
Finding the time and space to do the testing is another issue: it’s rare that someone with the right skills has the luxury of spending several weeks on array testing.
However, things worked out well for Jay Scheponik of JKS Consulting, Inc. He was contracted to do just that: put up a raft of comparable storage solutions, and see how they performed head-to-head.
Not only was he able to evaluate the usual external array suspects, but he was also able to test newer hyperconverged products, like VSAN.
Needless to say, we were very interested in his findings. I was lucky enough to get Jay on the phone to ask a few questions.
Jay, being asked to cross-compare a number of popular storage solutions sounds like a fun gig. How did this come about?
My client, a very large firm that buys a lot of storage, wanted a head-to-head evaluation. The immediate use case was VDI, but they were also interested in other use cases beyond that, and my testing reflected that.
I had the background (VCDX #193), so I got the job.
You were able to evaluate a large number of products: hybrid and all-flash arrays, as well as products like VSAN. How were you able to round up so many products?
Well, as I mentioned, my client’s firm is very large and buys a lot of storage. I think we were able to get everyone’s attention. It still took a lot of planning and effort to get everything lined up. It wasn’t an easy task.
Tell me a bit about the performance profiles you used.
We fired up multiple IOmeter instances in our test environment: for example, 10 managers, 20 workers, and 80 VMDKs, each VMDK being 4 GB, using a low per-VMDK outstanding I/O (OIO) setting of 1. We cared more about response times than about high IOPS numbers. It was a heavy load, to be sure.
Here are the tests we ran:
35% read / 65% write — 64K blocks — mostly sequential
80% read / 20% write — 128K blocks — mostly sequential
80% read / 20% write — 256K blocks — mostly sequential
10% read / 90% write — 4K blocks — random
35% read / 65% write — 4K blocks — random
50% read / 50% write — 4K blocks — random
80% read / 20% write — 4K blocks — random
synthetic VDI mix
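As a rough illustration, the topology and access specifications above can be sketched in a few lines of Python. This is not Jay’s actual IOmeter configuration (the .icf files weren’t published); the names and structure here are illustrative, but the numbers come straight from the description above. It also makes visible why the load was heavy despite the low per-VMDK OIO setting:

```python
# Illustrative sketch of the test matrix described above.
# The real IOmeter config files were not published; this only
# restates the workload parameters from the interview.

VMDKS = 80            # worker targets: 4 GB VMDKs
OIO_PER_VMDK = 1      # low per-target outstanding I/O

# (read %, write %, block size in KB, access pattern) per test
access_specs = [
    (35, 65,  64, "mostly sequential"),
    (80, 20, 128, "mostly sequential"),
    (80, 20, 256, "mostly sequential"),
    (10, 90,   4, "random"),
    (35, 65,   4, "random"),
    (50, 50,   4, "random"),
    (80, 20,   4, "random"),
]

for rd, wr, blk, pattern in access_specs:
    assert rd + wr == 100          # sanity-check each mix
    print(f"{rd}% read / {wr}% write, {blk}K blocks, {pattern}")

# Even at OIO=1 per VMDK, cluster-wide concurrency adds up:
print("aggregate outstanding I/O:", VMDKS * OIO_PER_VMDK)  # 80
```

The point of the low OIO setting is that latency, not peak IOPS, dominates: 80 concurrent outstanding I/Os across the cluster is plenty of pressure, while keeping each stream latency-sensitive.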
Sounds like a pretty thorough battery of tests. I’m guessing you can’t publish the results for a variety of reasons.
Yes. I did the work on behalf of my client, so it’s their property. Also, there are all sorts of vendor restrictions on publishing the results of performance testing.
Well, without getting into specifics, anything you could share about VSAN performance?
In a nutshell, surprisingly good.
Our VSAN test rig was relatively modest: four nodes, 24 cores per node, one 400 GB flash cache drive, and seven 1.2TB 10k SAS drives.
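For context, some quick back-of-the-envelope arithmetic on that rig (my figures, not from Jay’s report) shows the raw capacity and the cache-to-capacity ratio implied by that hardware:

```python
# Back-of-the-envelope sizing for the 4-node hybrid VSAN test rig
# described above (illustrative arithmetic only).

nodes = 4
cache_gb_per_node = 400    # one flash cache device per node
capacity_drives = 7        # 1.2 TB 10k SAS drives per node
drive_tb = 1.2

raw_capacity_tb = nodes * capacity_drives * drive_tb
total_cache_gb = nodes * cache_gb_per_node
cache_ratio = total_cache_gb / (raw_capacity_tb * 1000)

print(f"raw capacity: {raw_capacity_tb:.1f} TB")    # 33.6 TB
print(f"flash cache:  {total_cache_gb} GB")         # 1600 GB
print(f"cache / raw capacity: {cache_ratio:.1%}")   # 4.8%
```

So this was a genuinely modest configuration: roughly 33.6 TB of raw spinning capacity fronted by 1.6 TB of flash cache, a shade under 5% of raw capacity.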
We tested VSAN twice, once on 5.5 and then again on 6.0, and I observed serious performance improvements in 6.0 compared to 5.5.
The hybrid VSAN config did very well against all the hybrid storage arrays we tested, offering better performance most of the time. In addition, VSAN was significantly faster than the other hyperconverged system we tested.
The surprising part for me was comparing a hybrid VSAN config against a few all-flash arrays.
A lot of the time, VSAN was either faster or within 1 msec in response time. In a few of the tests the all-flash arrays did better, but hybrid VSAN performance was very, very respectable, especially given the cost differential.
Installation, usability, etc.?
Compared to dealing with a slew of external storage arrays, getting VSAN up and running was relatively easy, once I had made it through the HCL process.
Any final thoughts?
Prior to doing this testing, I would have had a hard time believing that a product like hybrid VSAN could not only perform better than the hybrid arrays, but also hang with the all-flash crowd.
It was good to see the marketing match the reality.
Not everyone has the chance to line up a raft of storage solutions side by side and put them through their paces. But when that does happen, it’s nice to see that VSAN does as well as we’d expect.
Thanks to Jay for sharing his story!