There’s been a lot of press coverage recently on the subject of testing storage performance on newer hyperconverged architectures.
Our own experience is that there are big differences in how a given hardware configuration will perform, depending on whose hyperconverged software stack you’re using.
If performance is important to you, you should know what you’re getting before you buy.
With regard to VSAN, we’ve continually published the results of our own internal testing, with enough detail that anyone could reproduce the results if desired (scroll to the bottom of this page for a sampling). We’ve also supported independent reviewers such as StorageReview.com in sharing their own unbiased results.
That being said, we’d like to do more — much more.
Wouldn’t it be great if anyone could easily do their own head-to-head testing?
To help customers make better informed choices, we’re introducing a free new tool that makes storage performance testing on hyperconverged clusters much, much easier.
We call it HCIbench, as in “hyperconverged infrastructure benchmark”. It’s essentially an automation wrapper around the popular and proven Vdbench open source benchmark tool that makes it far easier to automate testing across a hyperconverged cluster.
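Since HCIbench drives Vdbench underneath, it helps to see what a Vdbench workload definition looks like. Below is a minimal, hypothetical parameter file sketch; the device path, block size, and read/write mix are illustrative assumptions, not HCIbench defaults:

```
# Storage definition: the raw device under test (path is illustrative)
sd=sd1,lun=/dev/sdb,threads=8

# Workload definition: 4 KB transfers, 70% reads, 100% random seeks
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100

# Run definition: uncapped I/O rate for 10 minutes, reporting every 5 seconds
rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=5
```

A file like this is launched with `./vdbench -f workload.parm`. HCIbench’s value, per the description above, is automating runs like this across guest VMs on every host in the cluster and collecting the results for you.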
The people who’ve tried it tell us that it’s a huge step forward in simplicity and repeatability. Easier testing = more testing + better testing.
Let’s face it — enterprise storage is a big investment.
And there are big, meaningful differences in how different storage products perform when you put them to the test. Higher performing solutions can handle more workloads, more easily accommodate growth, and generate fewer unpleasant performance problems to deal with.
Great performing solutions can save both money and time.
Unfortunately, using publicly available information to compare different alternatives is a frustrating exercise at best. Although we at VMware publish VSAN results frequently, that’s not the norm. If you’d like a quick list of our published results to date, please scroll to the bottom of this post.
The lack of directly comparable performance testing information is not helpful if you have an important decision to make.
The solution? Do your own head-to-head testing. Investing in your own storage performance testing can help you figure out what’s the best product for you — and also avoid nasty surprises later on down the road.
And in this post, we’ll give you the basic dos and don’ts you’ll need to be successful in doing your own storage performance testing.
Some modifications were made to the EVO:RAIL configuration and it is up to you to figure out what happened! You will have 60 minutes to complete the three tasks in the challenge lab.
Are you up for the EVO:RAIL Challenge?
If you’d like to take a shot at the EVO:RAIL Hands-on Lab Challenge and potentially win a free conference pass to VMworld 2015 or some free EVO:RAIL swag, simply take the lab. There is plenty of time to participate in the challenge which will run through August 20, 2015 with new winners selected monthly.
- It is open to everyone, everywhere (except VMware employees).
- Competition starts May 1, 2015 and ends on August 20, 2015. Winners will be announced on or about the 1st of each month, and we will notify each winner by email.
- You can take the challenge each month but you can only win once.
To learn more about the EVO:RAIL Hands-on Lab Challenge, including the prizes, rules, and terms and conditions, go to: https://www.vmware.com/promotions/evorail-challenge
To learn more about VMware EVO:RAIL, please visit us at: http://www.vmware.com/products/evorail
I have now been on a steady diet of hyperconverged customer conversations for the last six months. That’s my nature — I learn about something by talking to the people who are actually doing it …
With all due respect to industry analysts, I’ve now been able to create a fairly accurate model of what most people are looking for — and it’s not exactly the picture that the analysts are painting.
Sampling bias aside, I’m finding that the more I talk to people, the more the observed customer shopping lists tend to converge into a very short, understandable agenda.
Keep in mind, the hyperconverged segment is moving fast, with many players and interesting choices. What was new and innovative a few years ago is simply table stakes today.
There have now been enough real-world experiences that there are more than a few fully informed buyers out there. And they certainly have strong opinions!
For those of you who have been following this thread for a while, you know we’re in the midst of head-to-head performance testing on two identical clusters: one running VSAN, the other running Nutanix. We recently updated the Nutanix cluster to vSphere 6 and 4.1.3; however, no performance differences have been observed since the change.
Up to now, we’ve only been able to share our VSAN results. That’s because Nutanix recently changed their EULA to prohibit anyone from publishing any test results. As a result, it’s very hard to find any sort of reasonable Nutanix performance information. That’s unfortunate.
By comparison, VMware not only regularly publishes the results of our own tests, but also frequently approves such publication by others once we’ve had a chance to review the methodology; simply submit yours to firstname.lastname@example.org.
Since the results are so interesting, we’re continuing to test!
As we start to move from synthetic workloads to specific application workloads, we recently finished a series of head-to-head Jetstress testing against our two identical clusters. Previous results can be found here and here.
If you’re not familiar, Jetstress is a popular Microsoft tool for testing the storage performance of Exchange clusters. A busy Exchange environment can present a demanding IO profile to a storage subsystem, so it’s an important tool in the testing arsenal.
TL;DR: our basic 4-node VSAN configuration handled 1000 heavy Exchange users with flying colors, with ample performance to spare. We can’t share how the identical Nutanix cluster did, but it’s certainly a worthwhile test if you have the time and inclination.
That being said, there were no surprises — each product performed (or didn’t perform) as we would expect based on both prior testing as well as customer anecdotes.
Now, on to the details!