There’s been a lot in the press recently around the subject of testing storage performance on newer hyperconverged architectures.
Our own experience is that there are big differences in how a given hardware configuration will perform, depending on whose hyperconverged software stack you’re using.
If performance is important to you, you should know what you’re getting before you buy.
With regard to VSAN, we’ve been continually publishing the results of our own internal testing, with enough detail that anyone could reproduce the results if desired (scroll to the bottom of this page for a sampling). We’ve also supported independent reviewers such as StorageReview.com in sharing their own unbiased results.
That being said, we’d like to do more — much more.
Wouldn’t it be great if anyone could easily do their own head-to-head testing?
To help customers make better informed choices, we’re introducing a free new tool that makes storage performance testing on hyperconverged clusters much, much easier.
We call it HCIbench, as in “hyperconverged infrastructure benchmark”. It’s essentially an automation wrapper around the popular and proven open-source Vdbench benchmark tool, making it far easier to run coordinated tests across a hyperconverged cluster.
The people who’ve tried it tell us that it’s a huge step forward in simplicity and repeatability. Easier testing = more testing + better testing. Continue reading
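Under the hood, Vdbench drives I/O from a simple parameter file of storage, workload, and run definitions. As a rough illustration only (the device path and values below are placeholders, not HCIbench’s actual defaults), a 4K random 70/30 read/write profile might look like:

```
* Storage definition: the raw device (or file) to exercise
sd=sd1,lun=/dev/sdb,threads=8

* Workload definition: 4 KB transfers, 70% reads, 100% random seeks
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100

* Run definition: unthrottled I/O for 10 minutes, reporting every 5 seconds
rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=5
```

HCIbench’s value is that it generates and distributes this kind of configuration to worker VMs across the whole cluster, then collects the results, rather than you hand-running Vdbench on each node.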
Let’s face it — enterprise storage is a big investment.
And there are big, meaningful differences in how different storage products perform when you put them to the test. Higher performing solutions can handle more workloads, more easily accommodate growth, and generate fewer unpleasant performance problems to deal with.
Great performing solutions can save both money and time.
Unfortunately, using publicly available information to compare different alternatives is a frustrating exercise at best. Although we at VMware publish VSAN results frequently, that’s not the norm. If you’d like a quick list of our published results to date, please scroll to the bottom of this post.
The lack of directly comparable performance testing information is not helpful if you have an important decision to make.
The solution? Do your own head-to-head testing. Investing in your own storage performance testing can help you figure out what’s the best product for you — and also avoid nasty surprises later on down the road.
And in this post, we’ll give you the basic do’s and don’ts you’ll need to succeed at your own storage performance testing. Continue reading
Our colleagues at StorageReview.com are midway through a great extended review of VSAN 6, worth reading if you’re interested.
In “VMware Virtual SAN Review: Overview and Configuration”, the reviewer notes that “deploying and configuring VSAN is a simple process for those used to working in a virtualized environment, and especially for those that are familiar with vSphere”.
In “VMware Virtual SAN Review: VMmark Performance”, the choice of benchmark is interesting, as VMmark is usually seen as a compute and memory test — however, when considering hyperconverged storage, server resource utilization becomes important. The reviewer was able to load up an impressive 18 tiles on a modest 4-node VSAN configuration, noting that “the overhead of the shared storage component of VSAN didn’t inhibit the overall performance of the cluster” and that “at 18 tiles, we still had CPU resources leftover on all the nodes”.
And in “VMware Virtual SAN Review: Sysbench OLTP Performance“, the reviewer spun up four transactional databases, one per server, and achieved an impressive 2,829 transactions per second, with “plenty of additional CPU headroom as well as some storage I/O headroom for additional activities”.
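For context, a Sysbench OLTP run is typically driven from the command line. The sketch below uses current sysbench 1.x syntax with placeholder host, credentials, and table sizes — the review does not publish its exact invocation, so treat this as illustrative only:

```
# Prepare the test tables, then run the OLTP read/write mix.
# Host, credentials, and sizes are illustrative placeholders.
sysbench oltp_read_write \
    --mysql-host=db-vm-1 --mysql-user=sbtest --mysql-password=secret \
    --tables=10 --table-size=1000000 \
    --threads=16 --time=600 --report-interval=10 \
    prepare

# Re-issue the same options with 'run' instead of 'prepare' to execute
# the measured workload; sysbench reports transactions per second at the end.
```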
New: in “VMware Virtual SAN Review: SQLserver Performance”, four SQL Server images delivered an impressive ~12,000 transactions per second, with very good latency.
In each case, pricing for the tested systems is shared. However, head-to-head pricing comparisons vs. traditional arrays aren’t easy to get at, as the VSAN clusters not only offer shared storage services, but also support a large number of VMs on the same hardware.
More coming soon!
I have now been on a steady diet of hyperconverged customer conversations for the last six months. That’s my nature — I learn about something by talking to the people who are actually doing it …
With all due respect to industry analysts, I’ve now been able to create a fairly accurate model of what most people are looking for — and it’s not exactly the picture that the analysts are painting.
Sampling bias aside, I’m finding that the more I talk to people, the more the observed customer shopping lists tend to converge into a very short, understandable agenda.
Keep in mind, the hyperconverged segment is moving fast, with many players and interesting choices. What was new and innovative a few years ago is simply table stakes today.
There’s now been enough real-world experiences that there are more than a few fully-informed buyers out there. And they certainly have strong opinions! Continue reading
For those of you who have been following this thread for a while, you know we’re in the midst of head-to-head performance testing on two identical clusters: one running VSAN, the other running Nutanix. Recently, we’ve updated the Nutanix cluster to vSphere 6 and 4.1.3 — however, no performance differences have been observed since the change.
Up to now, we’ve only been able to share our VSAN results. That’s because Nutanix recently changed their EULA to prohibit any publishing of any testing by anyone. It’s very hard to find any sort of reasonable Nutanix performance information as a result. That’s unfortunate.
By comparison, VMware not only regularly publishes the results of our own tests, but also frequently approves such publication by others once we’ve had a chance to review the methodology; simply submit to firstname.lastname@example.org.
Since the results are so interesting, we’re continuing to test!
As we start to move from synthetic workloads to specific application workloads, we recently finished a series of head-to-head Jetstress testing against our two identical clusters. Previous results can be found here and here.
If you’re not familiar, Jetstress is a popular Microsoft tool for testing the storage performance of Exchange clusters. A busy Exchange environment can present a demanding IO profile to a storage subsystem, so it’s an important tool in the testing arsenal.
TL;DR: our basic 4-node VSAN configuration passed 1,000 heavy Exchange users with flying colors — and with ample performance to spare. We can’t share with you how the identical Nutanix cluster did, but it’s certainly a worthwhile test if you have the time and inclination.
That being said, there were no surprises — each product performed (or didn’t perform) as we would expect based on both prior testing as well as customer anecdotes.
Now, on to the details!
Duncan Epping brings us a great VSAN story today — a very early VSAN adopter who is now intent on replacing as much of their existing storage environment as possible with VSAN.
For United Utilities, it’s a perfect storm of lower cost, amazing performance and a simplified operational model. All future storage requirements are going on VSAN unless there’s a really good reason not to.
But it’s not as easy as it sounds, as there are predictable organizational issues at hand 🙂
A great read!
Having been a student of how new tech finds its way into data centers, I am always impressed by how many IT professionals strike this great balance between getting the benefits from the new thing — and managing potential risk.
As I talk to VSAN users, the pattern emerges. They certainly see the potential as compared to traditional storage, but are proceeding prudently.
Today’s story has to be anonymous — not everyone wants their name used. Perfectly reasonable. Let’s call our VSAN adopter Ken, just to keep things simple. Continue reading
VSAN was designed to be the best storage for your VMDKs. But that might seem a limitation if you have a need to expose file shares, or perhaps iSCSI targets. That means running something else on top of VSAN.
One of the most powerful choices available today comes from our partner Nexenta. In addition to rich functionality, they’ve gone the extra mile and built a nice integration with both VSAN and vSphere.
Cormac Hogan runs through the basics on his blog.
So much enterprise IT is delivered by small, lean teams that have to wear many hats to get the job done.
I recently had a chance to interview Serge Kovarsky, who is using VSAN more and more to get his job done — and make things simpler in the process.
Serge is part of a four-person infrastructure team for Baron Capital Management. He was one of the earlier VSAN adopters, but — it seems — has turned into a big enthusiast. Continue reading
One of the things I enjoy doing is getting VSAN customers on the phone, interviewing them, and sharing their stories here.
But sometimes a VSAN customer is moved to write their own blog post.
Today’s happy VSAN story comes from Jeff Wong, who works for a large Australian company. He’s responsible for a multi-site Horizon deployment, and is quite happy with his choice.
In this post, he shares his thought process behind his decision. A quick read, but very illustrative on how real-world IT decisions are made.
Thanks, Jeff, for sharing!