Software-Defined Storage: vSAN

The Collapse Of Storage

My best soundbite is that storage is in the process of collapsing. Once a standalone topic, storage is clearly pulling away from our familiar model of external storage arrays, and disappearing into the fabric of servers and hypervisors.

While we all like to talk about disruptive industry changes, this one is perhaps the ultimate disruption: it impacts every aspect of storage, from the core technology to the consumption, integration and operational models.

As a result, most of what we’ve come to know about storage changes going forward. What you think you know today isn’t how things are going to be before too long.

Let’s take a closer look at each of these “collapses” going on with storage today.

 

The First Collapse — The Storage Technology Base

For the last twenty years, external storage arrays have been built on proprietary hardware, running highly specialized microcode. Vendors would compete on performance, functionality, support and price.

However — looking strictly through a technologist’s eyes — there has always been a core challenge with this model. Component technology evolves quickly, and it takes considerable time for storage array vendors to qualify the faster/better/cheaper thing.

Backwards compatibility was a concern too — would the new widget work in the array that’s already sitting on the floor? Sometimes — but often a new array would have to be purchased to get the benefit of the faster/better/cheaper components. More delay, more cost.

This familiar storage technology base is quickly giving way to industry standard servers running software. The price/performance advantage is remarkable, as is the pace of evolution of server technology, flash drives and the like. The newer widgets are available sooner, and with much less impact on the installed environment.

Some traditional storage vendors have started to offer array-like packages of storage software running on preconfigured industry standard servers, lowering costs in the process — also with the advantage of a single support model.

But customers aren’t always appreciative of the hardware middleman. Maybe they already have a preferred server vendor. Maybe they want unique configurations. Still, it’s better than what came before.

Either way, the trend is clear: storage is starting its inexorable move away from purpose-built storage arrays and toward a software-plus-server model.

 

The Second Collapse — The Storage Consumption Model

In the traditional array model, IT buys storage from one or more storage vendors. Storage arrays can be large and comparatively expensive boxes. Planning is required, as well as negotiations. Attention to detail is mandatory, as mistakes can be expensive.

In the new model, storage capacity is acquired from the server vendor — it’s part of the server buy. One advantage is that the acquisition increments can be much smaller — a few components here, an additional server there. No big acquisitions that demand everyone’s attention for weeks at a time. Less need to plan ahead, more flexibility in making continual adjustments.

Per-unit acquisition costs are lower — often much lower. The exact same storage component found in an average storage array will usually be priced much higher than when purchased as part of a server buy. Why? There’s less economic overhead — server supply chains are pretty efficient.

Indeed, industry analysts have started to create new categories to capture this growing shift in storage consumption behavior.

 

The Third Collapse — The Storage Integration Model

For the last twenty years, external storage has largely been its own world, with servers and applications in another. Many useful bridges have been built over the years — various flavors of storage networks and protocols, APIs, plug-ins, management tools — but the gaps and seams between the two worlds are still obvious.

Some examples? Today, most arrays don’t natively understand applications and application boundaries. A change in application policy isn’t automatically understood by the array. We still live in a world of “bottom-up” provisioning — carve the array, and hope it will be intelligently consumed. The idea of a dedicated storage network is starting to look quaint and rustic.

If you’ve ever spent time on an AS/400, for example, you’ll notice that storage concepts are integrated throughout — it’s not treated as a separate island. While I’m not arguing we should return to the 1980s, it’s worth remembering that our current fixation with technology silos and functional islands hasn’t always been the norm.

The pendulum swings. Storage functionality and data services have begun their inevitable march closer to the servers and applications they serve. Given this view, the hypervisor is in a uniquely privileged architectural position to absorb this functionality. It can see each and every workload. It sees all infrastructure resources. And it is a convenient point to attach policies that drive application-specific behaviors.
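
To make the per-workload policy idea concrete, here is a minimal sketch in plain Python (not the actual vSphere or SPBM API) of storage behavior expressed as a policy attached to each VM rather than carved out of an array up front. The class, field names and the two example workloads are illustrative assumptions; real vSAN policies carry similar rules (failures to tolerate, stripe width, flash read-cache reservation) but are defined and applied through vCenter.

from dataclasses import dataclass

@dataclass(frozen=True)
class StoragePolicy:
    # Hypothetical per-VM policy, loosely modeled on SPBM-style rules.
    failures_to_tolerate: int = 1      # data copies kept to survive host/disk failures
    stripe_width: int = 1              # disk stripes per object, for performance
    flash_read_cache_pct: float = 0.0  # reserved flash read cache, as a percentage

# Policies travel with the workload, not with a pre-carved LUN.
policies = {
    "web-frontend": StoragePolicy(failures_to_tolerate=1, stripe_width=2),
    "payments-db":  StoragePolicy(failures_to_tolerate=2, flash_read_cache_pct=5.0),
}

def describe(vm_name: str) -> str:
    # In a real system the hypervisor layer would turn this intent into
    # placement and replication decisions; here we simply report it.
    p = policies[vm_name]
    return (f"{vm_name}: tolerate {p.failures_to_tolerate} failure(s), "
            f"stripe width {p.stripe_width}, read cache {p.flash_read_cache_pct}%")

for vm in policies:
    print(describe(vm))

The point is less the code than the shape: the policy is expressed in application terms and attached where the hypervisor already sees every workload, so a change in application policy can follow the application automatically.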

It’s fair to point out that simply using servers to run storage software might change the technology and consumption models, but it leaves the storage integration model unchanged. Storage remains a logically isolated island that must be bridged to the world of applications and virtualization.

 

The Fourth Collapse — The Storage Operational Model

Here is the final — and perhaps most significant — collapse of the traditional storage model.

Storage operations have typically been performed by storage professionals in most IT settings. They have their own workflows, their own certification, their own way of looking at the world.

The same can generally be said about virtualization professionals. All the usual IT workflows go back and forth between the two groups: provisioning, monitoring, troubleshooting, capacity planning, performance management — it’s quite a list.

When storage collapses into servers, software and ultimately the hypervisor, there is much less need for dedicated storage expertise, or for the inefficient workflows that go back and forth between the two teams. The vSphere team can do most of what they need to do without engaging the storage team. This frees the storage team to work on the really hard problems where they can create the most value, rather than being consumed by day-to-day operations.

In essence, the default storage operational model collapses into the general virtualization (and cloud) workflows, be they manual, scripted or fully automated. It becomes one thing to manage from a single control point, rather than multiple things to manage across multiple control points.

 

Final Thoughts

The now-famous quote is that “software is eating the world”. In the IT sector, it’s perhaps more accurate to say that virtualization is eating infrastructure: compute, networking and now storage.

Traditional standalone storage thinking is rapidly collapsing into servers, software and the hypervisor. And no stone is being left unturned: technology, consumption, integration and operations.

Interesting times, indeed …