SPBM, because not all applications are created equal

Since taking over Tech Marketing for Storage in VMware Cloud on AWS, I’ve had the privilege of working with many customers and partners using the VMC on AWS service. For the most part, it’s been an invigorating experience. Given the complexity of hybrid cloud operations, I went into this expecting the primary challenge to be technical. I’ve been pleasantly surprised to find that the technology, for the most part, just works. The conversations I’ve had and the problems I’ve helped work through have centered on operational issues, not technical ones. Today I would like to talk about one of the more disappointing trends I’ve noticed: an overuse of, and over-reliance on, the Default Datastore Policy. While we can choose to use a single policy and treat vSAN like a traditional storage platform, doing so is rarely the right choice.

A bucket labeled "Default Datastore Policy" full of VMs with different requirements

Not all data is equal.

The problem with this approach is that not all applications and services are equal. Applications, and by extension the data they rely upon, tend to have a class hierarchy. Over the years we’ve become accustomed to building infrastructure based on the needs of our most demanding application. Few customers have the scale to justify multiple classes of infrastructure; moreover, virtualization negates the need altogether in most cases.

Storage Policy Based Management (SPBM) applies that same principle directly to data storage: customers can choose how available they would like their data to be on a per-object basis.
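To make "per-object" concrete, here is a minimal sketch in plain Python (not the vSphere API; the policy names, disk names, and values are all hypothetical) showing that two disks of the same VM, sitting on the same vSAN datastore, can carry entirely different availability requirements:

```python
# A minimal sketch in plain Python -- not the vSphere API. Policy names,
# disk names, and values are hypothetical.
policies = {
    "mission-critical": {"failures_to_tolerate": 2, "raid": "RAID-1"},
    "general-purpose":  {"failures_to_tolerate": 1, "raid": "RAID-5"},
}

# Two disks of the same VM, on the same vSAN datastore, protected differently:
vm_disks = {
    "finance-db-data.vmdk": "mission-critical",
    "finance-db-temp.vmdk": "general-purpose",
}

for disk, policy_name in vm_disks.items():
    p = policies[policy_name]
    print(f"{disk}: FTT={p['failures_to_tolerate']}, {p['raid']}")
```

The policy travels with the object, not with the datastore; that is the entire point.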

That sounds like work?

The criticism of this approach is that it creates a degree of policy sprawl, and from afar it can look like a lot of work. However, consider the cost of the alternative. By placing all data under a single policy, not only are we artificially raising the data availability requirements for many workloads, but we are also making any future change much harder. A single policy unintentionally creates a big, scary shark that no one dares touch for fear of unintended consequences. It effectively negates the advantage of the many-to-one declarative management paradigm that SPBM enables.
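Here is a back-of-the-envelope example of that hidden capacity cost. The multipliers reflect standard vSAN behavior (RAID-1 mirroring stores FTT + 1 full copies of the data; RAID-5/6 erasure coding uses parity instead); the workload names and sizes are invented for illustration:

```python
# Back-of-the-envelope raw-capacity math. Multipliers reflect standard
# vSAN behavior: RAID-1 stores FTT + 1 full copies, RAID-5/6 use parity.
MULTIPLIER = {
    ("RAID-1", 1): 2.0,    # two full copies
    ("RAID-1", 2): 3.0,    # three full copies
    ("RAID-5", 1): 1.33,   # 3+1 erasure coding
    ("RAID-6", 2): 1.5,    # 4+2 erasure coding
}

# Hypothetical workloads (usable GB needed):
workloads_gb = {"critical-db": 500, "app-servers": 1000, "dev-test": 2000}

# One default policy, sized for the most demanding workload:
single = sum(gb * MULTIPLIER[("RAID-1", 2)] for gb in workloads_gb.values())

# Right-sized: only the database actually needs FTT=2.
tiered = (workloads_gb["critical-db"]  * MULTIPLIER[("RAID-1", 2)]
          + workloads_gb["app-servers"] * MULTIPLIER[("RAID-5", 1)]
          + workloads_gb["dev-test"]    * MULTIPLIER[("RAID-5", 1)])

print(f"single default policy: {single:,.0f} GB raw")  # 10,500 GB
print(f"right-sized policies:  {tiered:,.0f} GB raw")  # 5,490 GB
```

In this sketch, right-sizing reclaims nearly half the raw capacity; on a consumption-based service like VMC on AWS, that difference is not academic.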

In my previous life in storage, we referred to this as turning sharks into minnows. Using SPBM and vSAN, we can create, and non-disruptively reassign or modify, policies to align data handling with the needs of a given application. Ideally, we would create a policy for each application or set of services. We may even create any number of policies around the needs of the data used by an individual application.
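The "minnows" payoff is about blast radius. In the hypothetical inventory below, editing a small per-application policy resyncs only the objects that reference it, while editing one monolithic default policy would touch every object on the datastore:

```python
from collections import Counter

# Hypothetical inventory: which policy each object on the datastore uses.
assignments = {
    "oracle-rac-shared.vmdk": "rac-shared-gold",
    "oracle-rac-boot.vmdk":   "general-silver",
    "web01.vmdk":             "general-silver",
    "dev-scratch.vmdk":       "dev-bronze",
}

usage = Counter(assignments.values())

# Editing a small per-application policy resyncs only its own objects;
# editing one monolithic default policy would resync every object.
for policy, count in sorted(usage.items()):
    print(f"modifying '{policy}' touches {count} object(s)")
print(f"a single default policy would touch all {len(assignments)} object(s)")
```

In a real environment, PowerCLI's SPBM cmdlets (Get-SpbmEntityConfiguration, for example) can produce this kind of policy-to-object inventory.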

As a concrete example of per-application policy design, the boot drive of an Oracle instance in a RAC cluster is less critical than the shared data all RAC instances rely upon, since the loss of a single instance doesn’t impact the overall service’s availability. It is therefore beneficial to create policies that align data availability with the needs of each component of the application.
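A hypothetical pair of policies for that RAC scenario might look like the sketch below (again plain Python rather than the actual API; the FTT and RAID choices are illustrative, not a recommendation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StoragePolicy:
    name: str
    failures_to_tolerate: int  # vSAN FTT
    raid: str

# Losing one node's boot disk does not take down the RAC service, so it
# can carry a lighter policy than the shared data every node depends on.
rac_boot   = StoragePolicy("rac-node-boot",   1, "RAID-5")
rac_shared = StoragePolicy("rac-shared-data", 2, "RAID-1")

disk_policies = {
    "rac-node1-boot.vmdk": rac_boot,
    "rac-node2-boot.vmdk": rac_boot,
    "rac-shared-asm.vmdk": rac_shared,
}

for disk, pol in disk_policies.items():
    print(f"{disk}: {pol.raid}, FTT={pol.failures_to_tolerate}")
```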

Using multiple policies to align the data handling to the needs of the application.

A requirement-driven model not only right-sizes every deployed application or service; it also indirectly forces the operations team to pre-tag and triage each service running in the cloud. That groundwork is repaid in spades as the needs of a given deployment change: operations has already prepared for the change and is empowered to adjust the data handling requirements quickly and safely, without fear of unintended consequences. There are other benefits as well, around visibility and reporting, but they pale in comparison to the administrative flexibility that a declarative control plane such as SPBM delivers. So please, in the name of manageability and service uptime, stop using the Default Datastore Policy.