Welcome to the VMware vSphere Storage Policy Based Management (SPBM) two-part blog series, where we will explore SPBM features and components and the major role SPBM plays in automating storage management operations in the Software-Defined Data Center.
Storage Policy Based Management (SPBM) is the foundation of the SDS control plane. It enables vSphere administrators to overcome upfront storage provisioning challenges such as capacity planning, differentiated service levels, and managing capacity headroom. By defining standard storage profiles, SPBM streamlines the virtual machine provisioning process, allowing datastores to be provisioned at scale and eliminating the need to match virtual machines to storage on a case-by-case basis. PowerCLI, VMware vCloud Automation Center, the vSphere API, OpenStack, and other applications can leverage the vSphere Storage Policy Based Management API to automate storage management operations for the Software-Defined Storage infrastructure.
For more information on the Software-Defined Data Center and its related components, please visit the VMware SDDC Product pages.
Traditional storage systems have been manually managed and over-engineered to accommodate both the static provisioning process inherent in their design and the constantly evolving storage requirements of the applications consuming storage.
Matching Storage Consumers and Storage Providers
The major challenge with traditional storage architectures lies in aligning storage consumer needs with storage provider capabilities. Any offset in that alignment results in the overprovisioning of storage resources and the waste that IT admins have suffered for years. Properly aligning application needs with storage resources can eliminate the storage lost to overprovisioning.
Storage Management Operations can be divided into two major categories:
• Storage Consumers = Applications requiring storage capabilities
• Storage Providers = Storage arrays offering various storage capabilities
The vSphere environment sits between the storage consumers (applications) and the storage providers (storage arrays). This enables vSphere to act as an arbiter between the application’s needs and the array’s capabilities.
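The arbiter role described above can be pictured as a simple matching problem. The sketch below is purely conceptual, not the actual vSphere or SPBM API; the class names and capability strings are illustrative assumptions.

```python
# Conceptual sketch only: modeling vSphere's arbiter role between
# storage consumers (applications) and storage providers (arrays).
# All names and capability strings here are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class StorageProvider:
    """A storage array advertising the capabilities it can deliver."""
    name: str
    capabilities: set = field(default_factory=set)


@dataclass
class StorageConsumer:
    """An application (VM) stating the capabilities it requires."""
    name: str
    requirements: set = field(default_factory=set)


def find_compatible_providers(consumer, providers):
    """Return the providers whose capabilities satisfy the consumer's needs."""
    return [p for p in providers if consumer.requirements <= p.capabilities]


arrays = [
    StorageProvider("high-end", {"replication", "ssd", "snapshots"}),
    StorageProvider("mid-tier", {"snapshots"}),
    StorageProvider("archive", set()),
]
vm = StorageConsumer("tier1-db", {"replication", "snapshots"})
print([p.name for p in find_compatible_providers(vm, arrays)])
```

Here the requirements set of the consumer must be a subset of a provider's capabilities, so only the high-end array qualifies for the Tier 1 database VM.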
In the traditional storage-provisioning model, storage tiers are created through the acquisition and deployment of arrays from different classes (high-end, mid-tier, archive, etc.). In this model, each class of array becomes a separate storage tier. Applications are then statically bound to a specific array that best matches their class of requirements.
Example: Tier 1 application VMs to the high-end array, file and printer server VMs go to the mid-tier array, and archive data goes to the least expensive array.
Challenges of the Traditional Storage Model
The traditional storage model confines storage arrays to delivering only one level of service. Arrays today generally have multiple capabilities, allowing for wider flexibility in delivering multiple levels of service (e.g., performance, data protection, tiering, replication, etc.).
The traditional storage model also placed a great emphasis on architecture sizing (how many storage consumers, what type, etc.). Storage arrays are quite often the largest cost in the datacenter, and they take considerable effort to deploy and to migrate data to and from. Considering the cost, time, and effort involved, it is imperative that storage administrators plan storage needs properly to avoid misaligning applications and storage resources.
The storage pooling provisioning model takes a “bottom-up” approach to storage provisioning. As storage arrays gained the capability to deliver multiple levels of service, storage administrators could begin creating different classes of service within a single storage array. This allows for more flexibility in provisioning, since additional resource pools can be configured for whichever storage tier requires additional resources.
Existing and new workloads would be assigned to a static level of service established in advance by the storage administrator. Changing service levels required moving the application’s data to the appropriate pool within the storage array.
Challenges of the Storage Pooling Model
As with the traditional storage provisioning model, storage pooling does not consider individual application requirements. Applications are still forced into pre-defined buckets of storage. Storage administrators still have to go through the typical “have-a-hunch-provision-a-bunch” provisioning style when deploying a new array because re-carving the storage array resources into a different set of service pools is a daunting task.
Misalignment of application needs and storage resources
These storage models inevitably resulted in considerable misalignment between the capabilities an array provided and what the applications actually required. The result of this misalignment was a lot of higher-end arrays supporting less-than-critical applications, which is neither efficient nor optimized.
In these models, overprovisioning is often necessary to guarantee that the allocated storage resources will meet application requirements. The lack of granularity in the provisioning process wastes storage resources.
Example: Say Tier 2 and Tier 3 storage are out of resources, and Tier 1 storage is the only available location to store archive data. In this scenario the Tier 1 storage capabilities will not be fully leveraged by the Tier 3 application’s requirements for low-cost, long-term storage. This is one example of the misalignment of application requirements and storage capabilities.
In the Software-Defined Storage (SDS) architectural model, VMware’s goal is to leverage the hypervisor to bring about revolutionary storage efficiencies. Software-Defined Storage is the vision that storage services should be dynamically created and delivered on a per-VM basis, with these services controlled through policies rather than through the time-consuming process of managing each storage system independently.
This approach succeeds in aligning storage services with application requirements by shifting the traditional storage operational model from a bottom-up, array-centric approach to a top-down, VM-centric model. After storage policies are configured, the storage consumer can choose the desired application or virtual machine; the policy engine reads the associated storage policy and then orchestrates the precise provisioning of storage resources that match the application’s storage requirements.
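The top-down flow just described can be sketched as a toy policy engine: the VM carries a policy, and the engine picks the least expensive datastore that satisfies it, which also avoids the archive-data-on-Tier-1 misalignment discussed earlier. The policy names, tiers, and cost weights below are assumptions for illustration, not the actual SPBM implementation.

```python
# Illustrative toy policy engine (not the real SPBM engine): given a
# per-VM policy, pick the cheapest datastore whose advertised
# capabilities satisfy the policy's requirements.

TIER_COST = {"high-end": 3, "mid-tier": 2, "archive": 1}  # assumed weights

DATASTORES = {
    "high-end": {"replication", "ssd", "snapshots"},
    "mid-tier": {"snapshots"},
    "archive":  {"long-term"},
}

POLICIES = {
    "gold":    {"replication", "snapshots"},
    "silver":  {"snapshots"},
    "archive": {"long-term"},
}


def place_vm(vm_name, policy_name):
    """Return the cheapest datastore that satisfies the VM's policy."""
    required = POLICIES[policy_name]
    candidates = [ds for ds, caps in DATASTORES.items() if required <= caps]
    if not candidates:
        raise ValueError(f"no datastore satisfies policy {policy_name!r}")
    return min(candidates, key=TIER_COST.get)


print(place_vm("db01", "gold"))       # high-end
print(place_vm("files01", "silver"))  # mid-tier
```

Note that the "silver" VM lands on the mid-tier datastore even though the high-end one also qualifies: choosing the cheapest satisfying tier is what keeps expensive capabilities from being consumed by less demanding workloads.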
Through Software-Defined Storage, VMware allows storage resources to be provisioned precisely to application requirements, yielding a tangible reduction in storage overprovisioning, IT management cycles, and cost.
This is great news for storage admins. As a former storage admin, I would gladly welcome any assistance in offloading routine, mundane tasks. Imagine having more cycles to put out the fires you are already fighting. The more we virtualize and automate the control of storage resources, the more of our cycles are freed up for everything else demanding our attention.
For a more extended look into Software-Defined Storage, here is an excellent white paper by Chief Strategist for VMware’s Storage and Application services, Chuck Hollis: The VMware Perspective on Software-Defined Storage
The Policy-Driven Control Plane is VMware’s new management layer for Software-Defined Storage. This layer provides common orchestration and automates storage consumption with a consistent approach across all storage tiers.
Storage Policy-Based Management (SPBM) is VMware’s implementation of the policy-driven control plane. It is integrated with vCloud Automation Center, vSphere APIs, PowerShell, and OpenStack.
The Storage Policy-Based Management (SPBM) engine interprets the storage requirements specified in the policies associated with individual VMs and dynamically composes the storage service: placing the VM on the right storage tier, allocating capacity, and instantiating the necessary data services (snapshots, replication, etc.).
This is in contrast to a typical storage environment, where each type of storage array has its own management tools, largely disassociated from specific application requirements. The policy-driven control plane is programmable via public APIs that can be used to consume and control policies via scripting and cloud automation tools for self-service consumption of storage.
All data plane devices must be able to express their capabilities to the Storage Policy-Based Management engine, as well as respond to dynamic requests to compose storage services aligned to specific application requirements.
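That capability-advertisement contract can be sketched as a registry: each data plane device registers what it can deliver, and the control plane answers composition requests against the registry. The interface below is an illustrative assumption, not VMware's actual provider API.

```python
# Toy sketch of the capability-advertisement contract (illustrative
# names only, not VMware's actual provider interface): data plane
# devices register their capabilities with the control plane, which
# then answers dynamic requests to compose a storage service.

class ControlPlane:
    def __init__(self):
        self._devices = {}

    def register(self, device_name, capabilities):
        """A data plane device advertises what it can deliver."""
        self._devices[device_name] = set(capabilities)

    def compose_service(self, required):
        """Answer a dynamic request: which devices can back this service?"""
        required = set(required)
        return sorted(d for d, caps in self._devices.items()
                      if required <= caps)


cp = ControlPlane()
cp.register("vsan-cluster", ["snapshots", "replication", "raid-1"])
cp.register("nfs-array", ["snapshots"])
print(cp.compose_service(["snapshots", "replication"]))  # ['vsan-cluster']
```

The key property is that the control plane never hard-codes what any device can do; it only matches requests against what the devices themselves have advertised, which is what lets new storage tiers join without changing the consumption model.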
Here is a great overview of the Software-Defined Storage model and its components: Understanding the DNA of Software Defined Storage
Stay tuned for part 2 of the series, where we will dive deeper into the makeup of the vSphere storage policy components. Here is a sneak preview of the pieces of Storage Policy Based Management put together…