Automation technologies are a fundamental dependency for every aspect of the software-defined data center. Their use not only increases the overall productivity of the software-defined data center, but also accelerates the adoption of today’s modern operating models.
In recent years, some of the core pillars of the software-defined data center have seen significant improvements with the help of automation. The same can’t be said about storage. The lack of management flexibility and capable automation frameworks has kept storage infrastructures from delivering operational value and efficiencies comparable to those available in the compute and network pillars.
VMware’s software-defined storage technologies and the Storage Policy-Based Management (SPBM) framework deliver the missing piece of the puzzle for storage infrastructure in the software-defined data center.
Challenges – Old School
Traditional data centers are operated, managed, and consumed in models that are mapped to silos and statically defined infrastructure resources. However, businesses lean heavily on their technology investments to be agile and responsive in order to gain a competitive edge or reduce their overhead expenses. As such, there is a fundamental shift taking place within the IT industry, a shift that aims to change the way in which data centers are operated, managed, and consumed.
In today’s hardware-defined data centers, external storage arrays drive the de facto standards for how storage is consumed. The limits of the data center are tied to the limits of the infrastructure. The consumers of storage resources face a number of issues because traditional storage systems are primarily deployed in silos, each with unique features and a unique management model. Each has its own constructs for delivering storage (LUNs, volumes), regularly handled through separate workflows and even different teams, which means that delivering new resources and services takes significantly longer.
Critical data services are often tied to specific arrays, obstructing standardized and aligned approaches across multiple storage types. Overall, creating a consistent operational model across multiple storage systems remains a challenge. Because storage capabilities are tied to static definitions, there are no performance guarantees, multi-tenant workloads risk impacting one another, all disks are treated the same, and OPEX is tied to the highest common denominator.
This is grossly inefficient and leaves storage architects and application owners forced to over-design their infrastructure or risk not providing the right balance of capacity and performance. Storage silos are typically created as a result of infrastructures that are unable to address all the operational efficiencies and application resource requirements of a modern software-defined data center.
Traditionally, the “precious metal” management approach (gold, silver, and bronze service tiers) is directly tied to storage infrastructures that struggle to ensure the right balance of storage service levels across a wide range of workload demands.
Modern data centers continue to utilize external storage arrays because they are powerful, multi-purpose systems that combine several tiers of hardware and layers of software to deliver capacity, performance, and data resiliency services. Even so, the way storage is consumed and managed in the software-defined data center must change.
The Missing Link – New School
What consumers want is a way to consolidate workloads onto infrastructures that enable flexibility and agility with the right amount of automation and control, providing dynamic and selective access to the capabilities their applications need. Some of these capabilities are:
- Granular or selective control beyond precious metal policies
- Storage options that match application requirements
- Storage options that match budgetary constraints
- Opportunities to reduce operating costs
- Faster implementation of changes
- Faster adoption of new features/capabilities
VMware’s Software-Defined Storage strategy is driving the transformation of the modern data center by bringing to storage the same operational efficiency currently delivered by the other pillars (compute, network) of the software-defined data center.
By transitioning from the legacy storage models to a Software-Defined Storage model, customers gain the following benefits:
- Automation of storage “class-of-service” at scale: Provision virtual machines quickly across the data center using a common control plane (SPBM) for automation (see the sketch after this list).
- Self-Service capabilities: Empower application administrators with cloud automation tool integration (VMware vRealize Automation, PowerCLI, OpenStack).
- Simple change management using policies: Eliminate change management overhead and use policies to drive infrastructure changes.
- Finer control of storage class of service: Match VM storage requirements exactly as needed with class of service delivered per VM.
- Active monitoring/troubleshooting with per-VM visibility: Gain visibility into individual VM performance and storage consumption.
- Non-disruptive transition: Use existing protocols (FC, iSCSI, NFS) across heterogeneous storage devices.
- Safeguard existing investment: Use existing resources more efficiently with an operational model that eliminates static and rigid storage constructs.
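To make the first item concrete, here is a minimal PowerCLI sketch of policy-driven provisioning at scale. The vCenter address, policy name, and template name ("vc01.lab.local", "Gold", "web-template") are hypothetical placeholders, and cmdlet behavior can vary by PowerCLI version:

```powershell
# Hypothetical environment: vc01.lab.local, a "Gold" policy, a "web-template" template
Connect-VIServer -Server vc01.lab.local

$policy    = Get-SpbmStoragePolicy -Name "Gold"
# Ask SPBM which datastores can satisfy the policy, and take the first match
$datastore = Get-SpbmCompatibleStorage -StoragePolicy $policy | Select-Object -First 1
$vmhost    = Get-VMHost | Select-Object -First 1

# Provision several VMs against the same class of service
1..5 | ForEach-Object {
    $vm = New-VM -Name "web-$_" -Template (Get-Template "web-template") `
                 -Datastore $datastore -VMHost $vmhost
    # Associate the policy so compliance is tracked per VM
    $vm | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy
}
```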
In virtualized infrastructures, not all virtual disks are equal. Performance requirements vary, and I/O characteristics tend to create noisy-neighbor issues for other applications. These issues can negatively impact the performance of other systems, cause outages, and ultimately increase storage costs.
Customers need more than just agile infrastructures; they need agility with control in order to satisfy the needs of their applications while maximizing their infrastructure investments. Managing performance separately from capacity on each volume becomes a key tenet in driving greater levels of efficiency and providing controls to both infrastructure/cloud builders and consumers.
One way in which this new operational model can be achieved in a software-defined data center is through a centralized service catalog for storage services, one that is integrated with the rest of the infrastructure resources and designed to deliver the next generation of storage service offerings.
This new approach must go beyond traditional storage infrastructure management and service offerings, which fall short of enterprise-scale requirements.
VMware customers interested in moving to the next generation data center operating model can start by leveraging some of the core vSphere platform features, capabilities, and products that they probably already own:
- vRealize Automation
- vRealize Orchestrator
- Storage Policy-Based Management (SPBM)
- vSphere APIs for Storage Awareness (VASA)
- vSphere Tags
It may be useful to think of the products and features listed above as being grouped into two different categories:
Control Plane Technologies
These technologies are defined as the logic layer of the VMware vSphere ecosystem that manages information about configurations, capabilities, groups, and policies.
Storage Policy-Based Management (SPBM) – Integrated with vSphere, SPBM works with both traditional vSphere storage (VMFS, NFS) and software-defined storage technologies (VSAN, VVols, VAIO). It allows you to create VM storage policies that automate tasks such as provisioning, initial placement, capacity allocation, and differentiated service levels, and it integrates with VASA, PowerCLI, and vRealize Automation.
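As an illustration, a VM storage policy can be built directly from a VASA-reported capability with PowerCLI’s SPBM cmdlets. This is a hedged sketch: the capability name shown is one commonly surfaced by Virtual SAN, and the policy name "FTT-1" is hypothetical:

```powershell
# Build a rule from a capability advertised through VASA (VSAN namespace)
$cap  = Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate"
$rule = New-SpbmRule -Capability $cap -Value 1

# Wrap the rule in a rule set and publish it as a VM storage policy
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "FTT-1" -Description "Tolerate one host failure" `
    -AnyOfRuleSets $ruleSet
```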
vSphere APIs for Storage Awareness (VASA) – The key control plane API technology for vSphere. VASA allows configurations and properties for specific storage functionality to go beyond just capacity and generic attributes. It also gathers information for use across the greater vSphere implementation and exposes it for automation. Reported capabilities are used to build SPBM policies.
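To see what VASA is reporting in a given environment, PowerCLI can enumerate both the registered providers and the capability catalog SPBM builds from them. A minimal sketch; the exact output properties vary by provider and PowerCLI version:

```powershell
# Registered VASA providers known to vCenter
Get-VasaProvider | Select-Object Name, Status, Version

# Capabilities surfaced through VASA, as SPBM sees them
Get-SpbmCapability | Sort-Object Name | Format-Table Name, ValueType
```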
vSphere Tags – A tagging framework that can be used to define storage services and characteristics when VASA capabilities are not present, or when policies are needed beyond what VASA can provide.
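When an array feature is not advertised through VASA, a tag can stand in for it and still participate in SPBM. A hedged sketch, with hypothetical category, tag, and datastore names:

```powershell
# Describe a service the array provides but VASA does not report
$cat = New-TagCategory -Name "StorageService" -Cardinality Single -EntityType Datastore
$tag = New-Tag -Name "Replicated" -Category $cat
New-TagAssignment -Tag $tag -Entity (Get-Datastore -Name "array-lun-01")

# A tag-based rule lets SPBM treat the tag like any other capability
$rule = New-SpbmRule -AnyOfTags $tag
New-SpbmStoragePolicy -Name "Replicated-Storage" `
    -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rule)
```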
Orchestration and Automation Technologies
Workflows are critical to our automation design: they drive the order of operations and the organization of tasks. vRealize Orchestrator is included with vSphere and is the key bridge technology for getting unique capabilities into a desired service catalog.
vRealize Orchestrator – A robust workflow management platform that extends the capabilities of vCenter Server and vRealize Automation. It uses JavaScript-based workflows to turn exposed capabilities into a selectable set of options for application lifecycle management, without touching the underlying technologies.
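For automation outside the vRO client, workflows can also be started programmatically. The sketch below is an assumption-laden example against the vRO REST API: the host name, workflow ID, and input parameter ("vmName") are all hypothetical, and the JSON shape follows the vRO 6.x/7.x execution format:

```powershell
# Hedged sketch: start a vRO workflow execution over REST
$base = "https://vro.lab.local:8281/vco/api"        # hypothetical vRO host
$cred = Get-Credential
$wfId = "a1b2c3d4-0000-0000-0000-000000000000"      # hypothetical workflow ID

# One string input parameter named "vmName" (hypothetical)
$body = @{
    parameters = @(
        @{ name = "vmName"; type = "string"; value = @{ string = @{ value = "web-01" } } }
    )
} | ConvertTo-Json -Depth 6

Invoke-RestMethod -Method Post -Uri "$base/workflows/$wfId/executions" `
    -Credential $cred -ContentType "application/json" -Body $body
```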
Together these products and technologies provide the foundation that allows customers to create the next generation storage service catalog with vRealize Automation.
The Proof – Project Magic
Below are two fully functional, real demonstrations showcasing the integration capabilities and values described in this article. Each demonstration is unique in its use case and infrastructure dependencies.
The Next Generation Data Center Operating Model from VMware. The infrastructure supporting the first demonstration is composed of all the storage technologies supported by vSphere today (VSAN, VVols, NFS, and VMFS). The takeaway here is to see how the consumption of storage and its silos are completely abstracted, to the point where the infrastructure storage resources are consumed and managed entirely through a policy-driven model. — Powerful!
This second demonstration is based on an implementation by SolidFire. In a collaborative effort, the good folks from SolidFire (Josh Atwell, Keith Norbie) and I worked together to showcase a different solution, one that is based on their storage systems but still built on the same principles discussed in this article. At the same time, it demonstrates the potential of what can be achieved today with VMware’s software-defined storage technologies from a partner’s perspective.
In this demonstration you see a functional service catalog that represents a full range of capabilities applications can utilize, grouped into profiles of business needs, a.k.a. Service Level Objectives (SLOs). To deliver those, we have to go deeper into storage capabilities that can now be exposed through VASA 2.0, vSphere tagging, and vRealize Orchestrator workflows. The storage capabilities exposed are:
- QoS presets for applications like MongoDB, SQL, Oracle, and others. For example, a MongoDB Standard preset might be Min = 1K IOPS, Max = 2K IOPS, Burst = 5K IOPS.
- SRM, which requires a policy definition and workflow for snapshot, replication, and SRM configuration parameters
- Encryption, which varies by provider
Here, once the volume capabilities were built into policies, we created vSphere tags and mapped them to a custom vRealize Orchestrator workflow that exposed their unique value. The capabilities can now be selected from the service catalog with any VM provisioning operation or storage service change request.
Applications are deployed from the vRealize Automation service catalog, where you can select an application template along with the number of VMs and CPUs, the amount of RAM, the storage capacity, and now all of the exposed capabilities, such as the preset QoS standard for each application profile. Once I select what I want, the system executes for me at scale. VMs with specific storage capabilities for performance, site protection, or encryption add new value to my SLAs with business units.
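That per-VM visibility can be scripted too. As a minimal sketch, PowerCLI’s SPBM cmdlets can produce a quick compliance report across every VM, showing each VM’s assigned policy and whether it is currently compliant:

```powershell
# Quick compliance report: each VM, its assigned storage policy, and compliance status
Get-VM | Get-SpbmEntityConfiguration | Format-Table -AutoSize
```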
The integration and automation capabilities discussed and demonstrated in this article are real and available today. They are not future roadmap items.
– Enjoy
For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVols) and other Software-defined Storage technologies, as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds