Improving the performance of an infrastructure and the applications that live on it is a common battle for data center administrators. Line-of-business applications depend heavily on the performance of the infrastructure in ways that ripple throughout an organization. Yet the effort to improve performance often overlooks areas subject to significant inefficiency. One of those areas is processes and workflows. Let’s look at why this happens, and how it can be solved.
Looking beyond hardware and software
Performance problems are typically identified first by the application owner or consumer. Symptoms can range from the application vaguely “feeling slow” to something more tangible: “The batch process is taking 50% longer to complete than desired.” Infrastructure administrators are often insulated from the details of what is being asked of the application, so they address the problem in the ways they are most familiar with, including one or both of the following:
- Hardware. As much as modern data centers are moving to a software-defined model, the software still runs on something. Hardware matters, and it may be the right place to start when addressing performance challenges in an environment. But it is easy to become so consumed by hardware that it seems like the only option for optimization. The latest technical specifications, underlying transport protocols, and bus interconnects are all important influencing factors, yet focusing on them makes it easy to lose sight of the original problem statement.
- Infrastructure software and applications. Whether it is virtualized applications or software related to the infrastructure, software typically goes through an evolution of optimizations and is the key to unlocking the power of the underlying hardware. While VMware makes continual improvements to the hypervisor to improve performance regardless of the hardware used, a vSAN-powered environment brings the opportunity for performance improvement to the forefront even more. Unlike traditional architectures, vSAN-powered environments can adopt the latest commodity technologies at an affordable price to quickly address business needs. Technologies like Intel Optane can be introduced into servers, and applications powered by vSAN can easily take advantage of the new levels of performance they provide.
Unfortunately, focusing on just these two areas neglects one of the more important elements of delivering optimal performance for an environment: processes and workflows. This category comprises the automated processes that turn data into the desired business outcome, carried out through the tasks of data ingestion, transformation, and consumption that the organization needs.
Understanding workflows
Let’s take as an example an organization of developers, analysts, testers, and business units that all use the same database in multiple ways and for various durations. How is that database cloned and presented to these consumers for their various needs? How are subsequent updates propagated to those instances? How are changes slipstreamed back into production? Do methods such as full table replication or bulk inserts, once sufficient for the organization, now place unnecessary burdens on the infrastructure?
Legacy workflows tend to be much less visible than hardware and software, and they can grow more antiquated as the demands of the business change. Processes and workflows that served an organization well when it was small may be terribly inefficient at scale. Workflows built on older copying and replication techniques that moved gigabytes a few years ago may be trying to move tens of terabytes today.
Processes and workflows benefit from constant re-evaluation through some form of value stream mapping that aims to remove impediments and improve workflow performance. Sometimes a workflow will not clearly reveal itself as a bottleneck until faster hardware and software have been introduced. This bottleneck shifting is a natural effect of improving discrete elements of an environment, but far too often it is associated exclusively with hardware and software.
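As a simple illustration of this bottleneck-shifting effect, here is a minimal value stream sketch in Python. The stage names and durations are entirely hypothetical, not taken from any measured environment:

```python
# Minimal value stream mapping sketch. Stage names and durations (in minutes)
# are illustrative only; substitute your own measured workflow steps.
workflow = {
    "extract from production": 45,
    "bulk copy to test environment": 180,
    "apply masking scripts": 30,
    "validate and hand off": 15,
}

def bottleneck(stages):
    """Return the stage that consumes the most time."""
    return max(stages, key=stages.get)

print(f"Total lead time: {sum(workflow.values())} min, "
      f"bottleneck: {bottleneck(workflow)}")

# Speed up the copy step (e.g., with faster storage) and the bottleneck shifts
# to the workflow itself rather than the infrastructure.
workflow["bulk copy to test environment"] = 40
print(f"New total: {sum(workflow.values())} min, "
      f"bottleneck is now: {bottleneck(workflow)}")
```

Mapping even a rough version of a workflow this way makes it obvious which step to examine next once the hardware and software improvements have been exhausted.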
How processes and workflows can be optimized
In the example above, an organization faced with old or inefficient database workflows may be served well by a solution like Delphix. Instead of relying on heavyweight operations, Delphix presents and manages virtual databases, offering a streamlined way to create and manage separate data environments while minimizing the costly transfer of payload for largely unchanged data. Eliminating the unnecessary processing of data in large bulk copies and updates is what often leads to dramatic improvements in the effective performance seen by the application owner.
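To make the payload argument concrete, here is a rough back-of-the-envelope sketch, with purely illustrative numbers and not specific to Delphix, comparing full bulk copies against refreshes that move only changed data:

```python
# Back-of-the-envelope comparison: full copies vs. change-only refreshes.
# All figures are hypothetical; adjust for your database size, number of
# consuming environments, and actual daily change rate.
db_size_tb = 20           # size of the source database
consumers = 8             # dev, test, analytics, etc. environments
daily_change_rate = 0.02  # fraction of data that actually changes per day

full_copy_per_refresh = db_size_tb * consumers
change_only_per_refresh = db_size_tb * daily_change_rate * consumers

print(f"Full bulk copies:    {full_copy_per_refresh:.1f} TB moved per refresh")
print(f"Changed data only:   {change_only_per_refresh:.1f} TB moved per refresh")
print(f"Reduction:           "
      f"{100 * (1 - change_only_per_refresh / full_copy_per_refresh):.0f}%")
```

Even with a generous change rate, moving only what changed shrinks the payload by orders of magnitude, which is exactly the inefficiency the application owner experiences as “slow.”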
A reference architecture titled “Delphix Data Platform on VMware vSAN” was recently published by Palani Murugan, Solutions Architect in the VMware Storage and Availability Business Unit, detailing how Delphix running on a vSAN-powered environment can provide a method of DataOps using Copy Data Management (CDM) technologies for a fast and efficient environment. It details and validates how vSAN and Delphix work together.
What is the best way to start optimizing? Discovery and value stream mapping. Application owners are typically the most familiar with what the workflow is attempting to achieve, yet they may be unfamiliar with new ways of accomplishing data presentation, movement, cloning, and updating that exist beyond the abilities of the application. Looking first at “what” needs to be accomplished, then at “how” it can be achieved, helps to better define and quantify the opportunities for improvement.
Conclusion
In the quest to provide the most efficient, agile infrastructure, now is the time to challenge your own assumptions about where the opportunities for improvement in your infrastructure may exist. You might be surprised by the answer. VMware vSAN lets you ride the wave of the very latest hardware technologies with a storage system built right into the hypervisor. Pair it with a DataOps, CDM-style solution for workflow optimization, and you may see unprecedented levels of agility, efficiency, and performance in your environment.