Kai Holthaus, Sr. Transformation Consultant

 

Changing Change Management

Change Management has always been a core component of IT Service Management practices, for obvious reasons. Change is a constant, but change can be risky. With Change Management processes in place, IT organizations define the risk exposure they are willing to accept, try to minimize the severity of any impact or disruption, and aim to get each change right on the first attempt. And of course, Change Management processes are meant to ensure that all stakeholders receive appropriate and timely communication regarding the change, so they can avoid any potential negative impact of the change, as well as adopt and support it.

Perspectives regarding Change Management are changing, however, as a DevOps orientation is disrupting many traditional ITSM-based processes. Working with customers as a consultant with VMware Professional Services with a focus on operations transformation, I see an opportunity to flip the thinking regarding Change Management on its head.

 

Traditional Change Management

Today, Change Management is treated as the last checkpoint before deployment to production. In the typical change approval process, the people designing and implementing the change rely on the oversight of the Change Advisory Board (CAB) to catch any risks they may have missed. However, experience shows that this system does not always prevent us from making bad changes.

 

Safety science expert Dr. Sidney Dekker at Griffith University in Brisbane, Australia performed an interesting study that sheds light on what really drives risk reduction. The study examined hospital patient outcomes and found that 1 out of 7 patients left the hospital sicker after their stay, due to the usual suspects: ineffective communication, human error and the like. But the exact same problems were present for the other 6 out of 7 patients, the ones whose hospital visit had a beneficial effect. What drove the difference? The presence of “positive factors” such as engaged and determined employees, not the absence of “negative” factors.

Conclusion: Change Management today represents an attempt to reduce the presence of negative factors. Instead, we should be focusing on the presence and amplification of positive factors.

 

Take a lesson from Manufacturing

Manufacturers don’t handle changes like IT does. Imagine you are a car manufacturer and you want to implement a change to the entertainment system software. But the cars affected by this change are not available to you. The best you can do is change the documentation for the entertainment system and design a procedure for upgrading the software once the dealership gets hold of the car. The people who own the affected documentation are accountable for its accuracy. Let’s say the car’s entertainment system was running version 19 before the shop visit. When you pick the car up, it now matches the description in version 22 of the documentation. The technicians changed the entertainment system software simply by following the updated documentation.
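To make the analogy concrete, here is a minimal sketch in Python (all names and the spec structure are invented for illustration): the versioned documentation is the source of truth, and the physical system is brought into line with it by following the documented procedure, not by ad-hoc edits to the system itself.

```python
# The documented spec, owned and versioned by the documentation owners.
# (Hypothetical structure for illustration only.)
ENTERTAINMENT_SPEC = {"component": "entertainment-system", "version": 22}

def upgrade_to_spec(car_state: dict, spec: dict) -> dict:
    """Follow the documented upgrade procedure: the car leaves the shop
    matching whatever version the current documentation describes."""
    upgraded = dict(car_state)
    upgraded[spec["component"]] = spec["version"]
    return upgraded

car = {"entertainment-system": 19}          # state before the shop visit
car = upgrade_to_spec(car, ENTERTAINMENT_SPEC)
assert car["entertainment-system"] == 22    # now matches the documentation
```

The point of the sketch: nobody “hand-patches” the car; the change is made to the spec, and the upgrade procedure mechanically closes the gap between the car and the spec.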

 

What does this mean for IT?

The IT analogy to what I just described for manufacturing is this: we don’t change the server or application itself. Rather, we manage the change to the recipe or procedure that describes how to provision or implement the server or application. Additionally, we use techniques like automated testing (which eliminates manual audit steps), pair programming, peer reviews, test-driven development, and andon cords/swarming of problems to ensure our change, once deployed into production environments, will succeed, thus optimizing risk reduction. So now, Change Management can focus on (a) whether a change should be implemented and (b) when it should be deployed (if not immediately, via Continuous Deployment).
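A rough sketch of the idea, with hypothetical recipe fields and checks: the server “recipe” is plain data under version control, a change is a change to that recipe, and automated checks gate the change up front rather than relying on a manual audit afterwards.

```python
# Hypothetical server recipe kept under version control.
RECIPE = {
    "base_image": "ubuntu-22.04",
    "packages": ["nginx", "openssl"],
    "app_version": "2.4.1",
}

def validate_recipe(recipe: dict) -> list:
    """Automated checks standing in for manual audit steps.
    (Illustrative rules only.)"""
    errors = []
    if not recipe.get("base_image"):
        errors.append("recipe must pin a base image")
    if "openssl" not in recipe.get("packages", []):
        errors.append("security baseline requires openssl")
    return errors

def propose_change(recipe: dict, updates: dict) -> dict:
    """A change is a new candidate recipe; it becomes deployable only
    if every automated check passes."""
    candidate = {**recipe, **updates}
    problems = validate_recipe(candidate)
    if problems:
        raise ValueError("; ".join(problems))
    return candidate

new_recipe = propose_change(RECIPE, {"app_version": "2.5.0"})
```

With gates like these in place, the CAB no longer re-inspects the technical details; it decides only whether and when the already-validated recipe change goes out.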

This represents a massive shift in thinking. Technology disrupters (think Amazon et al.) follow this kind of idea: don’t change the “thing” itself. Don’t devise a plan to change the server or patch the operating system. Change the procedure that produced the server or configured the operating system, then redeploy, destroying the old “thing” in the process. Doing it this way ensures that you can always re-create the “thing” when needed, for instance when things start to break, allowing for a very quick response. The orientation now, rather than “keep taking a look,” is “go faster.”
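The rebuild-and-replace pattern (often called immutable infrastructure) can be sketched in a few lines; the function names and fields here are invented for illustration:

```python
import uuid

def build_server(recipe: dict) -> dict:
    """Produce a fresh server entirely from the recipe; nothing is
    carried over from any previous instance."""
    return {"id": uuid.uuid4().hex, **recipe}

def redeploy(fleet: list, recipe: dict) -> list:
    """Deploy a change by replacing every old instance with a newly
    built one; the old 'things' are destroyed, never modified in place."""
    return [build_server(recipe) for _ in fleet]

fleet = [build_server({"os": "linux", "app_version": "1.0"})]
fleet = redeploy(fleet, {"os": "linux", "app_version": "1.1"})
# Recovery is the same operation as deployment: rebuild from the recipe.
assert all(server["app_version"] == "1.1" for server in fleet)
```

Because deployment and recovery are the same operation, a broken instance is never repaired; it is simply rebuilt from the known-good recipe.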

The outcome we are looking for here is the ability to increase both throughput and quality control. When you can bring new functionality to bear at a quicker pace, with higher quality, that can set you apart and help you gain competitive advantage.

 

How do you get started?

Any IT organization that is actively evolving toward a DevOps mindset can benefit from this Change Management philosophy. The first step might be a high-level Value Stream Analysis, where you pinpoint control points in the Change process that are broken, meaning they do not reliably reduce risk, and eliminate them.
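A high-level value stream pass can be as simple as tabulating, for each control point, hands-on time versus wait time and what it actually catches. The step names and numbers below are invented for illustration; a control point that adds long waits but catches nothing is a candidate for elimination.

```python
# Illustrative value stream data for a change process:
# (name, hands_on_hours, wait_hours, defects_caught_last_quarter)
steps = [
    ("peer review",       2.0,  4.0, 14),
    ("automated tests",   0.5,  0.0, 31),
    ("CAB approval",      1.0, 72.0,  0),
    ("deployment window", 0.5, 48.0,  1),
]

for name, touch, wait, caught in steps:
    total = touch + wait
    efficiency = touch / total if total else 0.0
    # "Broken" control point: long waits, nothing caught.
    broken = caught == 0 and wait > touch
    flag = "  <- candidate for elimination" if broken else ""
    print(f"{name:18s} efficiency={efficiency:6.1%} caught={caught}{flag}")
```

In this made-up example, the CAB approval step adds three days of waiting while catching no defects, exactly the kind of control point the analysis is meant to surface.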

Let us know if you are interested in exploring changing Change Management in your organization.

 


As a Sr. Transformation Consultant, Kai supports the VMware Advisory Transformation Services (ATS) team. Kai assists his clients in strategizing and transforming their IT organizations into services-focused organizations. Through client assessments of people, process, and technology, Kai and the ATS team develop roadmaps and enhance the processes and procedures required to transform a client’s environment into a Software-Defined Datacenter (SDDC).