
Metrics: The Fourth Secret of Successful Digital Transformers

As a financial analyst, I lived and died by metrics. Words were there to qualify the numbers. All the action happened in carefully maintained spreadsheets. These were the star charts by which we reasoned through a business' momentum and health.

In the financial world, no one questions the importance of metrics. Accounting metrics are the lingua franca of business health. But as businesses are transforming, there's a lot happening under the surface. Measuring digital transformation is not governed by generally accepted accounting principles (GAAP). So, how do we measure that progress?

After I wrote about the "secrets" of companies using Cloud Foundry to increase release velocity and reduce time to market, I had an epiphany: there was a fourth secret staring me in the face. It was the metrics themselves. The companies in a position to boast about their results were the ones measuring them in the first place.

Business metrics or IT metrics?

With metrics on my mind, I attended a session on Digital Business KPIs at Gartner Symposium Barcelona. The presenter, Paul Proctor, urged attendees to connect to core business metrics. For example, measure the percentage of revenue (a business metric) that comes from digital channels.

This absolutely makes sense in terms of measuring outcomes. But outcomes alone miss something: what's the value of learning and experimentation, even if it ends in failure? Put another way, if you only fund projects that have a clear line of sight to meaningful revenue contribution (or some other business outcome), two things are going wrong:

  • You are spending too much time building individual business cases

  • You are not spreading your risk across a portfolio of possibilities

Don't get me wrong: vetting for a business case is important. Venture capitalists vet every investment, but they spread their risk across several companies. Then they double down on the investments that catch fire.

Business outcomes are important to measure, but on their own they are insufficient. They don't measure agility. How do we measure becoming more nimble from a digital perspective? In a different session at Symposium, Adrian Cockcroft offered a decisive suggestion: count the number of steps and meetings it takes to get software into production.

Why the emphasis on reducing the number of steps and meetings it takes to get software shipped? Those steps and meetings are a source of process waste that inhibits speedy release cycles. And shortened release cycles are one of the superpowers of digital transformation.

Measuring speed

Why are shorter release cycles so important? They de-risk software development. Not because the releases themselves are smaller (although that is usually true, too), but because they allow you to course-correct. Being able to make changes quickly means you can fix bugs and roll back features with ease.

Take Liberty Mutual, for example: they launched a minimum viable product for the motorcycle insurance market in 28 days. Then they pushed 45 updates into production in 55 days. During that time, they learned and adjusted for real user behavior patterns.

At a minimum, you can begin to measure deployment frequency. Scotiabank recently shared that in ten months, they’ve accelerated to over three thousand deploys per month. Verizon shared a slew of metrics related to faster time to market.
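To make "begin to measure deployment frequency" concrete, here is a minimal sketch of tallying deploys per month from a simple log of deploy timestamps. The log format and sample dates are assumptions for illustration; in practice the data might come from your CI/CD system or platform audit events.

```python
from collections import Counter
from datetime import datetime

# Hypothetical deploy log: one ISO-8601 timestamp per production deploy.
deploy_timestamps = [
    "2017-10-03T14:22:00",
    "2017-10-17T09:05:00",
    "2017-11-02T16:40:00",
]

def deploys_per_month(timestamps):
    """Count production deploys per calendar month."""
    months = Counter()
    for ts in timestamps:
        dt = datetime.fromisoformat(ts)
        months[(dt.year, dt.month)] += 1
    return dict(sorted(months.items()))

for (year, month), count in deploys_per_month(deploy_timestamps).items():
    print(f"{year}-{month:02d}: {count} deploys")
```

Even a crude count like this, tracked over time, shows whether release friction is going up or down.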

Speed of deployment is also valuable from a security perspective. Besides seeing a 1,400% increase in deployment frequency, CSAA Insurance recently cited a 1,614% increase in patch frequency. Translating increased patch frequency into a GAAP-based business metric is difficult. Yet mitigating cybersecurity risk is a boardroom-level concern at any enterprise. Patch frequency is worth measuring, and it's a function of how frictionless you've made software deployment.

Exiting the firefighting business

The flip side of going faster is crashing less often. A stable environment is more conducive to frequent releases: teams can focus on releasing, not fighting fires.

But reducing downtime is more fundamental than that. As I learned interviewing Pivotal's Mark Ruesink on a recent podcast, the firefighting required to maintain availability is a huge source of waste in an IT group. Before you get to the sexy "time to market" numbers, there's a huge opportunity to shore up your uptime operations.

Successful Cloud Foundry adopters measure availability metrics, just like cloud service companies do. Comcast has published six metrics around resiliency, including a 47% reduction in mean time to resolution (MTTR) and a 44% reduction in incident frequency. They group these metrics under the heading "run the business".

Comcast's grouping of metrics is telling. Running the business more efficiently is a key yardstick of digital transformation. It can translate into cost reduction, but more importantly, it's a core competency for digital operations.

Where to start

In order to measure progress, you first have to know the ugly truth of where you currently stand. No single metric is reliable enough on its own, but you don't want to make the process too cumbersome either. Focus on five to ten metrics that are easy to collect and easy to understand.

There's value in having basic, technical "vital signs": metrics like release frequency, availability, and MTTR. There's also a lot of merit in having a couple of higher-level business metrics. Measuring revenue from digital channels or developer productivity will pay dividends (pun intended).
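As a rough sketch of those vital signs, the snippet below computes MTTR and availability from a list of incident start/end times over a fixed window. The incident records and 30-day window are assumptions for illustration, not any particular monitoring tool's output.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (start, end) of each production outage.
incidents = [
    ("2017-11-04T02:10:00", "2017-11-04T02:55:00"),
    ("2017-11-19T13:00:00", "2017-11-19T13:20:00"),
]

window = timedelta(days=30)  # measurement window, e.g. the last month

# Duration of each outage.
durations = [
    datetime.fromisoformat(end) - datetime.fromisoformat(start)
    for start, end in incidents
]

total_downtime = sum(durations, timedelta())
mttr = total_downtime / len(durations) if durations else timedelta()
availability = 1 - total_downtime / window

print(f"Incidents:    {len(durations)}")
print(f"MTTR:         {mttr}")
print(f"Availability: {availability:.4%}")
```

The point isn't the arithmetic; it's that these numbers are cheap to collect and easy for everyone, from operators to executives, to understand.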

Finally, communicating the key metrics is essential. Think of workplace safety campaigns that put "Number of Days Since An Accident" in prime view. Don't assume that only executives and leaders want to see KPIs: use them to rally developers and attract them to a new way of working.

For a live and interactive discussion of patterns from Cloud Foundry users, join me on January 11 for a webinar on the Secrets of Successful Digital Transformers.