When my product management colleagues and I create roadmaps, the process often looks something like this: decide what outcomes we should prioritize, jot down a few features, epics, or action items we might expect to do for each, create a swimlane called “metrics,” and say, “We’ll come back to this later.”
Defining success metrics can be daunting even for experienced product managers, and some scenarios make it harder than others. Consider these situations:
- You’re starting a project from scratch and don’t have baselines yet
- You’re building an internal tool, where clicks and conversions might not carry the same weight as they do in e-commerce
- Your goals are less quantifiable than “Make more money!”
Metrics are an integral part of the PM role. Skipping them is the first and most obvious way to get them wrong. But even for the valiant PMs who take on the challenge of not-so-obvious metric generation, it’s very easy to come up with success criteria that look and sound like good metrics (quantifiable, measurable, objective, time-bound) but are not actually useful. Fortunately, this can be avoided by asking one question:
How do we know we did a good thing?
Let’s look at an example. Recently I was working on an internal app for a client. For anonymity’s sake, I’m going to pretend the client was a music conservatory. One effort we had to undertake was to make historic recordings of student recitals available on the new internal system. In addition to building out the infrastructure to store and catalog these recordings, our team was also responsible for digitizing every recording the conservatory had on audio cassettes and VHS tapes.
We had an outcome that said, “Mary the Music Student can access any recording she wants on the new system.” Time to write some metrics! The client PM I was enabling wrote down, “100% of student recordings are digitized.” That’s a metric, right? It’s quantifiable and measurable. It’s objective, and it tells us we met our outcome.
But telling us we met our outcome is all it does. It’s a circular metric: it shows we did what we said we were going to do. It doesn’t show that we should have done it in the first place!
So I asked: “How do we know we did a good thing?”
The client PM thought for a minute and suggested, “Maybe we can take a survey to see if people like the new way?” Well, that’s better than nothing. But actions speak louder than words, and surveys are ultimately just words.
How do we know we did a good thing?
Now we start brainstorming.
If Mary can find a recording faster, that’s good. “Reduce time to find a recording by 50%.” 50% is a rough estimate at this point, but we can put more thought into what numbers would be meaningful later.
If Mary finds more recordings that are useful to her, that’s good. How would we know? Maybe we can go by how many recordings she clicks on, or how many she listens to more than halfway through… This sounds like an opportunity to pair with design and ask our users some questions. (Hopefully you’ve already discussed user needs. But the process of coming up with metrics is a great way to inspire new research questions.) There’s a technical dimension to this as well. Pair with engineering to figure out what kind of analytics we can get. Metrics are a balanced team effort! For now, we’ll jot down “Increase recordings played at least halfway through per search by 20%.” We can iterate on it later.
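To make that pairing conversation concrete, here’s a minimal sketch of how engineering might compute the halfway-plays-per-search metric from raw analytics events. Everything here is an assumption for illustration: the event types and fields (`search`, `play_progress`, `session_id`, `fraction_played`) are hypothetical placeholders for whatever your analytics pipeline actually emits.

```python
# Hypothetical analytics events. Real pipelines will have their own schema;
# fraction_played is how far through the recording the listener got (0.0-1.0).
events = [
    {"type": "search", "session_id": "s1"},
    {"type": "play_progress", "session_id": "s1", "recording_id": "r42", "fraction_played": 0.8},
    {"type": "play_progress", "session_id": "s1", "recording_id": "r7", "fraction_played": 0.3},
    {"type": "search", "session_id": "s2"},
    {"type": "play_progress", "session_id": "s2", "recording_id": "r42", "fraction_played": 0.55},
]

searches = sum(1 for e in events if e["type"] == "search")

# Count each recording once per session, keeping its furthest playback point,
# so repeated progress events don't inflate the count.
furthest = {}
for e in events:
    if e["type"] == "play_progress":
        key = (e["session_id"], e["recording_id"])
        furthest[key] = max(furthest.get(key, 0.0), e["fraction_played"])

halfway_plays = sum(1 for f in furthest.values() if f >= 0.5)
print(f"Halfway-plays per search: {halfway_plays / searches:.2f}")  # -> 1.00
```

The one design choice worth noting: each recording counts at most once per session, at its furthest playback point, so a listener scrubbing back and forth doesn’t get double-counted.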
Here’s another idea that relates to the physical space of the conservatory. All those shelves of tapes in the library take up a lot of room! It would be a good thing if we were able to free up shelf space for other materials. Let’s go with a metric of “Increase available shelf space in the listening room by 3x.”
What if our software is available in the cloud, so that students don’t have to go to the library to listen to tapes? “100+ non-library IP addresses access the recording search page.”
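Checking that last metric could be as simple as a pass over web server access logs. Here’s a sketch under stated assumptions: a common-log-format server log, a hypothetical `/recordings/search` route, and a made-up library subnet; adjust all three to your environment.

```python
import ipaddress

LIBRARY_NET = ipaddress.ip_network("10.20.30.0/24")  # assumed library subnet
SEARCH_PATH = "/recordings/search"                   # hypothetical route

def count_external_ips(log_lines):
    """Count distinct non-library IPs that requested the search page.

    Assumes common log format: '<ip> - - [timestamp] "GET <path> HTTP/1.1" ...'
    """
    ips = set()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 7:
            continue
        ip, path = parts[0], parts[6]
        if path.startswith(SEARCH_PATH) and ipaddress.ip_address(ip) not in LIBRARY_NET:
            ips.add(ip)
    return len(ips)

sample = [
    '10.20.30.5 - - [01/Oct/2019:10:00:00 -0500] "GET /recordings/search HTTP/1.1" 200 512',
    '192.0.2.44 - - [01/Oct/2019:10:01:00 -0500] "GET /recordings/search?q=bach HTTP/1.1" 200 2048',
]
print(count_external_ips(sample))  # -> 1; the library machine is excluded
```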
Now, this doesn’t guarantee that what we’ve built is a quality-of-life improvement for Mary the Music Student, or Libby the Librarian, or Alex the Alumnus. There may be some nuances that we missed in our user research. Maybe Mary wanted a cassette she could play from her classroom’s boom box, because the room doesn’t have computer speakers. Maybe Libby liked the VHS tapes on the shelves because they looked cool and retro. Or maybe in the process of converting the files, we lost audio quality. If anything is amiss, we hope to catch it with fast feedback loops. But by anchoring our process in this central question, we will know whether or not we did something to make someone’s life better.
What good things are you going to do with your software this release?
Want to know more about the practice of building great software? Register for SpringOne Platform, October 7-10, in Austin. You’ll hear from your peers on cutting-edge methods for building apps users love. These three talks are spot-on for product managers: Making Magic with Balanced Teams, Design Leadership: The X Factor in Digital Transformation, and Four Perspectives, One Product Mindset: A Retrospective into Dell IT's Digital Transformation.
You’ll also want to watch the webinar: Cut the Digital Transformation Fluff: Creating Metrics That Matter. Join this webinar with Jeffrey Hammond, Forrester Research Analyst, and Patrick Feeney, Senior Director of IT Digital Products at Healthcare Services Corporation (HCSC), to learn:
- The types of metrics high-performance software teams are using
- Which metrics are counterproductive to improving software delivery
- How HCSC has used metrics to manage software delivery change
- Which roles should be using which metrics, and when