
Volatility for Product Managers

Historically, if a client asks when a certain set of stories will be completed, we point to the Tracker backlog and show the date. We know that if velocity decreases, the date gets pushed out. That’s great in developer land, but as product managers we need some certainty that features will be delivered by a specific date. You may have marketing campaigns and press releases tied to that date, and you can’t afford to be off even by a day. [1]

Here we’ll walk through how to use Tracker to determine a confident range of dates for your launch. This helps you raise date-mitigation discussions earlier.

tl;dr:

Volatility is a measure of *predictability*. When forecasting milestones, use velocity and/or standard deviation instead. Avoid forecasting too far in the future since it will be inaccurate.

Longer version

Volatility is a measurement of how consistently you repeat a process — inside Tracker, volatility is a reflection of how much your velocity varies. A highly repeatable process will have a lower volatility. If all things are held equal (team size, story complexity) and your team is skilled at estimating, your project volatility will be low, meaning velocity is stable week to week.

But our projects change. A lot. Unknown technical challenges arise. Story estimates aren’t always accurate. You’ll always have volatility. It isn’t an evil thing.

SO WHAT ARE WE TRYING TO ANSWER?

Let’s say your marketing manager wants to know whether the product is on track for their big marketing campaign. Do dates need to shift? If so, by how much?

For you, that means knowing the range of dates within which a pointed set of stories will be completed. This gives you a best and worst case to start discussing with stakeholders. Yes, the beauty of agile is being able to remove features on the fly, and voila – you hit your date. Except when your marketing launch is tied to a specific feature set and you can’t remove anything.

WHY VOLATILITY ISN’T HELPFUL

Volatility is expressed as a percentage, which isn’t helpful when discussing dates. When looking to determine how close the project is to a specific date, you want to express this as a unit of time.

volatility = std dev(weekly velocity) / average(weekly velocity)
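
For reference, here’s a minimal sketch of that calculation in Python (the weekly velocities are made up):

```python
from statistics import mean, stdev

# Hypothetical points completed in each of the last five iterations.
weekly_velocity = [18, 25, 20, 14, 23]

volatility = stdev(weekly_velocity) / mean(weekly_velocity)
print(f"volatility: {volatility:.0%}")  # a percentage, which is hard to translate into dates
```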

WHAT YOU SHOULD BE LOOKING AT

Two things: velocity and standard deviation. Velocity provides you with a date; standard deviation shows the range around that date. Together they give you a best and worst case.

± range (in iterations) = (std dev of velocity * # of remaining iterations) / average velocity
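
As a rough sketch, that formula can be wrapped in a small helper (the function name and arguments here are illustrative, not something Tracker provides):

```python
def forecast_swing_weeks(std_dev, remaining_iterations, avg_velocity):
    """Rough +/- swing, in iterations (weeks), around the backlog's projected date."""
    return (std_dev * remaining_iterations) / avg_velocity
```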

Example 0) A new-ish team has an average velocity of 20 points, and the standard deviation is 5 points. Looking at the backlog, you see that a set of stories is expected to be ready in 4 weeks (20 working days, 80 points).

Given your standard deviation, that’s a swing of 20 points either way, or ±1 week ((5 * 4) / 20 = 1). Meaning, that set of stories could take 3 – 5 weeks.

Example 1) Now take a fully ramped-up team whose estimates more accurately reflect complexity: the standard deviation is now 2 points.

That set of stories is likely to be completed within about 2 working days of the backlog date ((2 * 4) / 20 = 0.4 weeks).
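
Plugging both examples into the hypothetical helper from the sketch above reproduces those numbers (assuming a 5-working-day iteration):

```python
avg_velocity = 20          # points per iteration
remaining_iterations = 4   # the backlog projects roughly 4 more weeks of work

for label, std_dev in (("Example 0", 5), ("Example 1", 2)):
    swing = forecast_swing_weeks(std_dev, remaining_iterations, avg_velocity)
    print(f"{label}: +/- {swing} weeks (~{swing * 5:.0f} working days)")
# Example 0: +/- 1.0 weeks (~5 working days) -> 3 to 5 weeks overall
# Example 1: +/- 0.4 weeks (~2 working days)
```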

NOTE: Be careful with the standard deviation that Tracker reports. Like volatility, it is calculated over the last 10 iterations, so a recent change to the team or process may not be fully reflected in it.
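
If you want a number that reflects only the iterations since such a change, you can compute it yourself; here’s a minimal sketch with made-up velocities:

```python
from statistics import mean, stdev

# Hypothetical velocities, oldest first; suppose a new hire joined six iterations ago.
velocities = [12, 15, 22, 19, 25, 18, 21, 24, 20, 23]
recent = velocities[-6:]

print(f"last 10 iterations: mean {mean(velocities):.1f}, std dev {stdev(velocities):.1f}")
print(f"last 6 iterations:  mean {mean(recent):.1f}, std dev {stdev(recent):.1f}")
```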

TAKING IT FURTHER

The above examples use the spread of your past velocity to predict a best/worst case, but not how confident you are in that range. Volatility can answer that question, but unless your volatility is very low, it isn’t helpful to say “I have 20% confidence that these stories will be delivered within 2-4 weeks of that date”. People will look at you funny.

Instead of talking about confidence intervals, a better (human) approach is to use the standard deviation as above, but when pressed for details, point out the recent changes to the project which could impact that date: new additions to the team, complex 3rd-party integrations, a shift from delivering features to delivering bugs & chores. This resonates better with stakeholders who require a deeper understanding of any date shifts.

For a deeper look at the math behind volatility and confidence intervals, see Ken Mayer’s great post on volatility.

[1] If you are a PM, don’t peg your release milestones to your dev milestones. It’s not all about code: stakeholders will change their minds and marketing will want copy revisions.