
Using metrics in reporting

Difficulty: Basic
Agile IQ® Level: Stage 13
Tags: Metrics, Reporting

Introduction

Principle 7 of the Agile Manifesto reminds teams that “working software is the primary measure of progress”. Many people, though, are accustomed to project management reporting and expect reports on activity and milestones. Both of these provide only hindsight:

  • What have we done?
  • What did we achieve?
  • Are we compliant?

Velocity

Velocity is the most widely used metric for agile teams. It is most useful in planning: it helps a team understand its capacity and work out how much work to take on.

When managers use it as a measure of productivity and ask a team to “increase their velocity”, teams tend to respond by simply inflating the Story Points they assign to Product Backlog items.

Be careful how you use this metric.

What is it?

  • How much of the Product Backlog the team can turn into an Increment of Done. Many teams measure this in Story Points.
  • Within the Sprint, velocity drives the Sprint Burndown Chart, which shows progress toward the Sprint Goal.
  • Between Sprints, velocity drives the Product Burndown Chart, which shows likely completion dates and the increase or decrease of the Product Backlog over time.
Above: Burndown chart showing items Done over the Sprint
Above: Velocity over time

What it’s not

  • Effort points. Just because a team feels it has done half the work doesn’t mean it gets half the Story Points. If the work doesn’t meet the Definition of Done, it doesn’t count toward velocity.

Good for

  • Burndown charts. They tell you how the team is tracking this Sprint.
  • The team understanding its own immediate past capacity to determine how much it can do in the future.
  • The Product Owner to use to help with roadmaps and release plans. If the team can turn 10 points of the Product Backlog into a Done Increment every Sprint, and there are 100 points of Product Backlog items, the Product Owner would forecast everything could be done in 10 Sprints (see the sketch below this list).
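
A minimal sketch of that forecast arithmetic in Python (the function name forecast_sprints and the sample velocities are illustrative, not from any particular tool):

    from math import ceil

    def forecast_sprints(recent_velocities, remaining_points):
        # Average velocity over the last few Sprints
        average_velocity = sum(recent_velocities) / len(recent_velocities)
        # Round up: a partially used Sprint still costs a whole Sprint
        return ceil(remaining_points / average_velocity)

    # A team averaging 10 points per Sprint with 100 points remaining
    print(forecast_sprints([9, 11, 10], 100))  # -> 10 Sprints

Keep the caveat below in mind: the further out this projection runs, the less reliable it becomes.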

Don’t use it for

  • Comparing teams.
  • Producing long-term forecasts (especially if roadmaps or plans are never or rarely updated). Natural variability means that the longer the forecast the more inaccurate it will be.
  • Tasks. Only the Story Points on Product Backlog items count toward velocity, not task-level estimates.
  • Hours. Given there’s a fixed number of hours in a Sprint and the team works at a sustainable pace, burning down hours is about as useful as asking “how many days are left in the Sprint?”

% Work not Done

What is it?

  • The proportion of Product Backlog items started in a Sprint that do not meet the Definition of Done by the end of the Sprint.
  • Excludes work that changes based on adjustments made at the Daily Scrum.
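
As a rough sketch of the calculation, assuming the team counts Story Points (counting items works just as well):

    def percent_work_not_done(points_started, points_done):
        # Share of the Sprint's started work that did not meet the Definition of Done
        if points_started == 0:
            return 0.0
        return 100 * (points_started - points_done) / points_started

    # 40 points started, 30 Done by the end of the Sprint -> 25.0
    print(percent_work_not_done(40, 30))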

Good for

  • Understanding whether the team is taking on too much work.
  • Understanding the ratio of work that the team can turn into value and release to users and stakeholders.
  • Estimating the amount of work likely to “spill” into later Sprints, delaying the realisation of value by users and stakeholders.
  • Understanding whether the team is Waterfalling their Sprints.

Don’t use it for

  • Punishing a team. It’s an opportunity to discover why this is occurring and help them to adjust their capacity planning.
Above: % work not done over time

% Decrease in defects

What is it?

  • How many bugs, errors and defects in total have not been able to be fixed and have been recorded in the Product Backlog?
  • Is this number increasing or decreasing, and by how much?
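
One way to compute the trend, as an illustrative sketch (counting open defects recorded in the Product Backlog at the end of each Sprint):

    def percent_decrease_in_defects(previous_open, current_open):
        # Positive means improving; negative means open defects are growing
        return 100 * (previous_open - current_open) / previous_open

    # 50 open defects last Sprint, 40 this Sprint -> 20.0% decrease
    print(percent_decrease_in_defects(50, 40))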

Good for

  • Understanding the reliability of the product.
  • Understanding the likely burden of technical debt of the product.
  • Understanding the quality of the work done by the team. Are they delivering to the Definition of Done? Should we tighten the Definition of Done so that fewer bugs and less rework get through?

Don’t use it for

  • Punishing a team. Use the Retrospective to find out why a team has quality issues so you can address them.
Above: Defects, bugs over time

Ratio successful to failed deployments

What is it?

  • If the product was deployed but not released, or failed to attract users, this is recorded as a failed deployment. Sometimes a failed deployment is due to the decisions of one of the stakeholders, or the business model proves to be unreliable.
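
A sketch of one common formulation, expressing successful deployments as a share of all deployments in the period (the name deployment_success_rate is illustrative):

    def deployment_success_rate(successful, failed):
        # Share of all deployments in the period that succeeded
        total = successful + failed
        return successful / total if total else 0.0

    # 18 successful releases, 2 failed -> 0.9
    print(deployment_success_rate(18, 2))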

Good for

  • Understanding the quality of the product.
  • Understanding the size of the opportunities to simplify the product and reduce complexity.
  • The opportunity to change the Definition of Done to reduce the likelihood of future errors.

Don’t use it for

  • Comparing teams. If teams are still meeting their Definition of Done, the problem may be due to the product’s complexity or legacy. The metric is often “gamed” by teams because “bigger is better”: they just end up inflating their estimates and Story Points.

Mean time to repair

What is it?

  • The average time required to troubleshoot and repair failed equipment.
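
As a minimal sketch, assuming each incident’s repair duration has been recorded:

    from datetime import timedelta

    def mean_time_to_repair(repair_durations):
        # Average time from failure to restored service
        return sum(repair_durations, timedelta()) / len(repair_durations)

    incidents = [timedelta(hours=2), timedelta(hours=5), timedelta(minutes=30)]
    print(mean_time_to_repair(incidents))  # -> 2:30:00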

Good for

  • Investigating the value and performance of assets so an organisation can make smarter decisions about asset management.
  • Understanding how quickly a team can respond to unplanned breakdowns and repair them.
  • Helping to ensure a team’s preventive maintenance program and tasks are as effective and efficient as possible.
  • Providing a gateway into the root cause of a problem and a path to a solution.

Don’t use it for

  • A vanity metric. It’s designed to help assess efficiency and eliminate redundancies, roadblocks, and confusion in maintenance so a business can avoid needless downtime and get back to what it does best.

Decrease in Lead Time

What is it?

  • The time it takes from when an item enters the Product Backlog till when it is Done.
  • If there are upstream decisions to define and approve work (e.g. a large feature), then that time is also included in the total Lead Time.
Above: What is lead time?
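
A sketch of both measures from item timestamps (the dates are made up; most tools record these automatically):

    from datetime import date

    def lead_time_days(entered_backlog, done):
        # Full wait: from entering the Product Backlog until Done
        return (done - entered_backlog).days

    def cycle_time_days(started, done):
        # Only the working part: from "In Progress" until Done
        return (done - started).days

    created, started, done = date(2023, 3, 1), date(2023, 3, 20), date(2023, 3, 27)
    print(lead_time_days(created, done))   # -> 26
    print(cycle_time_days(started, done))  # -> 7

The gap between the two numbers is time spent waiting in the backlog, which is often the larger share of the total.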

Good for

  • Understanding time to deliver value to stakeholders and users.

Don’t use it for

  • Reporting a bare number. Report what it was, what it is now, and what that means for when stakeholders and users get value.
  • Ignoring work in progress. Lead Time and Cycle Time are both impacted by the number of items (work in progress) in each step at once.
Above: Lead time chart in Azure DevOps

Decrease in Cycle Time

What is it?

  • The time it takes for a Backlog Item to get from “In Progress” to “Done” by a team.

Good for

  • Understanding how long work takes to get into the hands of users once the team starts work on it.

Don’t use it for

  • The full end-to-end picture. Cycle Time reflects only one step in the bigger chain of events that delivers value; use Lead Time to understand how long people actually have to wait.
  • Comparing periods with different workloads. Cycle Time will be impacted by the number of items in each step at once.

Improvement in Team morale

What is it?

  • Return On Time Invested (ROTI) is an assessment by the team regarding how they feel about their investment of time with the team and at different points in the Sprint cycle.
  • Team members are asked to rate on a scale of 1-5 how they feel about their investment of time. 1=”I’d rather watch paint dry” and 5=”I got so much out of it!”. Team members are then asked what would be one thing in their power to influence that would increase their score by 1 point.
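
Scoring is simple averaging; as an illustrative sketch:

    def average_roti(scores):
        # Mean of the 1-5 ratings described above
        return sum(scores) / len(scores)

    print(average_roti([4, 3, 5, 2, 4]))  # -> 3.6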

Good for

  • Getting a pulse on team mental health, morale and motivation.
  • Retrospectives.

Don’t use it for

  • Team happiness. The easiest way to make a team happy is to tell them they don’t have to do agile (they can do what they want). 

Usage Index

What is it?

  • Measurement of usage, by feature, to help infer the degree to which customers find the product useful and whether actual usage meets expectations of how long users should take with a feature.
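
One illustrative way to express it, assuming you can compare actual usage against an expected baseline per feature (the feature names and figures are made up):

    def usage_index(actual, expected):
        # 1.0 means usage meets expectations; well below 1.0 suggests under-use
        return actual / expected

    feature_sessions = {"checkout": (450, 500), "wishlist": (60, 400)}
    for feature, (actual, expected) in feature_sessions.items():
        print(feature, round(usage_index(actual, expected), 2))
    # checkout 0.9, wishlist 0.15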

Good for

  • Understanding whether users value the work of the team.

Don’t use it for

  • Punishing teams. A good Product Owner should use this metric to redirect the team’s energy into areas that will be used.

Product Cost Ratio

What is it?

  • How much a feature costs to produce and maintain compared to how much it is used.
  • From a purely economic perspective: total expenses and costs for the product(s)/system(s) being measured, including operational costs, compared to revenue.
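
A minimal sketch of the economic view (the figures are made up):

    def product_cost_ratio(total_costs, revenue):
        # Everything spent to build and run the product, relative to what it earns
        return total_costs / revenue

    # $400k a year to build and operate, earning $1M -> 0.4
    print(product_cost_ratio(400_000, 1_000_000))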

Good for

  • Understanding how much the team is putting into creating features and value compared to how much their work is used.
  • Understanding how much the team really costs.

Don’t use it for

  • Punishing teams. A good Product Owner should use this metric to redirect the team’s energy into areas that will be used.