How Agile Metrics Get Misused - And How You Can Fix It
Unlock the real purpose of metrics.
Hello 👋, it’s Vibhor. Welcome to the 🔥 paid-member-only 🔥 edition of Winning Strategy: a newsletter focused on enhancing product, process, team, and career performance. Many subscribers expense this newsletter to their Learning & Development budget. Here’s an expense template to send to your manager.
3 years ago, I watched a development team fall apart because of a single metric.
The CTO had implemented a "bugs resolved per sprint" target to improve code quality.
Seems reasonable, right?
Within 2 months:
developers started logging minor issues as separate bugs to hit their numbers
the team stopped pair programming (it reduced individual bug counts)
code reviews became superficial
and worst of all, developers avoided taking on complex features that might introduce bugs
The metrics looked fantastic. Bug resolution was up 500%.
But the actual product quality? It went down the drain!
By focusing on what was easy to measure, we'd created exactly the wrong behaviours. The metric had become more important than the mission.
This isn’t an isolated incident. I've seen similar patterns play out across many different domains:
sales teams hitting quota by signing bad-fit customers
support teams closing tickets prematurely to improve resolution times
etc. etc. etc.
Here's the thing:
Metrics are not inherently bad. But they're like powerful medications - if misused, they can do more harm than good.
In this post, I'll share several ways to ensure metrics help your team instead of harming it.
But first — why do metrics go bad?
Why do metrics go bad?
Metrics go bad when they’re asked to do too much.
Here’s what I mean: Metrics are often overloaded with multiple purposes. They’re used as targets, as performance measures, and as indicators of best practices — all at the same time.
For example:
A team’s “story points per sprint” becomes:
A target (“We need 40 points next sprint!”)
A performance score (“Why is your velocity lower than others?”)
A best practice indicator (“High velocity = agile maturity!”)
But there's a deeper problem.
When metrics become targets, people optimize for the number rather than the outcome. I call this the "speedometer effect": if you judged a journey's success purely by the average speed shown on your speedometer, would you:
take unnecessary detours on faster roads?
avoid important stops?
maybe even risk an accident?
That's exactly what happens in our teams.
We fall into what systems thinkers call "single-loop learning": we keep pushing the same levers harder without asking whether they're the right levers in the first place.
Let me give you an example:
A team I worked with was measured on story points completed per sprint. I observed:
stories getting artificially split to show "more points"
the team padding estimates to show "improvement"
complex but valuable technical debt items getting pushed aside
The team was hitting all their metrics targets. Leadership was happy. But they were optimizing for the wrong thing.
There was no progress toward improving the value delivered to the business or the end user.
The lesson we learnt:
When metrics drive behaviour instead of informing it, they become a ceiling rather than a floor. Teams start to optimize for what's measured, not what matters.
The pattern: Measure → Judge → Punish
It creates survival behaviours, not innovation.
Thankfully, there is a right way to use metrics. In fact, there are several.
#1 — Clearly separate the measure from its purpose
Here's a question I ask every team I coach: "Why are we measuring this?"
The responses are often revealing:
"Because Jira tracks it automatically"
"It's an industry standard"
"Stakeholders want to see it"
I'm not saying there's anything wrong with these answers (even though some people will find them completely unacceptable).
But! Here’s the thing:
None of these answers explain how the metric helps the team deliver “better value.”
Let's do a simple exercise I use with teams. Take any metric you're currently tracking and complete this sentence:
"We measure X because it helps us _______."
For example:
"We measure velocity because that's what Agile teams do."
"We measure velocity because it helps us understand our delivery patterns and make reliable commitments."
Do you see the difference?
When you separate the “measure” from its “purpose,” two things become immediately clear:
whether the metric actually serves your goals
whether there are better ways to measure what you care about
Here’s what I suggest:
For each metric your team tracks, ask:
what do we measure?
why do we measure it?
how do we expect it to influence decisions?
what behaviours might it accidentally encourage?
For example, take the Sprint Burndown Chart (sketched in code below):
What: Remaining work vs time in sprint
Why: To identify delivery risks early
Expected Use: Daily discussions about obstacles and adjustments
Potential Misuse: Teams rushing work or reducing quality to "stay on the line"
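If you want to make these four answers explicit, one lightweight option is to write them down as a structured record per metric. Here's a minimal sketch in Python; the `MetricCharter` name, its fields, and the example content are just illustrations of the idea, not a standard tool:

```python
from dataclasses import dataclass

@dataclass
class MetricCharter:
    """One record per metric: the four questions, answered in writing."""
    name: str
    what: str               # what do we measure?
    why: str                # why do we measure it?
    expected_use: str       # how do we expect it to influence decisions?
    potential_misuse: str   # what behaviours might it accidentally encourage?

# The Sprint Burndown Chart example from above, written as a charter
burndown = MetricCharter(
    name="Sprint Burndown Chart",
    what="Remaining work vs. time in sprint",
    why="To identify delivery risks early",
    expected_use="Daily discussions about obstacles and adjustments",
    potential_misuse="Rushing work or cutting quality to 'stay on the line'",
)

def metrics_without_purpose(charters: list[MetricCharter]) -> list[str]:
    """Flag any metric whose 'why' was never filled in."""
    return [c.name for c in charters if not c.why.strip()]
```

The point isn't the code; it's that a metric whose "why" you can't fill in probably shouldn't be on your dashboard.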
With this clarity, you’ll be able to catch misalignment early, prevent metric abuse, focus on outcomes and not numbers, and have better conversations about what success looks like.
A metric without a clear purpose is just noise. And in Agile, we're all about reducing noise and focusing on what matters.
#2 — Involve the people closest to the work in defining metrics and targets
Metrics are only valuable if they reflect the reality of the work. And who understands the work better than the people doing it?
Too often, metrics are imposed from the top down.
Leadership or external stakeholders decide what to measure without consulting the developers, testers, and designers who live the process every day.
The result?
Metrics that feel irrelevant, disconnected, or even counterproductive to the team.
When teams have no say in defining metrics, they — DISENGAGE.
Worse, they may start gaming the system just to avoid scrutiny.
Here’s an example:
I once observed a team where leadership introduced a “cycle time” metric to measure efficiency. On the surface, this made sense (to them, at least).
But the team lead later pointed out two issues:
the metric didn’t account for dependencies outside the team’s control (e.g. waiting on third-party approvals)
developers started rushing tasks to reduce cycle time, sacrificing quality
If the team had been involved in designing the metric, they could have flagged these issues early and proposed a better approach — like separating internal delays from external ones.
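One way a team could separate internal delays from external ones is to tag each waiting period on a work item and report the two totals side by side. A minimal sketch, assuming an item's history can be broken into tagged intervals (the `Interval` type and the example dates are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Interval:
    """One segment of a work item's history."""
    start: datetime
    end: datetime
    external: bool  # True while waiting on something outside the team

    @property
    def duration(self) -> timedelta:
        return self.end - self.start

def split_cycle_time(intervals: list[Interval]) -> tuple[timedelta, timedelta]:
    """Return (internal_time, external_wait) for one work item."""
    internal = sum((i.duration for i in intervals if not i.external), timedelta())
    external = sum((i.duration for i in intervals if i.external), timedelta())
    return internal, external

# Example: a task blocked for three days on a third-party approval
history = [
    Interval(datetime(2024, 3, 4), datetime(2024, 3, 5), external=False),  # dev work
    Interval(datetime(2024, 3, 5), datetime(2024, 3, 8), external=True),   # approval wait
    Interval(datetime(2024, 3, 8), datetime(2024, 3, 9), external=False),  # finish + review
]
internal, external = split_cycle_time(history)
print(f"Internal: {internal} | External wait: {external}")
# Internal: 2 days, 0:00:00 | External wait: 3 days, 0:00:00
```

With this split, a long cycle time caused by a third-party approval shows up as external wait, so the team isn't pushed to rush its own work to compensate.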
Why does the team’s involvement matter?
When teams help define metrics, three things happen:
Metrics become relevant: Teams ensure the measures reflect what truly drives success
Buy-in increases: People are more motivated to improve metrics they helped create
Collaboration improves: Teams and leadership align on what “success” really looks like