
Measure Actuals

What we measure is what our teams will deliver, so instead of measuring one of the countless secondary metrics, measure what we actually want: measure shippable, working software.

Pain Points

  • Estimates are never met; executives think the team is always behind
  • Estimates are met but the software doesn’t deliver results as promised
  • Measure bug fixes: teams break up larger bug reports into multiple smaller ones so it appears like they’ve fixed more bugs
  • Measure bug fixes: teams focus on bugs and neglect new features
  • Measure lines of code: teams adopt longer, more verbose coding styles
  • Measure hours of time at work: teams arrive later in the morning, so they can stay late and offer the appearance of longer hours while actually working less

Benefits

  • When you measure completed code, you communicate clearly to the team what you value and want from them
  • By not measuring lots of secondary metrics, you don’t confuse the teams with mixed messages
  • Collecting and analyzing metrics consumes significant resources; by eliminating most of those activities, you free up time and money

What Does It Look Like?

When teams measure points, then point inflation usually kicks in and the team inflates point estimates to present the appearance of more productivity. Managers, for lack of anything else to use, start comparing team productivity based on points. The concept of points, created to avoid these situations, has come full circle and is often abused.

Instead of measuring something like points, which are a proxy for real work, let’s measure the work. Instead of basing plans on estimates, which often consume huge amounts of time that could be spent producing working software, and are universally wrong, let’s base our forecasts on the team’s history, not wishful thinking. Hope is not a strategy.

A release contains features. Features contain stories. Teams working within an iteration complete stories. When bugs are discovered, those are stories too. “Non-functional requirements” (such as infrastructure plumbing, security, internationalization, transaction throughput, etc.) are also stories. So any given release contains a set number of features (although the number of stories may vary as you go along).
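The hierarchy above can be sketched as a simple data model. This is an illustrative sketch, not part of the GROWS Method itself; the class and field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    done: bool = False  # bugs and "non-functional" work are stories too

@dataclass
class Feature:
    name: str
    stories: list = field(default_factory=list)

    def is_working(self) -> bool:
        # A feature counts as working only when every story in it is complete
        return bool(self.stories) and all(s.done for s in self.stories)

@dataclass
class Release:
    features: list = field(default_factory=list)

    def working_count(self) -> int:
        # The one number worth measuring: features that are actually done
        return sum(1 for f in self.features if f.is_working())
```

Note that the measurement is binary at the feature level: a feature with nine of ten stories done is not working software yet.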

If the release contains 17 features, how many of those are working? If we’ve used 90% of our time and completed 3 features, we can cleverly deduce that the release isn’t going to be done on time. We can also draw the same conclusion when only 3 features are complete at the 50% mark. With practice, we can see troubling trends very early in a project’s timeline.
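The arithmetic above can be made explicit: project the final feature count from the completion rate so far and compare it to the scope. A minimal sketch, with an assumed linear projection:

```python
def on_track(features_total, features_done, time_used_fraction):
    """Project the final feature count from the rate of completion so far."""
    if time_used_fraction == 0:
        return True  # nothing to judge yet
    projected = features_done / time_used_fraction
    return projected >= features_total

# 3 of 17 features working at the 90% mark: clearly not finishing on time
print(on_track(17, 3, 0.9))  # False
# 3 of 17 at the 50% mark projects to only ~6 features: also off track
print(on_track(17, 3, 0.5))  # False
```

The point is not the formula but the input: completed, working features, drawn from the team's actual history rather than estimates.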

A burn-up chart does an excellent job of allowing us to visualize the amount of work in the release, how much of it is completed, and if we’re trending to an ontime release. Using a burn-up has the additional advantage of highlighting scope creep and scope flux.
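A burn-up chart needs only two numbers per iteration: total scope and completed work. A text-only sketch (the data is made up for illustration):

```python
def burn_up_rows(history):
    """Render each iteration as done (#) vs remaining (.) out of total scope."""
    rows = []
    for i, (scope, done) in enumerate(history, start=1):
        rows.append(f"Iter {i}: " + "#" * done + "." * (scope - done) + f" {done}/{scope}")
    return rows

# Scope growing from 17 to 20 features makes scope creep visible at a glance,
# while the slow growth of the '#' bar shows the actual rate of delivery.
history = [(17, 0), (17, 1), (18, 2), (20, 3), (20, 5)]
print("\n".join(burn_up_rows(history)))
```

When the scope line climbs faster than the completed line, the chart is telling you about scope creep before the deadline does.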

If your goal is shipping working software, then measure that directly. Weeks or hours of effort spent is a proxy, not the actual goal.

If you want to know how fast the team can go, then measure that directly. Treat a team’s speed as roughly constant; once known, it changes only slowly, over time, with increased training and experience.

For a management perspective, see the Progress Management practice for executives.

Warning Signs

  • You’re measuring multiple metrics and the teams pay attention to the one that’s easiest to game
  • You measure working software, but don’t tell anyone… everyone continues working as before, blissfully unaware
  • No working software metrics are on the wall
  • Working software isn’t mentioned or discussed at iteration ceremonies
  • Teams constantly assure you that even though they haven’t completed a single feature yet, they’re doing lots of plumbing and underlying work… just be patient and they’ll catch up later

To be clear, plumbing and underlying work is necessary, but it should be expressed as a feature for scheduling, regardless of whether it’s a “user-facing” feature or not. Work is work.

How To Fail Spectacularly

  • You set the date, the features, and the teams for a release, then are upset that the teams can’t meet the unrealistic dates or complete the huge pile of features with the staff you’ve provided
  • You see that the teams are on track, or ahead of schedule, so you heap on piles of additional work so they’ll feel challenged and work harder
  • Constantly add new work to a release, then punish the teams for not being able to keep up with your ideas
  • Measure the team’s progress and early on see that they can’t meet the date. Yell, cajole, and pressure them to work harder, more hours, etc. Continue to flog the team until everyone is miserable. Then be surprised when everyone becomes burned out and quits.
