
Measure Actuals

Measure what your teams deliver: working, tested features that are ready to ship. Don’t use a secondary or proxy metric, as people are prone to focus on the proxy instead of the underlying value.

Pain Points

  • Executives and managers rely on estimates that are never met, giving the impression the team is always behind
  • Proxies being measured are met, but the software doesn’t deliver expected value
  • Lines of code are tracked, so teams start adopting longer, more verbose coding styles (see the relevant Dilbert cartoon)
  • Tracking developer time at work causes people to stay late but makes them less productive and less motivated
  • Teams focus exclusively on either features or bugs, resulting in more bugs or fewer features, respectively
  • Item counts are measured (similar to lines of code), so teams break larger items into multiple smaller items to give the appearance of fixing more
  • Teams skip priority work items in order to deliver items with higher proxy values than the priority item
  • Schedules and budgets are on plan but perceived value by the user is less than expected

Benefits

  • When you measure working, tested features ready to ship, you communicate what holds value to the team
  • By avoiding proxies, you don’t confuse the teams with mixed messages and allow their focus to remain on what’s most important
  • Collecting and analyzing metrics costs time and money, and estimates and proxy metrics can be big money sinks. By eliminating these extraneous activities, you avoid wasting both

Assessment

  • You measure prioritized, working, tested features ready to ship (200pts)
  • You measure working, tested features ready to ship but also measure one or more proxies (50pts)
  • You measure a proxy such as an arbitrary point system for features and not working, tested features ready to ship (-50pts)
  • You measure more than one proxy for features (-200pts)

Application

✓ Critical ❑ Helpful ❑ Experimental

Adoption Experiment

Overview of steps to first adopt this practice. Please read detailed explanation below for important information.

Setup

  1. The most difficult part of this practice is figuring out what’s really important and what is an actual vs. what is a proxy. Start here and spend the time required to figure out what really needs to be measured
  2. Be diligent about eliminating measures that are not actuals and focus on only a very small number of items to measure

Trial

  1. Capture data from tracking the measurement long enough to establish a trend

Evaluate Feedback

  1. If this truly is an actual you are measuring, it should provide meaningful data regarding your performance. Does it? If not, consider adjusting your measure

What Does it Look like?

When teams measure proxies such as points, there is a tendency to inflate point estimates which creates the illusion of more productivity. Then managers, for lack of a better measure, start comparing team productivity based on this proxy. Using “points” to represent size or complexity was actually created to avoid this situation. However, it often ends up abused.

Instead of measuring proxies like “points”, measure the real work. Instead of basing plans on estimates, which are universally wrong and often consume huge amounts of time that could be spent producing working software, base your forecasts on the team’s history.

At its core, this practice relies on a burn-up chart that tracks the number of working, tested features that are ready to ship. This doesn’t mean that if we complete 90% of 4 features we get 3.6 (0.90 * 4) added to the burn-up. It means we get 0, because we have 0 working, tested features ready to ship. There is no such thing as partial credit: a feature is working, tested, and ready to ship, or it isn’t. If, however, we complete 2 working, tested features ready to ship, we get 2. A burn-up chart does an excellent job of allowing us to visualize the amount of work in the release, how much of it is completed, and how work or business value is trending.
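
To make the “no partial credit” rule concrete, here is a minimal sketch in Python. The Feature fields and the feature names are illustrative assumptions, not part of the practice; the point is that a data point on the burn-up chart counts only features that satisfy all three conditions.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    working: bool        # implemented and integrated
    tested: bool         # passes its acceptance tests
    ready_to_ship: bool  # deployable as-is, no known blockers

def shippable_count(features):
    """Count only features that are fully working, tested, and ready to ship.
    A 90%-done feature contributes exactly 0 -- no partial credit."""
    return sum(1 for f in features
               if f.working and f.tested and f.ready_to_ship)

# A release of 4 features where only 2 are truly done:
release = [
    Feature("export to CSV", True, True, True),
    Feature("login throttling", True, True, True),
    Feature("audit log", True, False, False),        # coded but untested: counts as 0
    Feature("single sign-on", False, False, False),
]
print(shippable_count(release))  # -> 2, the value plotted on the burn-up chart
```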

Using the trend by development timebox gives you a good indication of how fast a particular team can go. However, caution is needed as features may be of varying size and complexity. This requires a good sample size of data prior to using a trend as an indicator of capacity.

If your goal is shipping working software, then measure that directly. Weeks or hours of effort spent is a proxy, not the actual goal. If you want to know how fast the team can go, then measure that directly. Consider a team’s speed (working, tested features per unit of time) a constant; once known, it can only change slowly, over time, with increased training, experience, and improved workflow.
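
As a rough sketch of forecasting from history rather than estimates, the snippet below computes team speed as working, tested features per timebox and projects the remaining work from it. The timebox counts are made up, and, per the caveat above, a history this short is really too small a sample to trust.

```python
from statistics import mean

# Hypothetical history: working, tested, shippable features completed per timebox
completed_per_timebox = [3, 2, 4, 3, 2, 3]

def throughput(history):
    """Team speed in features per timebox, taken from actual history.
    Treat it as roughly constant; it shifts only slowly with training,
    experience, and improved workflow."""
    return mean(history)

def timeboxes_remaining(features_left, history):
    """Forecast from the team's own history instead of up-front estimates."""
    return features_left / throughput(history)

print(round(throughput(completed_per_timebox), 1))               # 2.8 features/timebox
print(round(timeboxes_remaining(14, completed_per_timebox), 1))  # ~4.9 timeboxes left
```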

For a management perspective, see the CapabilityPlanning Practice for executives.

Warning Signs

  • You’re measuring multiple metrics and the teams pay attention to the one that’s easiest to game
  • You’re measuring anything other than working, tested features ready to ship
  • You measure working, tested features ready to ship, but the measure isn’t transparent to the teams or stakeholders
  • Features are being demonstrated during a development timebox and declared to be working, tested features ready to ship but a high number of defects are being reported once shipped
  • Teams constantly assure you that even though they haven’t completed a single feature yet, they’re doing lots of plumbing and underlying work and there’s no visibility into the volume or priority of that work

How To Fail Spectacularly

  • Rely on schedule and budget performance alone to dictate the quality of a development team’s results
  • Dictate what, when, how and by whom work is to be performed. This removes a team’s engagement in the development process
  • Constantly change business priorities with little knowledge passed on to the team explaining the business value or reason for the change
  • Measure the team’s progress using a proxy or indirect measure and use that measure to provide feedback regarding the perceived results
  • Team and individual rewards are based on a proxy or metric other than prioritized, working, tested features ready to ship, creating a reward system that is not aligned with the needs of the company (again, see Dilbert). It’s critical that reward systems are tightly aligned with desired capabilities
