Monday, December 24, 2018

Easy to Check versus Useful to Check

Two people standing next to a car. One says "Should we test how she corners?" The other says "Let's just see how far we can turn the wheels..."

There are several axes along which you can evaluate whether to invest in measuring something and acting on those measurements.

One of the most popular seems to be "How easily can I implement this metric?"

Sadly, one of the least popular seems to be "What kind of decisions will this metric help me make?"

Lower implementation cost might be a good reason to choose one good metric over another good one, but it is a terrible reason to reject a meaningful metric in favor of a worthless one.

Yet this has basically been the mode of our industry. Three great examples of metrics that seem to get a lot of traction are number of lines of source code per day, percentage of unit test coverage, and number of story points committed versus delivered in a sprint.

All three of those are worse than useless: they are counterproductive. In fact, each of them is downright corrosive. Yet they keep being chosen over their potentially useful counterparts again and again.

Why?

I think it's because they are easy to measure.

Easy is nice. Easy is good. Easy is core to what we do. It's important to remember, though, that things are almost never just naturally easy in the software world. We make them easy.

If you want an easy metric, take a useful one and keep investing until it's easy to measure, rather than cherry-picking metrics that tell you nothing, just because they do it quickly.