
Monday, December 24, 2018

Easy to Check versus Useful to Check

[Cartoon: Two people standing next to a car. One says, "Should we test how she corners?" The other says, "Let's just see how far we can turn the wheels..."]

There are different axes along which you can evaluate whether to invest in measuring something and acting on those measurements.

One of the most popular seems to be "How easily can I implement this metric?"

Sadly, one of the least popular seems to be "What kind of decisions will this metric help me make?"

Lower implementation cost might be a good reason to choose one good metric over another, but it is a terrible reason to reject a meaningful metric in favor of a worthless one.

Yet this has basically been the mode of our industry. Three great examples of metrics that seem to get a lot of traction are number of lines of source code per day, percentage of unit test coverage, and number of story points committed versus delivered in a sprint.

All three of those are worse than useless: they are counterproductive. In fact, each of them is downright corrosive. Yet they keep being chosen over their potentially-useful counterparts again and again.

Why?

I think it's because they are easy to measure.
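To make "easy" concrete, here is a minimal sketch, assuming a git repository and nothing beyond Python's standard library, that tallies lines of code added per day from the output of git log --numstat. It takes minutes to write, and that low cost is precisely its appeal:

    import subprocess
    from collections import defaultdict

    def lines_added_per_day(repo_path="."):
        # Tally lines added per commit date. Trivial to implement,
        # which is the only thing this metric has going for it.
        log = subprocess.run(
            ["git", "log", "--numstat", "--date=short", "--pretty=format:@%ad"],
            cwd=repo_path, capture_output=True, text=True, check=True,
        ).stdout

        totals = defaultdict(int)
        day = None
        for line in log.splitlines():
            if line.startswith("@"):
                day = line[1:]               # the commit's date, e.g. 2018-12-24
            elif line and day:
                added = line.split("\t")[0]  # numstat lines: added, deleted, path
                if added.isdigit():          # binary files report "-"
                    totals[day] += int(added)
        return dict(totals)

That's the whole implementation. Whether it helps anyone make a better decision is another question entirely.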

Easy is nice. Easy is good. Easy is core to what we do. It's important to remember, though, that things are almost never just naturally easy in the software world. We make them easy.

If you want an easy metric, take a useful one and keep investing until it's easy to measure, rather than cherry-picking metrics that tell you nothing just because they do it quickly.

Friday, November 30, 2018

The Danger of Simplistic Models

[Cartoon: A couple of people treading water, encircled by sharks. One of them says, "Don't worry. Dolphins are FRIENDLY!"]

It's all too common for people to prefer a simpler-seeming model to one that actually works. Put another way: people will typically choose a model with one fewer variable over one that predicts outcomes more accurately.

Every so often, there's another popular book or article on how people think. They usually try to build a blanket model that explains all people's minds as though each were cast from a single, shared mold.

These kinds of simplistic models are seductive because they feel good. They are easy to understand and easy to rationalize when some of the data don't fit.

That good feeling comes with a heavy price tag, though.

For the same reasons they are so attractive, simplistic models run the risk of being canonized as right (about which I have previously written). As a result, people will fight to hold a broken model in place when it should be replaced and, ironically, discard a partially-broken model entirely rather than just amending it.

This means we waste a lot of time dropping ideas we've branded as bad, only to find out later that they were good, then dropping the ideas that supplanted them, only to find out they were good, too.

We need a better way of managing this stuff.