Friday, December 28, 2018
Story Points as Food Pellets
Ever see a team invest a bunch of time into deciding how many story points they could claim at the end of a sprint?
You get what you teach people to give you.
Positioning story points as valuable encourages people to treat them as rewards. Rather than being productive and "earning" points, people waste time trying to claim points for work that isn't really done.
Wasted time erodes productivity, and lowered productivity provides more incentive for credit-grabbing behaviors. A vicious cycle ensues.
This is what you get when you set story points up as these little food pellets meant to modify the behavior of your developers: Developers whose behavior is optimized for getting story points, not writing software.
The solution is simple but very hard: Start tracking what you want.
Not something you think will lead to what you want. Not something you consider to be an integral part of what you want. Track whether or not you are actually getting what you want.
Thursday, December 27, 2018
Test-Coverage
Measuring test-coverage tells you almost nothing of value. Yes, a disciplined approach leads to high test-coverage, and the resulting tests add a lot of value to the development environment.
That correlation isn't something on which you can count, though.
It's like measuring the sharpness of knives in a kitchen. A good cook is going to keep their knives sharp all the time. An amateur cook will probably ignore a report that their knives are too dull.
The same is true of test-driven development. You're either a professional or you aren't. Either way, measuring test-coverage isn't going to change that.
Most of the time, organizations that are tracking test-coverage end up with something worse than no tests: vast tracts of useless tests that increase cost while providing no real protection.
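To make that concrete, here is a minimal sketch (in Java, using a hypothetical InvoiceCalculator) of the kind of test a coverage target tends to produce. The first test executes the production code, so the coverage number goes up, but it asserts nothing, so it can never fail and protects against nothing. The second test is what the same coverage looks like when it actually earns its keep.

    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;

    // Hypothetical production code: computes an invoice total from a
    // subtotal and a tax rate.
    class InvoiceCalculator {
        double total(double subtotal, double taxRate) {
            return subtotal + subtotal * taxRate;
        }
    }

    class InvoiceCalculatorTest {

        // Exercises the code, so coverage rises, but asserts nothing, so it
        // can never fail -- even if total() starts returning garbage.
        @Test
        void coversTotalWithoutCheckingAnything() {
            new InvoiceCalculator().total(100.0, 0.08);
        }

        // A test that actually protects against regression states what the
        // answer should be.
        @Test
        void totalAppliesTaxRate() {
            Assertions.assertEquals(108.0, new InvoiceCalculator().total(100.0, 0.08), 0.001);
        }
    }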
Wednesday, December 26, 2018
What to Cut?
Imagine you have a timebox and a commitment, like a lot of people imagine sprints and sprint plans to be. Now imagine you can't do everything that was agreed upon at the beginning of the sprint by the end of the sprint.
What do you do?
Regardless of theory, in practice, a lot of people seem to want to cut quality. This is counterproductive because quality and speed are deeply related. Cutting quality rarely makes you go faster in the moment and always makes you go slower in the mid- to long-term.
Timeboxes like sprints are, by definition, fixed in length. Sustainable software development is fixed in quality. The only thing left to cut is scope.
Cut quality to go slower. Cut scope to go faster.
Tuesday, December 25, 2018
For Encapsulation, Ask "What's the Cost of Changing Visibility Later?"
Most languages give you a choice regarding how visible a member of a class will be. The choices vary from language to language but, typically, they range from options that expose a member broadly to options that keep its visibility narrow.
Every time you define a member, you are making this choice. Even if you take the default, you are still choosing to accept that implicit level of visibility.
One thing you can do to inexpensively start improving the level of encapsulation in your system is to start asking a simple question:
What's the cost of changing visibility later?

Generally, all levels of visibility have the same cost, initially. You aren't going to choose an access-modifier that prevents something from being used the first time you write it.
What really matters is how the level of visibility will interact with change. If you make something too private, now, how hard will it be to make it more public, later? If you make it too public, now, how hard will it be to change it to be more private, later?
In a lot of cases, it's pretty straightforward but it isn't always obvious. That's why it's worth asking the question.
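Here is a quick sketch of that cost asymmetry (in Java, using a hypothetical Account class). Widening something that started out private is usually a one-word edit, because nothing that already compiles can break; narrowing something that started out public means finding and rewriting every caller first.

    // Hypothetical class illustrating the asymmetry.
    public class Account {

        // Started private. If other classes turn out to need it, changing
        // "private" to "public" later is cheap: no existing caller breaks.
        private double balance;

        // Started public. Making it private later means every caller that
        // reaches in directly has to be found and rewritten first.
        public double interestRate;

        // Exposing behavior instead of data keeps the choice open: callers
        // depend on this method, not on how the balance is stored.
        public void deposit(double amount) {
            balance += amount;
        }
    }

When in doubt, starting narrow and widening on demand is usually the cheaper bet.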
Monday, December 24, 2018
Easy to Check versus Useful to Check
There are different axes on which you can evaluate whether or not you want to invest in measuring something and acting on those measurements.
One of the most popular seems to be "How easily can I implement this metric?"
Sadly, one of the least popular seems to be "What kind of decisions will this metric help me make?"
Lower implementation cost might be a good reason to choose one good metric over another, but it is a terrible reason to reject a meaningful metric in favor of a worthless one.
Yet this has basically been the mode of our industry. Three great examples of metrics that seem to get a lot of traction are number of lines of source code per day, percentage of unit test coverage, and number of story points committed versus delivered in a sprint.
All three of those are worse than useless: they are counterproductive. In fact, each of them is downright corrosive. Yet they keep being chosen over their potentially useful counterparts again and again.
Why?
I think it's because they are easy to measure.
Easy is nice. Easy is good. Easy is core to what we do. It's important to remember, though, that things are almost never just naturally easy in the software world. We make them easy.
If you want an easy metric, take a useful one and keep investing until it's easy to measure, rather than cherry-picking metrics because they can quickly tell you nothing.