Estimation Part 6 - The Argument For Story Counting
In the last post, we looked at estimating by essentially not estimating. To do that, we broke stories down into two categories - small and the rest. Small stories were ready to be accepted into the team's backlog; the rest were too large and needed to be broken down further. By doing this, velocity becomes just a count of stories completed, and all the hassles involved with story point estimation simply go away.
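Just to make that concrete, here's a tiny sketch (in Python, with a made-up Story class and invented sample data - an illustration only, not anything from a real tool) of what velocity means when you count instead of estimate:

```python
# A minimal sketch of velocity-as-a-count. The Story class and the
# sample stories are made up for illustration; the only point is that
# velocity is simply "how many stories got done this sprint".
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    done: bool  # accepted as finished this sprint

def velocity(sprint_stories: list[Story]) -> int:
    # No points, no sizing debate - just count what was finished.
    return sum(1 for s in sprint_stories if s.done)

sprint = [
    Story("Login with email", done=True),
    Story("Reset password", done=True),
    Story("Export report as CSV", done=False),
]
print(velocity(sprint))  # -> 2
```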
To me, this is a real no-brainer. Why wouldn't you estimate this way? But whenever I mention it in polite company, I tend to get some uncomfortable silences, strange looks and the inevitable "but...". These buts tend to come in three types -
"Surely this only works for mature teams who know how to cut up stories...",
"but how do you make predictions on the future backlog? Doesn't it have the same issues as T-Shirt sizing there?"
and finally a whole range of objections that essentially boil down to "this makes me feel really uncomfortable... I just want to keep doing what I'm used to".
I'll deal with these one at a time.
Only mature teams - OK. Sure. Mature teams will know how to break stories down better. They will also tend to have smaller stories. This will give them a high, stable velocity. Good.
Immature teams that struggle to split stories (and so end up with stories that are too large) will tend to have a low and unstable velocity. But such teams tend to have low and unstable velocities when using story points as well. They will try to jam that 20 or 40 point story into the sprint and it won't get done - velocity zero. Then it gets done next sprint - velocity 40. Then they pick another big story - velocity zero... and so on. That's the classic sawtooth pattern that appears when a team's chunks of work are too big. It's a fairly extreme example, but you find that sawtooth velocity anywhere stories are too big, regardless of whether the team uses points or just counts stories.
If they have trouble breaking work down, their velocity will be unstable. Points or no points. With points, however, teams can blame their estimates for the unstable velocity - "was that really a 13 pointer, or was it a 20..." - and so on down the road to estimation madness. If you just count stories, the conversation becomes - "Was it too big?" "Yes." "OK... slice smaller next time." An unstable velocity in story points can lead to estimation madness. An unstable velocity in story counting leads directly to a good behaviour - slicing smaller.
Predicting future backlog - OK. This one I struggled with for a long time, until I realised that exactly the same principle applies to features as to stories. For a product, the size of a minimum marketable feature will average out, so just count them. The question then becomes how many features the team delivers each release. If that feature velocity is not stable, the features are probably too big - slice them into proper minimum marketable feature sets. If you use feature points or something like that, an unstable velocity can suck you into estimation madness instead of fixing the real problem - features that are too big. Again, with counting, an unstable velocity leads directly to slicing smaller and drives that good behaviour.
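And the forecast itself is just arithmetic on counts. As a rough sketch (the delivery history and backlog numbers below are invented - the mechanics are simply remaining count divided by average throughput per release):

```python
# A rough sketch of forecasting by counting features. The history and
# backlog figures are hypothetical; the point is that the forecast is
# just "remaining features / average features delivered per release".
from statistics import mean
import math

features_per_release = [4, 5, 3, 5]   # hypothetical delivery history
remaining_features = 18               # hypothetical backlog count

throughput = mean(features_per_release)             # 4.25 features per release
releases_needed = math.ceil(remaining_features / throughput)
print(f"~{releases_needed} releases at ~{throughput:.1f} features per release")
# -> ~5 releases at ~4.2 features per release
```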
I'm really uncomfortable - Ahh well, I'm afraid there is not much I can do to help you with this one. I'm not suggesting that all teams throw away story point estimates and just count stories. If points, or ideal days, or whatever it is that you use, works perfectly well for you, then keep doing it. But if it's not working well for you, why not give counting a try for a few sprints? See if it works. If not, try something else (and let me know what went wrong so I can see where my reasoning is flawed).