Cognitis Consulting


Estimation Part 2 - Accuracy vs Precision

Last time we looked at why we estimate and why there is always pressure to make our estimates more accurate. We have come up with a vast number of methods for estimation, all of which aim to improve accuracy. The problem is that most of them don't. What they improve instead is precision.

Most people think of accuracy and precision as being the same thing. But they aren't. My nerdy and pedantic engineering background tells me that accuracy is how close to the true value a measurement is, while precision is a measure of how reproducible the measurement is. A more formal definition (thanks to Wikipedia) is -

In the fields of science, engineering, industry, and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.

An example might help. If I wanted to measure my height, I could grab a tape measure and make the measurement. If it were an extremely good tape measure it might come up with a figure of 175.36 cm, and no matter how many times I made the same measurement, I would get very close to the same figure. That's precision. Trouble is, I'm not 175.36 cm tall. Someone has torn off the bottom of the tape, so the figure being reported is nowhere near my actual height. That's a precise, but inaccurate, measurement.

If I were to use a foot-long stick to make the measurement, I might get a result of "a bit over 6 feet". Given that my measuring device is pretty primitive, I would expect a somewhat different answer each time I made the measurement. That is, however, a much more accurate measurement than the first one. I am, in fact, just a shade over 6 foot 3 inches. That's a measurement that is imprecise, but accurate.

If you take a common estimation technique - getting multiple estimates from different people, then somehow combining or averaging them - you are making multiple measurements. Taking multiple measurements increases precision, not accuracy. If we have a systematic error in our measuring technique, no matter how many measurements we make, we won't get any closer to the true figure. We will get a very precise measurement of a wrong value, just like a tape with the bottom bit torn off. No matter how many times you measure with it, you will never read the true value.
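A small, hypothetical simulation makes the point in numbers. The heights, bias and noise figures below are made up purely for illustration: repeating a measurement reduces the scatter, but if every measurement shares the same systematic bias, the average just converges on the wrong value.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 190.8   # hypothetical true height in cm
BIAS = -15.4         # systematic error: the torn-off bottom of the tape
NOISE = 1.0          # random read-off error on each individual measurement

def average_of(n):
    """Average n measurements, each carrying the same systematic bias."""
    return sum(TRUE_VALUE + BIAS + random.gauss(0, NOISE) for _ in range(n)) / n

# Repeat the whole exercise 200 times to see how reproducible each average is.
for n in (1, 10, 100, 1000):
    trials = [average_of(n) for _ in range(200)]
    print(f"n={n:4d}  mean={statistics.mean(trials):6.1f} cm  "
          f"scatter={statistics.stdev(trials):5.2f} cm  "
          f"error={statistics.mean(trials) - TRUE_VALUE:+6.1f} cm")
```

The scatter column (precision) shrinks as n grows; the error column (accuracy) stays stuck at roughly the bias. Averaging more measurements cannot remove a systematic error.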

Likewise, if we use another common technique - the bottom-up technique - we break a project up into a bunch of smaller tasks, estimate each of them individually, then add them together to give a final result. What we are doing is breaking one large measurement down into lots of smaller measurements. If we have an error in our measuring technique for each small measurement, those errors will compound to give us a seemingly precise, but very inaccurate, result.

Even if we assume a random error in each small measurement - some are high and some are low - these errors do not necessarily cancel out. They would cancel if the errors were truly random, each task was around the same size, and the magnitude of the error was similar for all measurements. This is almost never the case. For a start, we are all hopeless optimists when it comes to estimating how long things will take. We systematically under-estimate durations, so most estimates will be low and very few will be high. The magnitudes of the errors are not the same either: some people are better at estimating than others, some may be out by a few hours, some by days or weeks. And lastly, not all tasks in a typical project plan are the same size. Some are very fine-grained; others are big and ill-defined. An error in a large task will swamp the errors in smaller tasks.
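Here is a rough sketch of that compounding effect. The task counts, bias and distributions are invented purely for illustration: each small task is estimated slightly optimistically, and one big, ill-defined task carries a much larger error.

```python
import random

random.seed(7)

def simulate_project(num_small_tasks=40):
    """Return (bottom-up estimate, simulated actual) in days for one imaginary project."""
    estimated = actual = 0.0
    for _ in range(num_small_tasks):
        est = random.uniform(0.5, 3.0)              # what goes into the plan
        # Optimism bias: actuals skew high and rarely come in low.
        actual += est * random.lognormvariate(0.2, 0.4)
        estimated += est
    # One big, ill-defined task whose error swamps all the small ones.
    big_est = 30.0
    estimated += big_est
    actual += big_est * random.lognormvariate(0.5, 0.5)
    return estimated, actual

runs = [simulate_project() for _ in range(1000)]
avg_estimate = sum(e for e, _ in runs) / len(runs)
avg_actual = sum(a for _, a in runs) / len(runs)
print(f"bottom-up estimate: {avg_estimate:6.1f} days")
print(f"typical actual:     {avg_actual:6.1f} days")
```

Summing forty carefully estimated tasks produces a total that looks precise to the day, but the skewed, unequal errors push the simulated outcome well past it.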

We also have a third phenomenon to consider - false precision. Imagine using my foot-long stick to take the measurement and then reporting a figure of 186.5 cm. That's well beyond the ability of the measuring device. Whoever reported that measurement is clearly making it up. It's the same with our duration estimates. We are measuring with a very imprecise tool, yet we will say with confidence that a task will take us 3.5 days. Or two weeks and one day. Really? Can we really measure that precisely? Or are we just making stuff up?

On a larger scale, when we sum up all the little estimates in our project and the project manager proudly announces to management that the project will be delivered in 267 days, we have a clear case of false precision. There is no way the estimate is accurate to within a day (more likely a month), yet we report it to the day. Why?

Because we mistake precision for accuracy. We have been told we need to produce accurate results, so we give precise ones instead. The accurate answer would be to say that the project will be delivered somewhere between 8 and 10 months from now. There is a high likelihood that will be the true result (accuracy), but when did management ever accept an estimate like that? We have a cognitive bias that makes us assume that precise (or seemingly precise) measurements are accurate, and that imprecise measurements must also be inaccurate.

So what can we do? Several things have been tried. Some involve giving estimates as ranges with a probability attached (there is a 50% chance it will take between 3 and 4 months and a 90% chance it will take between 2 and 6 months). But these tend to come undone when someone gives in to their cognitive bias and insists on "more accuracy" when they really mean "more precision". Another approach is to use a more accurate estimation method.
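For completeness, here is one way a range-with-probability estimate might be produced - run many simulated outcomes and read off percentiles. The distribution below is entirely made up; the sketch only shows the mechanics, not any particular estimation method.

```python
import random

random.seed(3)

# Imaginary distribution of project durations in months (illustrative only).
outcomes = sorted(3.0 * random.lognormvariate(0.1, 0.35) for _ in range(10_000))

def percentile(sorted_values, p):
    """Nearest-rank percentile of an already-sorted list."""
    index = min(len(sorted_values) - 1, int(p / 100 * len(sorted_values)))
    return sorted_values[index]

print(f"50% chance of finishing between "
      f"{percentile(outcomes, 25):.1f} and {percentile(outcomes, 75):.1f} months")
print(f"90% chance of finishing between "
      f"{percentile(outcomes, 5):.1f} and {percentile(outcomes, 95):.1f} months")
```

The 25th to 75th percentile span covers half the simulated outcomes (a 50% range), and the 5th to 95th covers nine-tenths (a 90% range); widening the interval is what buys the extra confidence.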

There are measurement systems that are both accurate and precise. Just as rulers, callipers and micrometers are accurate and precise measuring tools for distance, there is an accurate and precise measuring tool for duration as well. It's called - past experience. Agile estimation relies on this method, and we will look at it next time.