Why Do Large Organisations Make Bad Decisions?
Everyone who has ever worked for a large company knows that they make really silly decisions. Completely illogical decisions. Decisions so monumentally ridiculous that you wonder how the company actually manages to survive as a going concern, let alone turn a profit. It's seemingly obvious to everyone in the organisation, except the senior executives who are making the decisions. Good projects aren't funded, bad ones are. Good teams or departments are restructured but poorly performing ones aren't. Opportunities are lost. How do they continue to make money with all these bad decisions? And why do smart executives continue to make them?
The answer of course is that big companies very seldom make truly bad decisions. What they make are a lot of very sub-optimal decisions. The decisions are seldom illogical; a lot of reasoning goes into them. Unfortunately, that logic and reasoning is based on very poor information. The decisions they make are good enough to stay in business and continue to make significant amounts of money. They just aren't the best decisions possible. The real question isn't "how can companies still make money while making poor decisions?" but "how much more money could they make if they made better decisions?". Looking at the reasons why companies make sub-optimal decisions can point us to ways to make better ones.
Too Big To Fail
The project is huge. It's been running for years. It's late. It's getting later every day. No-one can remember why the project started in the first place. No one is sure why we are still pushing ahead, but the project refuses to die. Money is thrown at it. The project team becomes larger and larger. It becomes harder and harder to get any other projects funded because MegaProject is sucking up all the available money and people. The project has become Too Big To Fail.
We've all seen something like this at one point or another (preferably from a long way away) - a huge project, lumbering on year after year, never delivering anything but consuming every part of the organisation it touches. Everything is diverted into making sure this project doesn't fail. Those on the outside (and many on the inside) wonder why the decision isn't made to kill it off. The original business case has long since evaporated. The project will never deliver the benefits it was supposed to. Why doesn't management pull the pin? The answer is simple - the organisation has fallen prey to the sunk cost fallacy, the urge to keep throwing good money after bad because of what has already been spent.
The Responsibility Trap
The responsibility trap is a very easy one to fall into. The symptoms are easy to spot - it's 11pm, you are sitting in an empty office, buried in work up to your eyeballs. Everyone else went home hours ago. Weekends are a myth. You haven't seen your family for days. The agile principle of sustainable pace applies to everyone on the team... except you. How did it happen? The trap is a really easy one to stumble into because it's insidious. You can wander in without realising it; you won't notice until you are in deep, and by then it's too late. Try to leave and the trap will snap shut around you. While anyone can fall into the trap, it's particularly easy for people in expert, leadership or coaching roles to get stuck in it.
The trap is really simple. It works like this - the team needs something done. You, as "the expert" in the area, take it on and do it. The next time it needs doing, you do it again. Now, everyone just expects you to do it. Then something else comes up and, as "the expert", you step up and do it. And so on, until you are buried in a pile of work. Your intentions were good - the team needed something done, they were busy, it was urgent, you did it. What's wrong with that?
The Measurement Fallacy
As soon as someone starts looking at the topic of metrics, the measurement fallacy pricks up its ears (I always imagine it looking somewhat rodent-like with mangy fur, evil eyes and sharp teeth) and prepares to emerge from its hole behind the database server. When people start discussing what should be measured in order to keep track of a process, it gets ready to strike. Most people have fallen prey to it at one point or another. Mostly without ever knowing they have been bitten. The bite is painless. The only symptom is that the bitten suddenly assumes that because we can measure something, it must be important. More serious cases assume that the easier something is to measure, the more important it must be. This dreadful scourge is responsible for making Lines Of Code the primary measure of developer output for years.
It's a typical case of a severe bite - we can measure lines of code. Therefore it must be an important measurement. It's really easy to measure so it must be a really important measurement. Therefore we must measure it and use it to drive developer behaviour. Once it sets in, it's hard to shift. Despite the fact that the behaviour it drove - writing masses of wordy code to inflate your LOC counts and never, ever remove code - was completely counterproductive, the LOC (or KLOC) still hangs around to this day.
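To see just how easy the metric is to game, here's a minimal sketch (the functions are invented purely for illustration): two implementations with identical behaviour, one of which reports several times the "output" of the other.

```python
# Two functionally identical ways to sum a list; only their LOC differs.
# Illustrative only - a sketch of why LOC rewards verbosity.

def total_concise(values):
    """Sum a list of numbers in one line."""
    return sum(values)

def total_verbose(values):
    """Sum a list of numbers, padded out to inflate the LOC count."""
    result = 0
    for value in values:
        intermediate = value      # pointless temporary, but it's another line!
        result = result + intermediate
    return result

# Same behaviour, very different "productivity" by the LOC measure.
assert total_concise([1, 2, 3]) == total_verbose([1, 2, 3])
```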
The Black Economy
When you work in a large company, one of the things you hear quite often is “we have to follow the process”. Large companies, for very good reasons, have a need to standardise their processes. If you have 50,000 staff, having one way to do things makes a lot of sense. No matter where someone goes in the organisation, the process for ordering a new pen, or whatever, will be the same. The problem with defined processes, though, is that unless they are regularly reviewed and cleaned up, they tend to accumulate complexity. Each time something happens that is just outside the normal way the process works, someone will add some extra checks to make sure that situation is covered. Over the years it will collect enough of these extra checks that your carefully considered and streamlined pen ordering process now requires a 10 page form, 15 signatures and about 4 hours (and in some companies a pint of cockerel’s blood). The end result is that everyone spends all day looking for pens.
Estimation Part 7 - Recap
Over the last 6 posts, I have been looking at estimation. First, we looked at why we estimate, then we looked at some of the pitfalls in traditional estimation methods - the way we mistake precision for accuracy. Then we looked at some of the Agile estimation techniques - story points and T-Shirt sizes - and the way they are designed to overcome the accuracy vs precision problem. We saw that while they generally do a good job, they also have some fairly serious pitfalls of their own. In the last two posts, we looked at taking T-Shirt sizing one step further and allowing only two sizes - small and extra-large (too big). By doing this we saw how the main pitfalls in the agile estimation techniques were overcome. We also looked at some of the main objections to story counting and my arguments on how these objections can be overcome.
I'm not the only person to come up with this technique. It's doing the rounds at the moment under the name "No Estimation Movement". Apparently I'm part of a movement. Cool.
Estimation Part 6 - The Argument For Story Counting
In the last post, we looked at estimating by essentially not estimating. To do that we broke down stories into two categories - small and the rest. Small stories were ready to be accepted into the team's backlog; the rest were too large and needed to be broken down further. By doing this, velocity becomes just a count of stories completed and all the hassles involved with story point estimation just go away.
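As a concrete illustration, here's a minimal sketch of what forecasting looks like once velocity is just a story count. The numbers and names are invented for the example, not taken from any real team:

```python
# Story counting in practice: velocity is simply the number of (small)
# stories finished per sprint - no points, no re-estimation.

completed_per_sprint = [7, 9, 8, 10, 8]   # stories finished in recent sprints
backlog_remaining = 42                     # small stories still in the backlog

velocity = sum(completed_per_sprint) / len(completed_per_sprint)
sprints_left = backlog_remaining / velocity

print(f"Average velocity: {velocity:.1f} stories per sprint")
print(f"Forecast: about {sprints_left:.1f} sprints to clear the backlog")
```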
To me, this is a real no-brainer. Why wouldn't you estimate this way? But whenever I mention this in polite company, I tend to get some uncomfortable silences, strange looks and the inevitable - "but...". These buts tend to come in three types -
Estimation Part 5 - Story Counting
Last time we looked at T-shirt sizing and some of the benefits and problems that method has. We found that its greatest benefit was also its biggest disadvantage. The use of something completely abstract (T-shirt sizes) removes our cognitive biases around numbers, but without numbers we can’t really compare estimates against each other or make predictions, except by converting back to numbers - which, of course, brings our biases back.
We can use T-shirt sizes usefully if we make an adjustment to the scale we use. Rather than have Small, Medium, Large and Extra Large, let's just have Small and Extra Large. Now, this would obviously never work for clothing because people come in a range of sizes. Stories come in a range of sizes as well, so what gives? What makes this useful? The trick here is that unlike people where we can’t dictate what size someone should be (outside the modelling industry and certain trendy nightclubs), we can, and should, be pretty strict about what size a story can be before we accept it onto a sprint.
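One way a team might encode that strictness is a simple triage check. This is only a sketch - the threshold used here (a count of acceptance criteria) is an invented stand-in; teams set their own definition of "small":

```python
# A sketch of the two-size rule: a story is either small enough to accept,
# or extra-large and must be split. The threshold is purely illustrative.

def triage(story):
    """Return 'small' if the story is ready for the backlog, else 'split'."""
    if len(story["acceptance_criteria"]) <= 3:   # invented threshold
        return "small"
    return "split"   # too big: break it down and triage the pieces again

story = {"title": "Export report as CSV",
         "acceptance_criteria": ["valid CSV output", "all columns included",
                                 "UTF-8 encoding"]}
print(triage(story))   # -> small
```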
Estimation Part 4 - T-Shirt Sizing
Last time we started to look at relative estimates and the most common method of relative estimation using story points. We looked at why they work well but also at some of their limitations. The biggest limitation is the fact that they are numbers, and we have some built-in cognitive biases when it comes to numbers. We mistake precision for accuracy and tend to agonise for ages over the story point numbers, which turns story points from a fast, lightweight and accurate method of estimation into a slow, heavyweight and accurate method. It's still accurate but we waste a lot of time.
There is a way to keep the accuracy of story points but remove the cognitive biases we have around numbers. It’s as simple as not using numbers in our estimates. The usual way to do this is by using T-Shirt sizing – stories are small, medium, large or extra-large. Some teams go a bit further and add Extra Small and XXL but we’re getting into false precision there so I would recommend against that.
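To make the contrast with numbers concrete, here's a minimal sketch (names invented for illustration) showing that T-shirt sizes can be ranked but not added - which is exactly the point:

```python
# T-shirt sizes form an ordered scale with no arithmetic between values.
SIZES = ["S", "M", "L", "XL"]   # relative order only - no numbers attached

def bigger(a, b):
    """Compare two estimates by their position on the scale."""
    return a if SIZES.index(a) > SIZES.index(b) else b

print(bigger("M", "XL"))   # -> XL: we can rank stories against each other...
# ...but "XL" - "M" is meaningless. There is no velocity arithmetic until we
# convert back to numbers, which is where the biases creep back in.
```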