Should We Scale?
There has been a trend recently within the agile community to embrace massive scale. Not just a few teams working together but really large groups. Every day we see examples of bigger programs and larger release trains, all successfully being managed through agile techniques. Just recently, a colleague ran a PI planning event for close to 1000 people spread across three countries. I have seen other organisations proudly boasting that they have "the largest release train in the southern hemisphere", complete with figures on the incredible budgets the train is managing. The SAFe framework now has four levels rather than three to enable it to manage bigger and bigger structures. LeSS has LeSS Huge to do the same.
While I celebrate the achievements of the coaches successfully helping organisations get there, and the incredible feat of facilitation that a 1000-person, three-country PI event must be, I can't help worrying that this drive towards massive scale is not altogether a good thing. Large companies want scale because that's the way they are used to working. They are used to thinking in terms of large programs of work involving hundreds of people. In order to help them, we have developed techniques that allow us to handle this sort of scale. But just because we can do something, it does not automatically follow that we should. There are significant downsides to scale.
First up, let me say this - scaled agile works. It works much better than waterfall for large scale because it has more efficient communication techniques and faster feedback loops. Scaled agile is able to successfully manage larger programs than waterfall techniques. But...
As any student of Lean will tell you, large batches are inefficient. They cause inventory (unreleased product), they increase cycle times, they increase costs and they create waste. Less waste than a waterfall program would do, but they still create waste.
Large programs release infrequently because they are complex and require a lot of integration. Agile techniques like continuous integration and automated testing help a lot but don't fix the whole problem. There are still a mass of dependencies, integration issues, and the like that large scale programs need to contend with. Even with a really advanced CI system, as you scale up, you will slow down. The CI and automated test systems also grow in complexity as you increase scale until they require significant investment in both time and money. Setting up a CI system for a couple of teams is easy and inexpensive. Setting up a CI system for dozens of teams requires a build farm, a significant amount of cash and probably a dedicated build team to keep it running.
Because large programs release infrequently, they build up large batches of unreleased product. This slows down feedback and, in lean terms, leads to Inventory waste, which in turn leads to other wastes like overproduction. I know that all the large scale frameworks stress that being at scale does not mean you have to release at scale, but as you scale up, the number of moving parts involved and the complexity this causes will inevitably slow you down. Again, agile techniques like devops help a lot but don't fix the whole problem. The reality is that while releasing daily from a single team is easy, releasing daily from a dozen teams is hard and requires significant investment in infrastructure and architecture. Large groups tend to release less frequently than small groups. We can certainly do better than the old yearly releases, but getting a large program to do really frequent releases is a very hard job.
Large programs are also inherently wasteful. A single team can run with very low overheads. Once you start to scale to many teams, you increase complexity. Complexity costs a lot to manage. If we have two teams we can manage dependencies with a conversation. If we have ten teams we need a dedicated dependency manager (or team of dependency managers). One team can manage stakeholders through a product owner. Many teams might require a stakeholder engagement team. One team can handle its own Dev environments and builds, if you have many teams you start to need a dedicated environment and build team to support them. Yet again, agile techniques help a lot. Scrums of scrums, release trains and the like can help, but don't solve the whole problem.
I have worked on programs that have had 10 development teams, but also required a management team, a stakeholder team, a build and environment team and an architecture team to support them. If you think about it, that's not a 10 team program; it's a 14 team program, where 10 of the teams are doing the work and 4 of the teams are just handling the complexity caused by having 10 teams in the first place. That's almost 30% of its capacity (and cost) spent just handling complexity. That makes it a very expensive way to get things done. Cheaper than waterfall, but still expensive.
Henry Ford is reputed (probably incorrectly) to have said -
"If I had asked my customers what they wanted, they would have asked for a faster horse."
We in the agile community have asked our customers what they want, and they have said "larger scale". Have we just given them a faster horse rather than innovating a better solution? By enabling organisations to continue to operate at massive scale, and even to increase the scale that is manageable, have we missed the real solution here? Have we just given them a faster horse but left them still drowning in manure?
By enabling them to keep thinking in terms of massive programs, have we stopped them from thinking about better, more efficient ways to organise the work? Have we stopped them thinking about ways to bring down their batch sizes? Have we stopped them from restructuring to enable flow? Have we stopped them thinking about real MVPs? Small experiments?
In short, by enabling them to operate at larger scale, have we stopped them from thinking about how to efficiently scale down? If we can operate in a scaled down way, we drive out much of the complexity, and therefore the cost, from our system. We enable flow. We reduce batch sizes. We reduce waste.
What does a scaled down system look like? Imagine, instead of a huge program, we have small groups of teams, say 2-5 teams in a group. Each group manages its own stakeholders, environments, dependencies and the like. Each group is directly aligned to a set of business stakeholders with a common set of outcomes, is funded through an investment pool aligned to business outcomes not specific project deliverables and delivers value end to end for the stakeholder groups. Now imagine a bunch of these working side by side, managing dependencies through well defined interfaces.
"But that will never work here," you say. "The organisation isn't structured that way; there is too much complexity to allow that." And you would be absolutely right for most organisations. At the moment there is no way they could operate in a scaled down way. They just don't work that way. But there is a way to get there from where they are now, a journey they can go on. The good news is that you may already have taken the first step, because the journey to scaling down starts...with scaling up.
We will look at that journey next time.