Story Smells

Most agile teams these days organise their backlog into user stories. User stories aren't mandatory in any agile methodology, but they have become the de facto standard for agile projects. There are many good reasons for this, not least of which is that a well-written user story keeps the focus squarely on delivering something of value to the user. Many user stories, though, are not well written. It takes more than using "story normal form" - As a <role> I want <goal> so that I can <benefit> - to generate a good story.

Many of the backlogs I see are filled with stories that, frankly, stink. Bad stories don't keep the focus on what is important. They distract, confuse and mislead. There are criteria like INVEST that we use to assess user stories and, properly applied, they make a big difference to the quality of the stories. They do take some time to learn and apply though, so I'll give you a few quick tips to get started. Over the years I have come across a number of common mistakes that teams make when writing stories that cause their backlogs to stink:

The user isn't a user – As a developer I want... As an architect I want... As a who? Really? Unless you are writing a development tool for developers to use, your developers aren't users of your system. What value does this story have to the real users of the system? Chances are, not much. This sort of story is usually a way to get a big chunk of purely technical work into the backlog disguised as a real user story. It may be absolutely necessary technical work, in which case by all means put it on the backlog and work on it, but let's not disguise it as a user story. Call it out explicitly as a technical task and evaluate it very carefully. I have seen so many projects do a lot of "essential" technical tasks to create a "re-usable framework" that ended up being used by one and only one project that I now view technical tasks with deep suspicion. If it isn't part of a user story that directly delivers value to an end user, do we really need it? There will be cases where the answer is yes - refactoring is a prime example - but under close inspection, many times the answer is no.

No outcome – As a user I want... Ok. Why? Why does the user want this? What is the outcome they are trying to achieve? Many times, this kind of story is code for As a developer I want to build this because it is cool and interesting. In other words, it's gold plating. Of course it may be that someone just forgot to fill in that part of the story, but it's a sign that you should treat that story with some suspicion.

Waterfall Story – As a user I want the design done for... Oh boy. I see this so often. Usually it's a sign that the story is too large to be done in a sprint and the team has decided to split it. That's fine and a sensible thing to do, but in this case they have chosen to split it into a design story and a build story (I have also seen teams then add a test story). When the team delivers that design story, the user gets nothing - all that happens is that the team gets a design done. No value for the user means the story has zero value. Much better to split the story in a way that still delivers value - maybe (for example) deliver a screen with one field and deliver the remaining fields in the next story. The preference should be to split a story into vertical slices that go end to end and deliver user value (even if only a tiny bit) rather than split it horizontally into a bunch of technical pieces that deliver no value until they are all done.

This last one is also a sign of a team that is still thinking in a waterfall way. If the majority of their stories are split into a design story, a build story and a test story, that is a very clear sign of waterfall thinking.

Build Only Story or Test Only Story – Stories that include only build and specifically exclude test, or include test but no build. This is usually a sign that the team isn't really a team but two sub-teams - a build team and a test team, with the testers running a sprint or two behind. This is, of course, a mini-waterfall anti-pattern and is to be avoided. It often means there is no trust between developers and testers, and often points to larger organisational issues where developers and testers come from competing parts of the organisation and really don't value each other's work - developers see testing as a low-value, unskilled activity while testers see developers as a bunch of cowboys who push out dodgy code. Not a healthy situation.

This isn't by any means an exhaustive list of anti-patterns. There are many more than this, but this does cover the ones I see most often. If you can avoid these, you are on the right track.