But it’s not in the story…

Sometimes when I’m working as a tester on an agile project that uses story cards and I come across a bug, the response from the developers is something like: “That’s not in the story,” or “That case wasn’t stated in the story, so we didn’t write code to cover it.”

I’ve heard this line of reasoning before on non-agile projects, and I used to think it sounded like an excuse. The response then to a bug report was: “That wasn’t in the specification.” As a tester, my thought is that the specification might be wrong. Maybe we missed a specification. Maybe it’s a fault of omission.

However, when I talk to developers, or wear the developer hat myself, I can see their point. It’s frustrating to put a lot of effort into developing something and then have the specification change, particularly when you feel like you’re almost finished. Maybe to the developer, as a tester I come across as the annoying manager type who constantly floats half-baked ideas that fuel scope creep.

But as a software tester, I get nervous when people point to documents as the final say. As Brian Marick points out, written documentation is often a poor representation of the tacit knowledge of an expert. Part of what I do as a tester is look for things that might be missing. Automated tests cannot catch something that was missing in the first place.

I found a newsletter entry on Michael Bolton’s site, talking about “implied specifications,” that underscores this. On an agile project, replace “specification” with “story” and the point applies directly:

…So when I’m testing, even if I have a written specification, I’m also dealing with what James Bach has called “implied specifications” and what other people sometimes call “reasonable expectations”. Those expectations inform the work of any tester. As a real tester in the real world, sometimes the things I know and the program are all I have to work with.

(This was excerpted from the “How to Break Software” book review section of the January 2004 DevelopSense Newsletter, Volume 1, Number 1)

So where do we draw the line? When are testers causing “scope creep,” and when are they providing valuable feedback to the developers? One way to deal with the issue is to recognize that feedback for developers is most effective when given quickly. If feedback comes early, testers and developers can collaborate to deal with these issues. The later the feedback occurs, the more likely the developer is to feel that someone is springing the dreaded scope creep on them. But if testers keep bringing these things up, and it’s sometimes annoying, why work with them at all?

Testers often work in areas of uncertainty. In many cases, that’s where the showstopper bugs live. Testers think about testing on projects all the time, and they bring a unique perspective, a toolkit of testing techniques, and a catalog of “reasonable expectations.” Good testers have developed, to a high level, the skill of recognizing implied specifications and reasonable expectations. They can often help articulate something missing from a project that developers or customers may not be able to commit to paper.

When testers find bugs that may not seem important to developers (and sometimes to the customer), we need to be careful not to dismiss them as not worth fixing. As a developer, make sure your testers have demonstrated, through serious inquiry into the application, that a bug isn’t worth fixing. It might simply be a symptom of a larger issue in the project. The bug on its own might seem trivial to the customer, and the developer may not feel it’s worth the time to fix. Maybe they are right, but a tester, knowing that bugs tend to cluster, can investigate whether the bug is a corner case or a symptom of a larger issue that needs to be discovered and addressed. Faults of omission at some level are often to blame for particularly nasty or “unrepeatable” bugs. If the bug does turn out to be a corner case that isn’t worth fixing, the tester’s work helps build confidence in the developer’s and customer’s decision not to fix it.

*With thanks to John Kordyback for his review and comments.