Category Archives: agile

Responses to The Agile Backlash

Here are some reactions to the Agile Backlash post:
Tim Beck weighs in:

In the end, I firmly believe (and I think Jonathan does too) that the “Agile or not” debate is moot. There is a post-agilism movement coming (or already here) and we are going to have to deal with it. The software world is changing (did it ever stop?) and if you don’t adapt to the world you’ll get left behind.

As Tim knows, “post-agilism” is a descriptive term for a growing community I have observed over the past couple of years. I am not trying to found a movement, there is no manifesto, and I’m not selling a process. “Post-agilism” is the term I use to describe the growing group of practitioners who agree with the Agile Manifesto, but don’t consider themselves to be “Agile” anymore. I struggled with this for a while; it was difficult to square the Agile movement on the one hand with this group of practitioners I kept meeting who just didn’t fit on the other. I find the term helpful for my own thinking, and others have described a watershed moment when they read it: “That describes me! I’m not alone!”

Jason Gorman is on the money:

I’m saying that I completely believe in iterative and incremental, feedback-driven delivery of working software. This is based not only on experience, but also on some simple maths that shows me that the evolutionary approach is nearly always better and never worse than single-step delivery (“waterfall”).

But I’m not saying that because it’s written in a book or on a web site. And that’s what makes me post-Agile, I guess.

A reader emailed me:

This is the most balanced Agile piece I remember you writing, the most balanced IT piece overall. The focus has moved from just Agile to software development, which is where I wish the industry focus would move.

Another reader emails:

All the religious hype really gets up my nose, and I just want to get on with the business of developing software the best way we know how, and thinking about all the ways we can do it better.

Another reader says:

I’m not yet ready to abandon the term “Agile” to those who invoke it in name only. I’d rather shine the light of day on this practice. But whatever name you use, keep up the good fight.

This is exactly the kind of thing I like to see from Agilists: “I’m not ready to abandon the term. If you are, fine, but I’m going to work on improving things from within the community rather than from without.”

Chris McMahon expands on the skill component:

Skill of course is critically important. (The Economist had an article a couple of weeks ago pointing out that even while companies are outsourcing function to India and China, they are desperate for talent, and will hire talented employees anywhere in the world they find them.)

Other *human* attributes are also critically important.

Matt Heusser posts:

I see a lot of “Agile” process zealots. I don’t get it. One of the points of the manifesto is to focus on individuals and interactions over process and tools, and yet people keep selling methodologies and tools. I suppose that’s because it’s easy to sell. I intend to propose a session at the Agile Conference 2007 to discuss this.

Oddly enough, the term “post-agilism” came to mind after talking with Matt over a year ago, when we both did keynotes at a conference. Matt wondered whether the Agile movement was an example of a type of post-modernism. I thought about it for a couple of months, but it was the growing group of people I kept meeting who didn’t fit in who seemed more like post-modernists than the Agilists I was observing.

I also felt conflicted. I liked the Agile Manifesto and the values of XP, but I didn’t consider myself an Agilist anymore. I had been on teams that practiced evolutionary design and development before the “Agile” term was coined, and I just didn’t relate to the Agile movement anymore. I felt it had stopped innovating. I wasn’t alone. Across the pond, at about the same time, Jason Gorman was thinking the exact same thing. Jason expressed the post-modernism angle much better than I could. (Note: please work to prove us wrong.)

Matt has jumped ahead with this idea of Post Modern Software Development, and is talking about Perils and Pitfalls of Agile Development. Keep pushing the envelope, Matt!

As Tim Beck says, there is an Agile backlash which is already taking place in some communities, or is on its way. Our choice is to either deal with it in a healthy way, or get swept aside when the charlatans jump on the next fad tool or process they can sell to the unwitting.

Another reader commented:

Agile is yesterday’s silver bullet.

This is an interesting point, with potentially serious implications. If “Agile” is perceived by some to be a silver bullet, we are going to have to fight to keep the good things the Agile movement popularized from getting thrown out with the bad.

One reader said to me: “This is great! There is nothing better to improve the Agile movement or help create something new than to speak from the heart and buck the groupthink and hype with thoughtful introspection and constructive criticism. Furthermore, dealing with a backlash in a healthy way is extremely important.” This reader then told me a story of a company he had dealt with that used an Agile consultancy for a project. The project failed, so management banned anything that could be associated with “Agile”. In practice, that meant a ban on anything that used an iterative life cycle or incremental, evolutionary design and delivery. This is exactly the kind of reaction we don’t want to see. Instead of building on what worked and improving, they have taken software development back into the dark ages in that company. My reader went on: “This isn’t the only company I’ve seen do this – throwing out “Agile” and returning to big up-front design. I wish more people understood that there is a better way, and I think what you are doing sheds light on that. This can help us improve.”

My hope with the Agile Backlash post is to let people know there is a better way.

The Agile Backlash

Now that the Agile movement is making it into the mainstream, and gaining popularity, it is facing a backlash. Aside from the mainstream pro-Agile trend we hear so much about, I see three reactive trends: rejection by non-practitioners, a mandated reversal in corporate methodology focus, and an adaptive reaction that builds on good software development ideas and practices.

A Source of Division

Sadly, the Agile movement is creating a lot of division in the software development world. A stark division is drawn between Agile practitioners and those who aren’t “Agile”; to the Agilists, everyone in the latter camp is “waterfall” and, by default, inferior. This attitude ranges from the relatively harmless, yet tasteless “Waterfall 2006” conference parody set up by well-known Agilists, to companies being torn apart from within as factions rage against each other, grappling for methodology dominance.

I get emails from people all over the world sharing the trouble they are now facing. Instead of collaborative, enabling Agilists trying to help a team do better, they are faced with zealous naysayers who try to force process change. One correspondent mentioned that staff in their company were fearful of the Agile process evangelists and were avoiding them. They were tired of being berated for not knowing the Agile lingo, and of being belittled as “waterfall” and “backward” when they asked innocent questions. The Agile zealots in their company had taken to yelling at other team members who did something “un-Agile”. The correspondent said the team had been reasonably cohesive and productive in the past, but now that a couple of team members had bought into the Agile movement, those members were disruptive. Obviously, this is behavior that is completely against the values of any Agile methodology I’ve come across. Sadly, the divisive, belittling behavior of some Agile zealots is painting the entire community that way. Some managers are dealing with this situation by removing the people who are disruptive, and fearfully avoiding Agile processes as a result.

Martin Fowler has aptly described this kind of behavior as “Agile Imposition”. For the reverse of what I described above, read Fowler’s post where he describes “Agile” being imposed on a team by management whether they want it or not.

Another divisive issue is economic. It’s important to realize that there are people and companies that are now making a lot of money from Agile processes. Some have told me that they are in danger of losing a customer, because the customer is suddenly concerned that they aren’t “Agile”. The customer is happy with the product or service, but a potential competitor has come along, is using “Agile” as a differentiator, and is wielding it as a wedge to try to lure the customer away. Some companies react by branding themselves as “Agile” without altering much of what they are doing. A little lingo adjustment and some marketing make short work of the differentiator their potential competitor is trying to use. Unfortunately, this waters down what Agility is supposed to be about. In some communities, you would be hard-pressed to find a company that deals in software development that doesn’t call itself “Agile”, when few of them are actually using a methodology like XP, Scrum, Crystal, Evo, Feature Driven Development, etc. As people who call themselves “Agile” create messes and some project disasters become known through the local community, the shine certainly wears off.

Other companies react in a healthier manner. What is this “Agile” fuss? Is it something we need to look at? One person told me that after reading my recent writing, which displays some skepticism of Agile methodologies, they realized why they do what they do. They do it because it works. They don’t see a need to change, but are treating the challenge from a potential competitor as a way to make their client relationship stronger. They are looking at their processes, explaining them, and making them defensible. Focusing on “what works” is at the heart of many of the Agile methods anyway. This is a healthy way to deal with division: look at what works for your company, and make it defensible. Also, look at areas that can be improved, and treat the Agile movement as only one potential source of innovative ideas.

Methodology Reversals

A troubling trend I have witnessed is a mandated return to old ways of doing things in the face of Agile failures. The story goes something like this: Company A is having process and quality problems. With the support of executives, they adopt an Agile methodology and implement it with great care. Agile coaches are used, experienced Agilists train the team, and they work to improve their process adoption. In short order, the results are phenomenal. Quality improves, team cohesion improves, and the executives see a happy, healthy, productive team that is helping them meet their objectives. Over time though, productivity drops. Quality once again begins to suffer. The team turns to coaches and training, and works harder to implement the process to the letter. The results aren’t there anymore though. They forget to adapt and improve as conditions change; they just stick to the process and try to follow the formula that worked in the past. As they get more desperate, they make wild claims promising that “Agile” will help them unseat a powerful competitor who is “waterfall”, or will guarantee bug-free software, solve all employee morale problems, etc. The wild claims are not met, and executives get nervous. “What else are these Agile evangelists wrong about?”

At least with the old way of doing things, they could exert more control. After all, they had a big paper trail, and isn’t that what the big successful companies do? “We need to stage delivery of each phase of software development and get buy-in and sign-off like the old days. This “Agile” thing was a nice experiment, but it isn’t working. Time to call the big process consultants, and let’s get everyone into their own offices, hire a requirements expert, and go back to what we are used to.” This is not necessarily a healthy way of dealing with the issue. Why not build on what worked, and adapt it? Sadly, this is an Agile implementation failing in the hands of people who knew what they were doing. Moving back to the old way, which was failing before, will not necessarily help.

It shouldn’t surprise us that processes stop working over time – stores are full of books to deal with boredom over basic needs such as diet and intimacy. “Spice up your diet!” deals with the problem of yesterday’s favorite dinner becoming today’s tasteless same-old, same-old. If we tire of repetition even with basic needs, why do we expect our software processes to be good for all time? This failure to adapt, combined with blind devotion to a formula, causes “Agile” to get a bad name.

Company B, on the other hand, hears a lot of Agile hype. Every other company in town says they are Agile, and we think we are Agile too. We don’t have much documentation, and we do a release twice a year. It doesn’t get more Agile than that! Someone with some sense attends a user group meeting or a conference, buys a book or two, and starts educating the company on what “Agile” really is. Management isn’t completely comfortable with “Agile”, so they only allow a partial implementation of a methodology. “We’re doing what works for us” is the common phrase. In some cases this is true, especially as a company iterates towards an Agile implementation, or improves after adoption. In many cases it can be translated as: “We can’t completely adopt an Agile methodology, so we’re going to do what we can, and call it Agile anyway.” As soon as Company B has problems, “Agile” is blamed, and management reverts to the official status quo. “We tried Agile, and it didn’t work.” Again, even though they didn’t implement it properly, “Agile” gets a bad name.

In some extreme cases, association with a particularly corrosive project failure can make it difficult for people to find work in their local community. If project X was a big enough failure to reach the local rumour mill, and project X was also a flagship “Agile” project, some Agile practitioners may find they have to drop the “Agile” label in their community, or they find they are equated with the charlatans.

Innovations

I’ve noticed some small trends that build on Agile methods and are moving forward. One is that most of the early adopters of Agile methods that I know are quietly expressing skepticism over the effectiveness of the processes and are looking for “Agile 2.0”, or the next phase of the Agile movement. Others have left the Agile movement and moved on. At any rate, these are skilled thinkers who see that Agile processes do not address all the needs of a company. In fact, they see that in some cases Agile methods are inappropriate and ineffective. Others found that the methods worked for a while, but they needed to drop a methodology and adopt something new to sustain the productivity and quality they required of themselves.

Many have found the Agile methods too narrow and rigid for them; they don’t fit the bill for the situations they face. Some teams have dealt with this by mastering a process, finding where it didn’t work, and improving on it. Sadly, they face criticism from Agilists who may feel their source of revenue or process religion threatened. These innovators who constantly build on what they know and improve are the people who give me hope, and this trend is something I have labelled “post-agilism”. (Jason Gorman, and probably others, was also thinking about this; check out his “Post-Agilism: Beyond the Shock of the New”.)

Another trend I’ve seen is a curious correlation: companies with stellar process implementations that were financially unstable, and companies with disastrous development shops that had huge sales and were seen as successes in the business community. There are exceptions of course, but this is easily misinterpreted by those who mistake correlation for cause. Gary Pollice has an excellent article, “Does Process Matter?”, where he explains that there are many more dimensions to a successful company than its software development methodology. Effective sales, marketing, timing in the market, and having something people want to buy in the first place have much greater bearing on the financial success of a company than the software development methodology being used.

The damage to the Agile reputation can occur with thinking like this: “Company X claims to be Agile. Our sales for a month are higher than theirs are for a year, so why should we adopt something that might hurt that? This Agile thing sounds like something to stay away from.” And with that, the ideas are dismissed. The thinking innovators see it differently.

The thinkers realize that the marketing hype of the Agile methods does not address a great deal of what makes a company successful. In fact, they realize that no software development process will make a company financially successful on its own. So they begin to look towards the big picture, and strive to align themselves with the rest of the company. The collaboration skills they have honed while executing some Agile practices can come into play as they move beyond the short-sighted picture of software development processes.

Note: Some people didn’t need the Agile movement to learn these skills. The Agile movement did not invent, and does not have sole ownership of lightweight, iterative processes, frequent communication and strong customer involvement in software projects.

Another troubling trend is the admission that some of the claims of success by Agilists are suspect. As one example, Ron Jeffries graciously posted “Misstating the Evidence for Agile and Iterative Development”, an accusation from Isaac Gouy. If we subtract the Agile marketing engine hype, and discount the books, papers, talks, workshops and conferences that are fueling financial gain for those associated with the Agile movement, what is left? Is all of this Agile hype just that? Hype? Have we been duped by slick-talking process gurus who are giggling maniacally as they drive their sports cars to their banks to cash in their “Agile”-related spoils? This is certainly an impression that is growing with some who have tried Agile processes and found that they just did not work in their unique contexts.

What about the glowing success stories? How do we account for the legions of Agile fans, some of whom display a religious, cult-like devotion to Agile methods? Some even appear to be intelligent people. Were they swayed by a leader who, with a shakti-pat, healing, blessing or word of knowledge, convinced the hardened skeptic? I don’t subscribe to conspiracy theories, and I believe that in certain contexts, Agile processes have enabled teams to do extraordinary things. However, measuring the effectiveness of software processes is extremely difficult. Merely changing a process and measuring the result does not take many factors into account, particularly when dealing with a group of humans. In fact, as phenomena like the Hawthorne Effect show us, the measurement itself may impact the changes we see. Rick Brenner calls these Observer Effects.

We may wrongly credit a software development process for a success when something else had a stronger impact. One team attribute I didn’t pay enough attention to is skill, and early Agile successes I observed may have been due to skill rather than process. I’ve realized that any skilled team can make a process work for them and do good work, and any team lacking in skill is going to fail no matter what process they are using. Not exactly rocket science, but it was an important lesson for me. There are many other factors that can come into play.

Is misattribution of the reasons behind project success a problem? It can be, particularly when other teams implement the same kind of process and find, over time, that they aren’t much further ahead than they were before. The cold reality of their situation, contrasted with the raucous Agile hype, has caused many to justifiably feel misled. This kind of experience strongly fuels a backlash mentality. The popular (and long) post by Steve Yegge has elements of this, along with some Google hype: Good Agile, Bad Agile. Here is his follow-up: Egomania Itself.

Moving Forward

The Agile backlash is here. Some of it is justified, and some of it is not. My intention in this post is to encourage people who find themselves in these different situations I’ve described above.

Are you an Agilist, and you find the particular processes your team subscribes to are working? Are you afraid of returning to the way things were before? Acknowledge that there is a backlash, and for whatever reason, you and your team might be faced with it. Build on what you have learned, and don’t be afraid to adapt and change to keep your productivity and quality at the levels you have achieved. Work to improve what you are doing, and don’t be afraid to discard practices that aren’t working anymore.

Are you a skeptic? Have you found a way to do things that works for your team? Good for you. Don’t give in to the divisive behavior of those who want to capitalize on Agile marketing. You are not alone; there are others in the same boat as you. Stick with what works, and don’t be swayed by the hype. Make your position defensible, and have a look at what some of the Agilists are talking about, so you are aware of the issues. Are you a skeptic from within, someone who finds themselves disillusioned with the Agile hype and zealotry? Have you tried Agile methods and found yourself in much the same position you were in with other methods? This shouldn’t be surprising. Change your course to improve what you are doing. The difficulty of the situation you find yourself in should tell you how drastic your changes need to be.

Are you a decision maker who is looking for a formula to make more money, or to save more money and be more effective? If you are, I have a bridge to sell you in Saskatchewan. There is no magic formula. There is no cookie cutter approach. The Agile methodologies will not solve all development process ills, and they certainly won’t guarantee success in business ventures. In some cases, they can cause more problems than they address. As Fred Brooks told us twenty years ago, and Gary Pollice recently echoed, there is no silver bullet.

Agile methods are like anything else in life. Applying them involves trade-offs, and unintended consequences. A backlash is certainly one of them. Handling any trade-off well requires solid thought and good problem-solving. Let’s move forward together, and create the new future of software development, standing on the shoulders of the giants who have come before us.

Who Do We Serve?

Bob and Doug are programmers on an Extreme Programming team. Their customer representative has written a story card for a new feature the company would like to see developed in the system to help with calculations undertaken by the finance group. Currently, the finance group has spreadsheets and a manual process that is time-consuming and somewhat error prone. Bob and Doug pair up to work on the story. They feel confident – they are both experienced with Test-Driven Development, and they constantly use design patterns in their code.

They sit at a machine, check out the latest version of the code from source control, and start to program. They begin by writing a test, then writing code test-first to meet the feature request. Over time, they accumulate a good number of test cases and corresponding code. Running their automated unit test harness gives them feedback on whether their tests passed or not. Sometimes the tests fail sporadically for no apparent reason, but over time the failures appear less frequently.
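To make the test-first rhythm concrete, here is a minimal sketch of the kind of first step Bob and Doug might take. The story doesn’t show any actual code, so everything below is invented for illustration: a hypothetical finance calculation, JUnit-style assertions, and just enough production code to turn the bar green.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InterestCalculatorTest {
        // The first failing test, written before any production code exists.
        @Test
        public void simpleInterestForOneYear() {
            InterestCalculator calc = new InterestCalculator();
            // Hypothetical business rule: 1000.00 at 5% for one year yields 50.00.
            assertEquals(50.00, calc.simpleInterest(1000.00, 0.05, 1), 0.001);
        }
    }

    // Just enough production code to make the test pass.
    class InterestCalculator {
        double simpleInterest(double principal, double rate, int years) {
            return principal * rate * years;
        }
    }

Note what a green bar here does and does not tell you: the code satisfies this one invented scenario, nothing more. (Real finance code would also avoid floating-point money in favor of something like BigDecimal.) The gap between “the tests pass” and “the feature is right” is exactly where this story is headed.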

As they are programming and have their new suite of automated test cases passing, they begin refactoring. They decide to refactor to a design pattern that seems suitable. It takes some rework, but the code seems to meet the pattern definition and all the automated unit tests are passing. They check the code in, and move on to a new story.

The next day, they demonstrate the feature to the customer representative, but the customer representative isn’t happy with the feature. It feels awkward to them. Bob and Doug explain what pattern they were using, and talk about facades, persistence layers, inversion of control, design patterns, and demonstrate that the automated unit tests all pass by showing the green bar in their IDE. The customer representative eventually reluctantly agrees with them that the feature has been implemented correctly.

Months later, the finance department complains that the feature is wrong almost half of the time, and is causing them to make some embarrassing mistakes. They aren’t pleased. Wasn’t XP supposed to solve these kinds of problems?

Looking into the problem revealed that the implementation was incomplete – several important scenarios had been missed. A qualitative review of the automated tests showed that they were quite simple, worked only at the code level, and that no tests had been done in a production-like environment. Furthermore, no thought had been put into designing for the future, when the system would be under more load due to more data and usage. “That isn’t the simplest thing that could possibly work!” Bob and Doug had retorted. However, the team knew that data load would greatly increase over the coming months. If they weren’t going to deal with that inevitability in the design right then, they needed to address it soon after. They never did, and blamed the non-technical customer representative for not writing a story to address it.

Finally, when the customer representative tried to express that something wasn’t quite right with the design, they were outnumbered and shot down by technical explanations. One representative against six technical programmers wielding jargon and technical bullying didn’t stand much of a chance. After the problem was discovered in production, Bob and Doug admitted that things had felt a little awkward even to them when they initially implemented the code for the story, but since they were following practices by the book, they ignored the signs to the contrary. The automated tests and textbook refactoring to patterns had given them a false sense of security. In fact, in their eyes, the automated tests trumped the manual tests done by a human, an actual user.

What went wrong? Bob and Doug measured their project success by their adoption of their preferred development practices. They worried more about satisfying their desire to use design patterns, and to follow Test-Driven Development and XP practices, than they did about the customer. Oh sure, they talked about providing business value, worked with the customer representative on the team, and wrote a lot of automated unit tests. But the test strategy was sorely lacking, and their blind faith in test automation let them down.

Due to this incident, the development team lost credibility with the business and faced a difficult uphill battle to regain their trust. In fact, they had been so effective at marketing these development methods as the answer to all of the company’s technical problems that “Agile” was blamed, and to many in the business, it became a dirty word.

Sadly, this kind of scenario is far too common. On software development teams, we seem especially susceptible to the desire for silver-bullet solutions. Sometimes it seems we try to force every programming problem into a pattern, whether it needs one or not. We suggest tools to take care of testing, or deployments, or any of the tasks that we don’t understand that well, or don’t enjoy doing as much as programming. We expect that following a process, particularly a good one like Extreme Programming, will see us through.

Through all of this we can lose sight of what is really important – satisfying and impressing the customer. In the end, the process doesn’t matter as much as getting that result. The process and tools we use are our servants, not our masters. If we forget that and stop thinking about what we’re trying to solve, and who we are trying to solve it for, the best tools and processes at our disposal won’t help us.

That “Agile” Word

I just came across an interesting post on The Hijacking of the Word Agile. Here is a quote from that post that got my attention:

The word ‘Agile’ seems to have been hijacked once again. Now, it is commonly used to justify just about any bizarre ‘practice’ and strange behavior in a software project. In some cases, it is even misused to justify complete chaos.

Inspired by the list of weird behaviors in that blog post, I thought I’d create my own list. Here are some similar so-called “Agile” actions I’ve seen:

  • Quote: “We do releases every 3 months, so we’re Agile.”
  • Quote: “We don’t have a process or do much documentation, so I guess we’re Agile too!”
  • Being in an organization for months, and seeing no outward clues of anything remotely Agile going on. Later, we were told the organization was following an Agile method that normally is very visible (as in “hard to miss”); most people there had no idea they were doing anything Agile at all.
  • “Quality Police” moving from enforcing traditional processes to enforcing Agile processes. This included political manipulation and attempts to humiliate or blackmail developers. All under the banner of “Agile Testing”.
  • Copying quotations from a variety of RUP, iterative development, and Agile writings, and mixing them into a cornucopia that, used out of context, sounded an awful lot like what the team had been doing all along.
  • Consultants taking CMM or the V Model (or whatever they have been selling for years), sticking the term “Agile” on it (presumably for the marketing buzz), without actually investigating Agile processes and adapting.

It’s not surprising that this kind of behavior by those ignorant of Agile methods turns some people off of the term “Agile”. Couple that with some of the obnoxious Agile elitism and zealotry displayed by a few who are well-versed in and experienced with Agile methods, and even more people find themselves suspicious of this marketing term “Agile”. Tim Beck decided to do something about it, and came up with his own term for software development: “Pliant Software Development.”

We like and use many agile techniques and methodologies. We believe that in theory, agile development and pliant development are pretty much the same. In practice however, we feel that agile development has become a little too un-agile. There are whole sets of “process” which fall under the banner of agile development and unfortunately some people have gotten to the point where they are unable or unwilling to veer from the prescribed course of action. This leads to problems, because no two software development projects are the same. There are far too many technical, social and political issues to be able to come up with a 12-step program that fits all software situations.

–From the Pliant Alliance FAQ

Here are some different ideas on “Agile” from various people:

Unfortunately the industry has latched on to the word “Agile” and has begun to use it as a prefix that means “good”. This is very unfortunate, and discerning software professionals should be very wary of any new concept that bears the “agile” prefix. The concept has been taken meta, but there is no experimental evidence that demonstrates that “agile”, by itself, is good.

The danger is clear. The word “agile” will become meaningless. It will be hijacked by loads of marketers and consultants to mean whatever they want it to mean. It will be used in company names, and product names, and project names, and any other name in order to lend credibility to that name. In the end it will mean as much as Structured, Modular, or Object.

The Agile Alliance worked very hard to create a manifesto that would have meaning. If you want to know their definition of the word “agile” then read the manifesto at www.agilemanifesto.org. Read about the Agile Alliance at www.agilealliance.org. And use a very skeptical eye when you see the word “agile” used.

Robert C. Martin

It really gripes me when people argue that their particular approach is “agile” because it matches the dictionary definition of the word, that being “characterized by quickness, lightness, and ease of movement; nimble.” While I like the word “agile” as a token naming what we do, I was there when it was coined. It was not meant to be an essential definition. It was explicitly conceived of as a marketing term: to be evocative, to be less dismissable than “lightweight” (the previous common term).

Brian Marick

…That’s most of what patterns are about: codified collective experience, made personal through careful feeling and thought. Unfortunately, it turned other people into sheep or, maybe worse, lemmings.

Jim Coplien

As with any mythodology, principles that begin with reasonable grounding can be misapplied and corrupted. This reminded me of James’ observation that the Traditionalists and the Agilists are both instances of the Routine or Factory School of Software Testing (as Bret Pettichord describes in his wonderful paper Four Schools of Software Testing). I may disagree with James to the degree that I don’t think the early or smarter Agilists intended to behave in a Routine-School manner, but I think he’s right to point out that a large and growing number of their disciples appear to have slipped into Routine behaviour.

The common element between Bad Traditionalist processes and Bad Agile Processes is that both want to remove that irritating thinking part.

Michael Bolton (emphasis mine)

The lesson to me is this: our words are important. When a term is used in many contexts meaning something different in each, it’s a vague word. When a word is vague, it is important to get a definition from the people using the word. Otherwise, it may only seem like we are using a shared language, when we really aren’t. We should also be careful with our own language, and the terminology we associate ourselves with, because, in the case of vague words, it can have unintended consequences.

Post-Agilism: Process Skepticism

I’m a fan of Agile practices, and I draw on values and ideas from Scrum and XP quite often. However, I have become a process skeptic. I’m tired of what a colleague complained to me about several years ago: “Agile hype”. I’m tired of hearing “Agile this” and “Agile that”, especially when some of what is now branded as “Agile” was branded as “Total Quality Management”, or <insert buzzword phrase here>, in the past. It seems that some are flogging the same old solutions and merely tacking “Agile” onto them to help with marketing. It looks like Robert Martin was right when he predicted this would happen.

More and more I am meeting people who, like me, are quiet Agile skeptics. I’m not talking about people who have never been on Agile projects and take shots from the outside. I’m talking about smart, capable, experienced Agilists. Some were early adopters who taught me Agile basics. Though they were initially won over by Agile as a concept, experience has shown them that the “hype” doesn’t always work. Instead of slavishly sticking to Agile process doctrines, they use what works in the context they are currently working in. Many cite the Agile Manifesto and the values of XP as guides for their work, but they don’t identify with the hype surrounding Agile methods.

That said, they don’t want to throw out the practices that work well for them. They aren’t interested in turning back the clock, or implementing heavyweight processes in reaction to the hype. While they like Agile practices, some early adopters I’ve talked to don’t really consider themselves “Agile” anymore. They sometimes muse aloud, wondering what the future might hold. “We’ve done that, enjoyed it, had challenges, learned from it, and now what’s next?” Maybe that’s the curse of the early adopter.

They may use XP practices, and wrap their projects in Scrum as a project management tool, but they aren’t preachy about it. They just stick with what works. If an Agile practice isn’t working on a project, they aren’t afraid to throw it out and try something else. They are pragmatic. They are also zealous about satisfying and impressing the customer, not zealous about selling “Agile”. In fact, “Agile” zealotry puts them off.

They also have stories of Agile failures, and usually can describe a watershed moment where they set aside their Agile process zeal, and just worked on whatever it took to get a project complete in order to have happy customers. To them, this is what agility is really about. Being “agile” in the dictionary sense, instead of being “Agile” in the marketing sense.

Personally, I like Agile methods, but I have also seen plenty of failures. I’ve witnessed that merely following a process, no matter how good it is, does not guarantee success. I’ve also learned that no matter what process you follow, if you don’t have skilled people on the team, you are going to find it hard to be successful. A friend of mine told me about a former manager who was in a state of perpetual amazement. The manager felt that the process they had adopted was the key to their success, and enforced adherence to it. However, when anything went wrong (which happened frequently), he didn’t know what to do. Their project was in a constant state of disarray. I’ve seen this same behavior on Agile projects as well. (In fact, due to the blind religiosity of some Agile proponents, some Agile projects are more prone to this than we might realize.) Too many times we look to the process to somehow provide us the perfect answer instead of just using our heads and doing what needs to be done. “Oh great oracle, the white book! Please tell us what to do!” may be the reaction to uncertainty, instead of using the values and principles in that book as a general guideline to help solve problems and improve processes.

I have also seen people on Agile teams get marginalized because they aren’t Agile enough. While I appreciate keeping a process pure (such as really doing XP rather than just calling whatever you are doing “Agile” because it’s cool right now), sometimes you have to break the rules and get a solution out the door. I have seen people pushed off projects because they “didn’t get Agile”. I have seen good solutions turned away because they weren’t in the white book (Extreme Programming Explained). I’ve also seen reckless, unthinking behavior justified because it was “Agile”. In one case, I saw a manager pull out the white book and use a paragraph from it in an attempt to justify not fixing bugs. I was reminded of a religious nut pulling quotes out of context from a sacred book to justify some weird doctrine contrary to what the original author intended.

Here’s a spin on something I wrote earlier this spring:

In an industry that seems to look for silver-bullet solutions, it’s important for us to be skeptics. We should be skeptical of what we’re developing, but also of the methodologies, processes, and tools on which we rely. We should continuously strive for improvement.

I call this brand of process skepticism exhibited by experienced Agilists “Post-Agilism”. I admit that I have been an Agile process zealot in the past, and I don’t want to fall into that trap again. I’ve learned that skill and effective communication are much more powerful, so much so that they transcend methodologies. There are also pre-Agile ideas that are just fine to pick from. Why throw out something that works just because it isn’t part of the process that’s currently the most popular?

Test your process, and strive for improvement. Be skeptical, and don’t be afraid to follow good Post-Agilist thinking. Some good Post-Agilist thinkers are former process zealots who know and enjoy Agile development, but don’t throw out ideas just because they aren’t generally accepted by Agilists. Others are just plain pragmatic sorts who saw the Agile hype early on, and decided to do things on their terms. They took what they found effective from Agile methods, and didn’t worry about selling it to anyone.

Post-Agilist thinkers just don’t see today’s “Agile” as the final word on software development. They want to see the craft move forward and generate new ideas. They feel that anything that stifles new ideas (whether “Agile-approved” or not) should be viewed with suspicion. They may benefit from tools provided by Agile methods, but are more careful now when evaluating the results of their process. In the past, many of us measured our degree of process adoption as success, sometimes forgetting to look at the product we were developing.

What does the post-Agile future hold? Will individuals adapt and change Agile method implementations as unique needs arise? Will “Agile” go too far and become a dirty word like Business Process Re-engineering? Or will it indeed be the way software is widely developed? Or will we see a future of people picking what process works for them, while constantly evaluating and improving upon it, without worrying about marketing a particular development methodology? I certainly hope it will be the latter. What do you think?

Edit: April 28, 2007. I’ve added a Post-Agilism Frequently Asked Questions post to help explain more. Also see Jason Gorman’s Post-Agilism – Beyond the Shock of the New.

Reckless Test Automation

The Agile movement has brought some positive practices to software development processes. I am a huge fan of frequent communication, of delivering working software iteratively, and of strong customer involvement. Of course, before “Agile” became a movement, a self-congratulating community, and a fashionable term, there were companies following “small-a agile” practices. Years ago in the ’90s I worked for a startup with a CEO who was obsessed with iterative development, frequent communication and customer involvement. The Open Source movement was an influence on us at that time, and today we have the Agile movement helping create a shared language and a community of practice. We certainly could have used principles from Scrum and XP back then, but we were effective with what we had.

This software business involves trade-offs though, and for all the good we can get from Agile methods, vocal members of the Agile community have done testing a great disservice by emphasizing some old testing folklore. One of these concepts is “automate all tests”. (Some self-described agilists have the misguided gall to claim that manual testing is harmful to a project. Since when did humans stop using software?) Slavishly trying to reach this ideal often results in Reckless Test Automation. Mandates of “all”, “everything” and other universal qualifiers are ideals, and without careful, skillful implementation they can promote thoughtless behavior that hinders goals and needlessly costs a lot of money.

To be fair, the Agile movement says nothing officially about test automation to my knowledge, and I am a supporter of the points of the Agile Manifesto. However, the “automate all tests” idea has been repeated so often and so loudly in the Agile community, I am starting to hear it being equated with so-called “Agile-Testing” as I work in industry. In fact, I am now starting to do work to help companies undo problems associated with over-automation. They find they are unhappy with results over time while trying to follow what they interpret as an “Agile Testing” ideal of “100% test automation”. Instead of an automation utopia, they find themselves stuck in a maintenance quagmire of automation code and tools, and the product quality suffers.

The problems, like the positives of the Agile movement, aren’t really new. Before Agile was all the rage, I helped a company that had spent six years developing automated tests. They had bought the lie that vendors and consultants spouted: “automate all tests, and all your quality problems will be solved”. In the end, they had developed three test cases, averaging 18,000 lines of code each. No one knew their intended purpose or what they were supposed to be testing, only that it was very bad if they failed. Trouble was, they failed a lot, and it took testers anywhere from three to five days of hand-tracing the code to track down each failure. Excessive use of unrecorded random data sometimes made this impossible. (Note: random data generation can be an incredibly useful tool for testing, but like anything else, it should be applied with thoughtfulness.) I talked with decision makers and executives, and the whole point of buying and implementing the tool had been to shorten the feedback loop. In the end, the tool greatly lengthened the testing feedback loop, and worse, the testers spent all of their time babysitting and maintaining a brittle, unreliable tool instead of doing any real, valuable testing.
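The note about unrecorded random data deserves a concrete illustration. Here is a minimal sketch (all names and values invented, not the tool from this story) of the usual remedy: derive the data from a seed, and record the seed in the test output so any failing run can be replayed exactly.

    import java.util.Random;

    public class SeededTestData {
        public static void main(String[] args) {
            // Use a supplied seed to replay an old run, or pick a fresh one.
            long seed = (args.length > 0) ? Long.parseLong(args[0])
                                          : System.currentTimeMillis();
            System.out.println("test data seed: " + seed); // record this in the report
            Random random = new Random(seed);

            // Hypothetical random inputs for one test run.
            for (int i = 0; i < 5; i++) {
                System.out.printf("case %d: amount=%d%n", i, random.nextInt(10000));
            }
        }
    }

Re-running with the printed seed regenerates exactly the same inputs, which can turn a three-to-five-day hand-trace into a repeatable experiment.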

How did I help them address the slow testing feedback loop problem? Number one, I de-emphasized relying completely on test automation, and encouraged more manual, systematic exploratory testing that was risk-based, and speedy. This helped tighten up the feedback loop, and now that we had intelligence behind the tests, bug report numbers went through the roof. Next, we reduced the automation stack, and implemented new tests that were designed for quick feedback and lower maintenance. We used the tool to complement what the skilled human testers were doing. We were very strict about just what we automated. We asked a question: “What do we potentially gain by automating this test? And, more importantly, what do we lose?” The results? Feedback on builds was reduced from days to hours, and we had same-day reporting. We also had much better bug reports, and frankly, much better overall testing.

Fast-forward to the present. I am still seeing thoughtless test automation, but this time under the “Agile Testing” banner. When I see reckless test automation on Agile teams, the behavior is the same; only the tools and ideals have changed. My suggestions for working towards solutions are the same: de-emphasize thoughtless test automation in favor of intelligent manual testing, and be smart about what we try to automate. Can a computer do this task better than a human? Can a human do it with results we are happier with? How can we harness the power of test automation to complement intelligent humans doing testing? Can we get test automation to help us meet overall goals instead of thoughtlessly trying to fulfill something a pundit says in a book, a presentation, or on a mailing list? Are our test automation efforts helping us save time and provide the team the feedback they need, or are they hindering us? We need to constantly measure the effectiveness of our automated tests against team and business goals, not “percentage of tests automated”.

In one “Agile Testing” case, a testing team spent almost all of their time working on an automation effort. An Agile Testing consultant had told them that if they automated all their tests, it would free up their manual testers to do more important testing work. They had automated user acceptance tests, and were trying to automate all the manual regression tests to speed up releases. One release went out after the automated tests all passed, but it had a show-stopping, high-profile bug that was an embarrassment to the company. The automated tests passed, but they couldn’t spot something suspicious and explore the behavior of the application. In this case, the bug was so obvious that a half-way decent manual tester would have spotted it almost immediately. Getting a computer to spot the problem through investigation would have required Artificial Intelligence, or a very complex fuzzy-logic algorithm in the test automation suite, to replace one quick, simple, inexpensive, adaptive, yet powerful human test. The automation wasn’t freeing up time for testers; it had become a massive maintenance burden over time, so there was little human testing going on, other than superficial reviews by the customer after sprint demos. Automation was king, so human testing was de-emphasized and even looked on as inferior.

In another case, developers were so confident in their TDD-derived automated unit tests that they had literally gone for months without any functional testing, other than occasional acceptance tests by a customer representative. When I started working with them, they first defied me to find problems (in a joking way), and then were completely flabbergasted when my manual exploratory testing did find problems. They would point wide-eyed to the green bar in their IDE signifying that all their unit tests had passed. They were shocked that simple manual test scenarios could bring the application to its knees, and it took quite a while to get them to do some manual functional testing alongside their automated testing. Once they set their automation dogma aside and became more pragmatic, they figured out how to incorporate important issues like state into their test efforts, and I saw a marked improvement in the code they delivered once stories were completed.

In another “Agile Testing” case, the testing team had put enormous effort into automating regression tests and user acceptance tests. Before they were through, they had more lines of code in the test automation stack than in the product it was supposed to be testing. Guess what happened? The automation stack became buggy, unwieldy, and unreliable, and displayed the same problems that any software development project suffers from. In this case, the automation was done by the least skilled programmers, with a much smaller staff than the development team. To counter this, we did more well-thought-out and carefully planned manual exploratory testing, and threw out buggy automation code that was regression-test focussed. A lot of those tests should never have been automated in that context, because a human is much faster and far superior at many kinds of tests. Furthermore, we found that the entire test environment had been optimized for the automated tests. The inherent system variability the computers couldn’t handle (but humans could!), not to mention quick visual tests (which computers don’t do well), had been factored out. We did not have a system in place that was anything close to what any of our customers used, but the automation worked (somewhat). Scary.

After some rework on the testing process, we found it cheaper, faster and more effective to have humans do those tests, and we focussed more on leveraging the tool to help achieve the goals of the team. Instead of trying to automate the manual regression tests that were originally written for human testers, we relied on test automation to provide simulation. Running simulators and manual testing at the same time was a powerful investigative tool. Combining simulation with observant manual testing revealed false positives in some of the automated tests, which had been unwittingly released to production in the past. We even extended our automation to include high-volume test automation, and we were able to greatly increase our test effectiveness by really taking advantage of the power of tools. Instead of trying to replicate human activities, we automated things that computers are superior at.
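As an illustration of the simulation idea, here is a minimal sketch (hypothetical names throughout, not the actual system from this story): a seeded driver that generates steady background activity against a test system while a human explores it at the same time.

    import java.util.Random;

    public class BackgroundLoadSimulator {
        public static void main(String[] args) throws InterruptedException {
            Random random = new Random(42); // seeded, so a session can be replayed
            for (int i = 0; i < 1000; i++) {
                // Stand-in for a real call into the system under test.
                submitTransaction(random.nextInt(1000));
                Thread.sleep(250); // roughly four transactions per second
            }
        }

        static void submitTransaction(int accountId) {
            System.out.println("simulated transaction for account " + accountId);
        }
    }

The point is the combination: while the simulator keeps realistic activity flowing, a human tester watches for the suspicious behavior a scripted assertion would sail right past.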

Don’t get me wrong – I’m a practitioner and supporter of test automation, but I am frustrated by reckless test automation. As Donald Norman reminds us, we can automate some human tasks with technology, but we lose something when we do. In the case of test automation, we lose thoughtful, flexible, adaptable, “agile” testing. In some tasks, the computer is a clear winner over manual testing. (Remember that the original “computers” were humans doing math – specifically calculations. Technology was used to automate computation because it is a task we weren’t doing so well at. We created a machine to overcome our mistakes, but that machine is still not intelligent.)

Here’s an example. On one application I worked on, it took close to two weeks to do manual credit card validation by testers. This work was error prone (we aren’t that great at number crunching, and we tire doing repetitive tasks.) We wrote a simple automated test suite to do the validation, and it took about a half hour to run. We then complemented the automated test suite with thoughtful manual testing. After an hour and a half of both automated testing (pure number crunching), and manual testing (usually scenario testing), we had a lot of confidence in what we were doing. We found this combination much more powerful than pure manual testing or pure automated testing. And it was faster than the old way as well.
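The post above doesn’t say which checks that validation suite performed, but the standard Luhn checksum on card numbers is a plausible example of the pure number crunching involved: the kind of task a machine does in milliseconds and a tired human gets wrong. A minimal sketch:

    public class LuhnCheck {
        // Luhn checksum: from the rightmost digit, double every second digit,
        // subtract 9 from any result above 9, and require the total to be
        // divisible by 10.
        static boolean isValid(String number) {
            int sum = 0;
            boolean doubleIt = false;
            for (int i = number.length() - 1; i >= 0; i--) {
                char c = number.charAt(i);
                if (!Character.isDigit(c)) {
                    return false;
                }
                int digit = c - '0';
                if (doubleIt) {
                    digit *= 2;
                    if (digit > 9) {
                        digit -= 9;
                    }
                }
                sum += digit;
                doubleIt = !doubleIt;
            }
            return sum % 10 == 0;
        }

        public static void main(String[] args) {
            System.out.println(isValid("4111111111111111")); // well-known valid test number
            System.out.println(isValid("4111111111111112")); // one digit off: invalid
        }
    }

Feed a loop like this a file of known-good and known-bad numbers, and two weeks of arithmetic shrinks to minutes, leaving the humans free for the scenario testing described above.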

When automating, look at what you gain by automating a test, and what you lose. Remember, until computers become intelligent, we can’t automate testing, only tasks related to testing. Also, as we move further away from the code context, it usually becomes more difficult to automate tests, and the trade-offs have greater implications. It’s important to design automated tests to meet team goals, and to be aware of the potential for enormous maintenance costs in the long term.

Please don’t become reckless trying to fulfill an ideal of “100% test automation”. Instead, find out what the goals of the company and the team are, and see how all the tools at your disposal, including test automation can be harnessed to help meet those goals. “Test automation” is not a solution, but one of many tools we can use to help meet team goals. In the end, reckless test automation leads to feckless testing.

Update: I talk more about alternative automation options in my Man and Machine article, and in chapter 19 in the book: Experiences of Test Automation.

Daily Standups

My friend and former colleague Jay Boehm, a Senior Development Manager, and I frequently discuss issues we face when working on development teams. Jay has been particularly interested in the daily standup that gets a lot of exposure in Scrum and XP. I gave him my usual rant about how all good things on a project flow from good communication, and some pointers on holding a successful standup, such as:

  • Keep it 15 minutes long
  • Limit discussion to:
    • What I worked on since the last standup
    • What I plan to work on today
    • Any issues preventing me from doing my work
  • Don’t use standups for employee evaluation

Jay reported back to me today:

We’ve been doing a daily stand up for about 3 weeks with my team, and I am really, really pleased with how it’s working out. I love how people talk afterwards and make plans to tackle problems. The shy ones on my team are starting to gain some confidence in their colleagues. It’s great. I’m becoming a convert.

I love hearing stories like that. Nice job Jay!

Testing Debt

When I’m working on an agile project (or under any process using an iterative lifecycle), an interesting phenomenon occurs. I’ve been struggling to come up with a name for it, and conversations with Colin Kershaw have helped me settle on “testing debt”. (Note: Johanna Rothman has touched on this before; she considers it to be part of technical debt.) Here’s how it works:

  • in iteration one, we test all the stories as they are developed, and are in synch with development
  • in iteration two, we remain in synch testing stories, but when we integrate what has been developed in iteration one with the new code, we now have more to test than just the stories developed in that iteration
  • in iteration three, we have the stories to test in that iteration, plus the integration of the features developed in iterations that came before

As you can see, integration testing piles up. Eventually, we have so much integration testing to do on top of story testing that we have to sacrifice one or the other, because we are running out of time. To end the iteration (often two to four weeks in length), some testing has to be cut and looked at later. I prefer keeping in synch with development, so I consciously incur “integration testing debt”, and we schedule time at the end of development to test the completed system.
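To see how quickly the debt compounds, here is a deliberately crude toy model (every number invented): story testing stays constant per iteration, integration testing grows with everything built so far, and testing capacity is fixed.

    public class TestingDebtModel {
        public static void main(String[] args) {
            final int storyTesting = 10;       // story-testing load per iteration (invented)
            final int integrationPerPrior = 3; // integration load per prior iteration (invented)
            final int capacity = 20;           // fixed testing capacity per iteration (invented)
            int debt = 0;

            for (int iteration = 1; iteration <= 6; iteration++) {
                int demand = storyTesting + integrationPerPrior * (iteration - 1) + debt;
                int done = Math.min(demand, capacity);
                debt = demand - done; // testing deferred to later iterations
                System.out.printf("iteration %d: demand=%d, done=%d, debt=%d%n",
                        iteration, demand, done, debt);
            }
        }
    }

Run it and the debt column sits at zero for the first few iterations, then climbs every iteration after that, which matches the experience above: the crunch doesn’t show up until mid-project, after the team has comfortably stayed in synch for a while.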

Colin and I talked about this, and we explored other kinds of testing we could be doing. Once we had a sufficiently large list of testing types (unit testing, “ility” testing, etc.), it became clear that “testing debt” was more appropriate than “integration testing debt”.

Why do we want to test that much? As I’ve noted before, we can do testing in three broad contexts: the code context (addressed through TDD), the system context, and the social context. The social context is usually the domain of conventional software testers, and tends to rely on testing through a user interface. At this level the application becomes much more complex, greater than the sum of its parts, so there is far more for our testing techniques to cover. We can get pretty good coverage at the code level, but the number of test possibilities grows as we move towards the user interface.
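Some rough arithmetic (the numbers are invented) illustrates how quickly the possibilities grow near the UI:

```python
# Invented numbers: why test possibilities explode near the user interface.
import math

unit_level_cases = 12            # e.g., boundary values for one function
ui_fields = [10, 10, 4, 2]       # options per control on a single screen
ui_combinations = math.prod(ui_fields)  # 10 * 10 * 4 * 2

print(unit_level_cases)   # 12 cases cover the function reasonably well
print(ui_combinations)    # 800 combinations on one screen -- before we even
                          # consider sequencing, data, timing, or environment
```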

I’m not talking about what is frequently called “iteration slop” or “trailer-hitched QA” here. Those occur when development is done, and testing starts at the end of an iteration. The separate QA department or testing group then takes the product and deems it worthy of passing the iteration after they have done their testing in isolation. This is really still doing development and testing in silos, but within an iterative lifecycle.

I’m talking about doing the following within an iteration, alongside development:

  • work as a sounding board with development on emerging designs
  • help generate test ideas prior to story development (generative TDD)
  • help generate test ideas during story development (elaborative TDD)
  • provide initial feedback on a story under development
  • test a story that has completed development
  • integration test the product developed to date

Of note, when we are testing alongside development, we can engage in more testing activities than when working in phases (or in a “testing” phase near the end). We are able to complete more testing, but that can require more testers if we are still to meet our timelines. As we incur more testing debt throughout a project, we have some options for dealing with it. One is to drop story testing in favour of integration testing. I don’t really like this option; I prefer keeping the feedback loop as tight as we can on what is being developed now. Another is to schedule a testing phase at the end of the development cycle to do all the integration, “ility”, system testing, etc. Again, I find this can cause a huge lag in the feedback loop.

I prefer a trade-off. We keep the feedback loop on story testing as tight as we can, so we stay in synch with the developers. We do as much integration, system, and “ility” testing as we can in each iteration, but when we run out of time, we incur some testing debt in these areas. As the product grows (and there is now much more potential for testing), we bring in more testers to help address the testing debt, bringing on the maximum number we can near the end. We schedule a testing iteration at the end to pay back the testing debt we judge most important for mitigating project risk.

There are several kinds of testing debt we can incur:

  • integration testing
  • system testing
  • security testing
  • usability testing
  • performance testing
  • some unit testing

And the list goes on.

This idea is very much a work-in-progress. Colin and I have both noticed that on the development side, we are also incurring testing debt. Testing is an area with enormous potential, as Cem Kaner has pointed out in “The Impossibility of Complete Testing” (Presentation) (Article).

Much like technical debt, we can incur testing debt unknowingly. Unlike technical debt, which refactoring can repay, I don’t know of a way to repay testing debt other than strategically adding more testers, and scheduling time to pay it back when we are dealing with contexts other than program code. Even in the code context, we may still incur testing debt that refactoring doesn’t completely pay down.

How have you dealt with testing debt? Did you realize you were incurring this debt, and if so, how did you deal with it? Please drop me a line and share your ideas.

Testers and Independence

I’m a big fan of collaboration within software development groups. I like to work closely with developers and other team members (particularly documentation writers and customers who can be great bug finders), because we get great results by working closely together.

Here are some concerns I hear from people who aren’t used to this:

  • How do testers (and other critical thinkers) express critical ideas?
  • How can testers integrated into development teams still be independent thinkers?
  • How can testers provide critiques of product development?

Here’s how I do it:

1) I try very hard to be congruent.

Read Virginia Satir’s work, or Weinberg’s Quality Software Management series for more on congruence. I work on being congruent by asking myself these questions:

  • “Am I trying to manipulate someone (or the rest of the team) by what I’m saying?”
  • “Am I failing to communicate what I really think?”
  • “Am I putting the process above people?”

Sounds simple, but it goes a long way.

We can be manipulative on agile teams as well. If I want a certain bug to be fixed that isn’t being addressed, I can subtly alter my status at a daily standup to give it more attention (which will eventually backfire), or I can be congruent, and just say: “I really want us to focus on this bug.”

Whenever I vocalize a concern, even a small one, while the rest of the team is going another direction, it proves worthwhile. Whenever I don’t, we end up with problems. Speaking up helps me retain my independence as an individual working in a team. If everyone does it, we get diverse opinions, and hopefully diverse views on potential risks, instead of getting wrapped up in groupthink. Read Brenner’s Appreciate Differences for more on this.

Sometimes we ignore our intuition and doubts when we are following the process. For example, we may get annoyed when we feel someone else is violating one of the 12 practices of XP, and harp on them about not following the process instead of finding out what the problem is. I have seen this happen frequently, and I’ve been on teams that were disasters because we had complete faith in the process (even with Scrum and XP) and forgot about the people. How did we put the process over people? Agile methods are not immune to this problem. On one project, we ran around saying “that isn’t XP” whenever we saw someone doing something that didn’t fit the process. In most cases they were doing good work, and our policing turned out to be a manipulative way of dealing with something we saw as a problem. In the end, some of those practices were good ones that should have been retained in that context, on that team. They weren’t “textbook” XP, but the people with the ideas knew what they were doing; the inanimate “white book” did not.

2) I make sure I’m accountable for what I’m doing.

Anyone on the team should be able to come up and ask me what I’m doing as a tester, and I should be able to clearly explain it. Skilled testing does a lot to build credibility, and an accountable tester will be given freedom to try new ideas. If I’m accountable for what I’m doing, I can’t hide behind a process or behind what the developers are doing. I need to step up and apply my skills where they will add value. When you add value, you are respected and given freer rein over testing activities that might have been discouraged previously.

Note: By accountability, I do not mean lots of meaningless “metrics”, charts, graphs, and other visible measurement attempts used to justify my existence. Skilled testing and coherent feedback will build real credibility; meaningless numbers will not. Test reports that are meaningful, kept in check by qualitative measures, developed with the audience in mind, and actually useful will do far more to build credibility than generating numbers for numbers’ sake.

3) I don’t try to humiliate the programmers or police the process.

(I now see QA people making a career out of process policing on Agile teams.) If you are working together, technical skills should rub off on each other. In some cases, I’ve seen testing become “cool” on a project. On one project, not only the testers were testing: developers, the BA, and the PM were testing too, each using their unique skills to generate testing ideas and engage in testing. This in turn gave the testers more credibility when they wanted to try different techniques that could reveal potential problems. Now that all the team members had a sense of what the testers were going through, more effort was made to enhance testability. Furthermore, the discovery of potential problems was encouraged rather than feared. The whole team really bought into testing.

4) I collaborate even more, with different team members.

When I find I’m getting stale with testing ideas, or I’m afraid I’m getting sucked into groupthink, I pair with someone else. Lately, a customer representative has been a real catalyst for my testing. Whenever we work together, I get a new perspective on project risks driven by what is going on in the business, and they find problems I’ve missed. This helps me generate new testing ideas in areas I hadn’t thought of.

Sometimes working with a technical writer, or even a different developer than the one(s) you usually work with, helps you get a new perspective. This ties into accountability as well: I’m accountable for what I’m testing, but so is the rest of the team. Sometimes fun little pretend rivalries emerge: “Bet I can find more bugs than you,” or “Bet you can’t find a bug in my code in the next five minutes.” (In this case the developer beat me to the punch by finding a bug in his own code through exploratory testing beside me on another computer, then gave me some good-natured ribbing about being the first to find a bug.)

Independent thinkers need not sit in a separate department at arm’s length from the rest of the team. The supposed need for an independent testing department is not something I accept unquestioningly. In fact, I have found more success through collaboration than through isolation. Your mileage will vary.

Testing and Writing

I was recently involved in a project and noticed something interesting. I was writing documentation for a particular feature, and had a hard time expressing it for a user guide. We had made a particular design decision that made sense and worked intuitively, but it was difficult to write about coherently. As time went on, we discovered through user feedback that this feature was confusing. I remembered how I had struggled to write about it, and shared the experience with my wife, who worked as a technical writer for several years. She replied that this is very common: she and other writers found a lot of bugs simply because a software feature was awkward to write about.

I’ve found that XP story cards or Scrum backlog items can suffer from incoherence like any other written form. Incoherently written requirements, stories, use cases, etc. are frequently a sign of a problem in the design. There is something about actually writing down thoughts that helps us identify incoherence. While they remain thoughts, or are expressed verbally, we can defend problem spots with clarifications. Once they are written down, we see them in a different light. What we think we understand may not be so clear when we capture our thoughts on paper.

Some of the most effective testers I’ve worked with were technical writers. They had a different perspective on the project than the rest of us because they had to understand the software and write about it in a way an end-user would understand. They found problems in the design when they struggled to write about a feature. Hidden assumptions can come to light suddenly through capturing ideas on paper and reading them back. When testing, try writing down design ideas as well as testing ideas, and see if what you write is coherent. Incoherent expressions of product features or test ideas can tell you a lot about development and testing.