TDD – A Fifth School of Software Testing

Bret Pettichord has a presentation on the Four Schools of Software Testing. He covers the schools of testing identified by Cem Kaner, James Bach and himself: Analytic, Factory, Quality and Context-Driven. (For a brief time, Bret renamed the “Factory” school the “Routine” school, but has since reverted to “Factory”.) Breaking popular testing ideas down into schools is a thought-provoking concept, and just reading his presentation notes should get testers thinking about what they do on projects. “What school do I identify with?” a reader might ask.

Cem Kaner has identified Test-Driven Development as another school of software testing. In his paper The Ongoing Revolution in Software Testing, he describes TDD as one of the big forces in software development over the past decade. On the context-driven testing mailing list, Cem Kaner explained why he identifies TDD as a testing school:


…TDD isn’t completely inside my [schools of software testing] paradigm. At [this] point, I ask:

(a) Is it coherent enough, popular enough, and looked to for guidance–a foundation for teaching about testing–enough to be called a school?

I think this is a clear yes. The fact that courses in TDD exist and are fashionable demonstrates that it’s a source of knowledge, inspiration, and guidance (which is at the essence of what I think of in a school). And there’s clearly an active group of collaborators who jointly develop and push the ideas.

(b) Is it testing?

It’s not inconsistent with my definition of testing, and a lot of people who advocate it think it is testing. OK, I accept it as testing.

Is it a unified field theory of testing? No. But neither is anything I’ve done. TDD focuses on problems that many of us haven’t focused on before, and sheds remarkably little light (or interest) on problems that many of us take very seriously. To the extent that it thinks its problems are the primary interesting problems, it asserts itself as a paradigm–elucidates some key issues and puts on blinders with respect to the rest. But I have my own blinders, so the mere fact that TDD is blind to some of the problems I find most interesting can’t be a basis for invalidating it as a school of testing. It can only be a basis for rejecting it as the driving philosophy of my school of testing.

Along with techniques, TDD advocates often espouse a set of values, a set of broader ideas about development, an attitude about people and how they should use their time and how they should be held accountable, an attitude about the use of tools, about the types of skills people doing this work should possess and how they should/could grow them–for sure, these are fuzzy sets, not everybody adopts all of them. It’s all of this stuff that separates schools — it’s also this type of stuff that gives context/interpretation for the use of the tools/techniques and the directions of advancement of the use and enhancement of these tools/techniques.

I think these illustrate an approach to testing that includes some deep and careful thinking, a rich set of practices, and attention to a lot of basic issues. I think I see different answers than I would expect from the factory school. I think it is much more brain-engaged than factory school, and much more about the cognition of the test creator than about the detail of procedures to be followed by the test runner.

So, I see a thoughtful, coherent view. A lot of practitioners. A body of work and advocates that guide thinking, practice and the teaching of thinking about how to envision, research, and practice testing. Looks like a school to me. And a welcome one.

(posted with permission)


I agree with what he has said. In my own forays into pair testing with test-driven developers and learning how to do TDD from skilled practitioners, I have met people who are really dedicated to testing. They tend to seek out conventional testers who are also passionate about testing. This intersection of schools can produce some confusing results at first.

I’ve witnessed confusion when conventional testers and TDD testers work together. That prompted a blog post on vague words used in testing. Those I would characterize as Quality school testers often feel responsible for the TDD process and unit tests even though they aren’t programmers and haven’t tried TDD themselves. I have heard of TDD testers and Quality school testers attending the same meeting, using the same language, agreeing to move forward together, and then independently moving in completely different directions. In one case, a TDD developer and I spent several sessions in which I “translated” the language the Quality school folks were using. He then began substituting synonyms for the shared terminology to improve communication and reduce confusion. They managed to work things out reasonably well.

Is it helpful to draw distinctions between testing schools like this? Here is one area where it certainly helps: communicating testing concepts. When a word can mean different things depending on how a practitioner defines their role, it helps to understand where they are coming from. Cem Kaner has provided further insight on TDD, and we can use that to enhance our communication and collaboration with developers who are honing their own testing craft.

Fast Failures

I was talking with Mark McSweeny the other day about a test design I was thinking about. It involved attempting to automate a process that currently relies on a lot of manual visual inspection. The tricky part of automating this kind of testing is the potential for variation in the items under test. Variation is hard for a computer to handle, but a human who understands the context can instantly spot it and see whether it is permissible, or whether it constitutes a test failure. I was asking Mark’s advice on how to deal with the variation, and mentioned the tools I was thinking of using. Mark pointed out that I was overcomplicating the test design by thinking about what data structures, xUnit tools and libraries to use, and not thinking enough about the human tester.

He described “fast failure” tests he writes: pure change detectors designed to run quickly. They provide feedback very quickly, but little information other than “Pass” or “Fail”. When a test fails, the human steps in and does the manual inspection. In many cases, the human can tell at a glance whether the failure is a bug. If the change is acceptable and recurring, the test gets changed. If it isn’t, the human recognizes the bug and logs it, or just adds a new unit test and fixes the problem in their own code.
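
To make the idea concrete, here is a minimal sketch of what such a change detector might look like, assuming a hypothetical command-line tool render_report and a saved baseline file; the tool name, paths and format are illustrative, not the actual tests Mark described.

    import subprocess
    import unittest
    from pathlib import Path

    class ReportChangeDetector(unittest.TestCase):
        """Fast-failure test: flags any change, leaves the diagnosis to a human."""

        def test_report_matches_baseline(self):
            # Hypothetical tool and baseline path -- substitute your own.
            current = subprocess.run(
                ["render_report", "--format", "text"],
                capture_output=True, text=True, check=True,
            ).stdout
            baseline = Path("baselines/report.txt").read_text()
            # Pure change detection: any difference is a "Fail" worth a human look.
            self.assertEqual(baseline, current)

    if __name__ == "__main__":
        unittest.main()

When the assertion fails, the tester compares the two outputs; if the difference is an acceptable variation, the baseline file gets updated, otherwise a bug gets logged.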

Fast failure tests have a lot of potential as a testing tool. What Mark described is something I’ve written about before: Computer Assisted Testing. After all, I’m part of a school that believes testing is an intellectual craft, and that skilled tester activities cannot be automated. An issue many have heard me rant about before is that until we can program inference and have some sort of intelligent, thinking bots doing testing, we can’t automate all the tests a human tester can do. Humans handle variation easily, and respond to change. Fast failure tests yield the best of both worlds. The computer does what it is good at, and the human exploits the computer to help them concentrate on what they are good at. “So why didn’t you think of this solution, Jonathan?” you might ask. I guess I got caught up in the technology instead of thinking about a solution that harnesses both the computer and the skills of a human. I forgot to practice what I preach. This also demonstrates how brilliant people like Mark design simple, elegant solutions, and teach me something every time I talk to them.

There are some potential benefits to fast failure tests, even if they don’t completely emulate what a human does. For example, say a tester must manually inspect ten items every build. We develop some fast failure tests that do a rough approximation of that inspection, and run these automated tests every build. Say that two of those ten items under test report failures because of a variation. The tester inspects those two items manually and, using their judgement and skill, realizes that these are legal variations. Also, say that in one build out of five, the fast failure tests fail on one of the items under test because of a genuine bug that can be reported. Instead of the burden being completely on the tester to inspect each item every build, they now have to inspect far fewer items after each build. Even though they may get a couple of red herrings each build due to variation, the tests prove their worth by helping the tester quickly identify potential problems. This test design shows how a computer helps the tester work more efficiently.
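
Scaling the earlier sketch to the ten-item scenario might look something like this; again, the item names and file locations are hypothetical, and the point is simply that only the items that differ get escalated to the human tester.

    import unittest
    from pathlib import Path

    # Hypothetical list of the ten items the tester would otherwise inspect by hand.
    ITEMS = [f"item_{n:02d}" for n in range(1, 11)]

    class TenItemChangeDetector(unittest.TestCase):
        def test_items_match_baselines(self):
            for item in ITEMS:
                # subTest reports each differing item separately, so the tester
                # only has to look at the items that actually changed.
                with self.subTest(item=item):
                    current = Path(f"output/{item}.txt").read_text()
                    baseline = Path(f"baselines/{item}.txt").read_text()
                    self.assertEqual(baseline, current)

    if __name__ == "__main__":
        unittest.main()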

If there is a need for a more complex test that can provide more information about the failures, we can develop it over time. In the meantime, we have a solution that is good enough, and we can use it as a baseline for the test under development. However, a complex test automation solution can be dangerous. As Mark warned, test automation is software development, so it is just as prone to bugs, design problems, maintenance issues, etc. as any other software. A simple solution that requires some human intervention may be more efficient than a complex one that requires a lot of time spent in the automation code to keep it working.

I’m glad I have smart people like Mark around to talk to, who challenge my ideas and let me know when I’m off the mark.

Tim’s Comments on Software Testing and Scientific Research

Tim Van Tongeren commented on one of my recent posts, building on my thoughts on software testing and the philosophy of science. I like the parallel he draws between scripted versus exploratory testing and quantitative versus qualitative scientific research. When testing, what do we value more on a project? Tim says that this depends on project priorities.

He recently expanded on this topic, discussing the similarities between qualitative research and exploratory testing.

Tim researches and writes about a discipline that can teach us a lot about software testing: the scientific process.

Testing Values

I was thinking about the agile manifesto, and this blatant ripoff came to mind. As a software tester, I’ve come to value the following:

  • bug advocacy over bug counts
  • testable software over exhaustive requirements docs
  • measuring product success over measuring process success
  • team collaboration over departmental independence

Point 1: Project or individual bug counts are meaningless unless the important bugs are getting fixed. There are useful bug count related measurements, provided they are used in the right context. However, bug counts themselves don’t have a direct impact on the customer. Frequently, testers are motivated much more by how many bugs they log than by how many important bugs they find, report, advocate for, and pitch in to help get fixed before the product goes out the door.

Point 2: We usually don’t sell requirements documents to our customers (we tend to sell software products), and these docs often provide a false sense of all that is testable. Given a choice, I’d rather test the software, discovering requirements by interacting with customers and collaborating with the team, than follow requirements documents. At least we can start providing feedback on the software. At best, requirements docs are an attempt to put tacit knowledge on paper. At worst, they are out of date, and out of touch with what the customer wants. Planning tests only from requirements documents leaves us open to faults of omission.

Point 3: I find the obsession with processes in software development a bit puzzling, if not absurd. “But to have good software, we need to have a good process!” you say. Sure, but I fear we measure the wrong things when we look too much at the process. I’ve seen wonderful processes produce terrible products too many times. As a customer, I haven’t bought any software processes yet, but I do buy software products. I don’t think about processes at all as a consumer. I’ll take product excellence over “process excellence” any day. The product either works or doesn’t work as expected. If it doesn’t, I quietly move on and don’t do business with that company any more.

I have seen what I would call process zealotry, where teams were pressured not to talk about project failures because they “would cast a bad light” on the process that was used. I have seen this in “traditional” waterfall-inspired projects, and interestingly enough, in the agile world as well. If we have problems with the product, we should learn from the mistakes and strive to do better. Don’t cover up failures because you fear that your favorite process might get some bad press. Fix the product, and make the customer happy. If you don’t, they will quietly move on and you will eventually be out of business.

Point 4: The “QA” line of thinking that advocates an independent testing team doesn’t always work well in my experience. Too often, the QA folks end up as the process police, at odds with everyone else, and not enough testing gets done. Software testing is a challenging intellectual exercise, and software programs are very complex. The more testing we can do, and the more we collaborate to make that testing effective, the better. The entire team should be the Quality Assurance department. We succeed or fail as a team, and product quality, as well as adherence to development processes, is everyone’s responsibility.

Conventional Testers on Agile Projects – Getting Started Continued

Some of what you find out about agile methods may sound familiar. In fact, many development projects have adopted solutions that some agile methods employ. You may have already adjusted to some agile practices as a conventional tester without realizing it. For example, before the term “agile” was formally adopted, I was on more traditional projects that had some “agile” elements:

  • when I started as a tester, I spent a lot of my first year pair testing with developers
  • during the dot com bubble, we adopted an iterative life cycle with rapid releases at least every two weeks
  • one project required quick builds, so the team developed something very similar to a continuous integration build system with heavy test automation
  • developers I worked with had been refactoring since the early ’80s; they didn’t call it by that name, and used checkpoints in their code instead of the xUnit tests that would be used now
  • in a formal waterfall project, we had a customer representative on the team, and did quick iterations in between the formal signoffs from phase to phase
  • one project adapted Open Source-inspired practices and rapid prototyping

These practices were adopted by pragmatic, product-focused companies that needed to get something done to please the customer. Many of these projects would not consider themselves “agile” – they were just getting the job done. The difference is that agile methods form a complete methodology driven toward a certain goal, rather than a collection of practices a team has adjusted to improve what it is doing.

Other conventional testers tell me about projects they were on that were not agile, but did agile-like things. This shouldn’t be surprising. The iterative lifecycle has been around for many years (at least back to the 1940s). People have used a lot of methodologies within the iterative lifecycle without necessarily codifying them into formal methods, as some agile champions have. A lot of what agile methods talk about isn’t new. Jerry Weinberg has said that the methods employed on the Project Mercury team he was on in the early ’60s look indistinguishable from what is now known as Extreme Programming.1

Another familiar aspect of agile methods is the way projects are managed. Much of agile management theory draws heavily from the quality movement, lean manufacturing, and what some might call Theory Y management. Like the quality pundits of the past, many agile management writers are once again educating workers about the problems of Taylorism, or Theory X management.

What is new with agile methods is the comprehensive methodology descriptions driven by experience. From these practices, disciplined design and development methodologies such as Test-Driven Development have improved rapidly. Most importantly, a shared language has emerged for practices like “unit testing”, “refactoring”, “continuous integration” and others – many of which might have been widely practiced but called different things. This shared language helps a community of practice share and improve ideas much more efficiently. Common goals are much more easily identified when everyone involved is using the same terminology. As a result, the needs of the community have been quickly addressed by tool makers, authors, consultants and practitioners.

This has several implications for conventional testers that require some adjustments:

  • a new vocabulary of practices, rituals, tools and roles
  • getting involved in testing from day one
  • testing in iterations, which are often 2-4 weeks long
  • an absence of detailed, formalized requirements documents developed up front
  • requirements done by iteration in backlogs or on 3×5 story cards
  • often, a lack of a formal bug-tracking system
  • working knowledge of tools such as IDEs with refactoring and TDD support, xUnit automation frameworks, and continuous integration build tools (a minimal xUnit-style example follows this list)
  • a team focus over individual performance
  • developers who are obsessed with testing
  • working closely with the entire team in the same work area
  • not focusing on individual bug counts or lines of code
  • less emphasis on detailed test plans and scripted test cases
  • heavy emphasis on test automation using Open Source tools
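
For conventional testers who have not seen xUnit-style automation before, here is a small, hypothetical example of the kind of developer-written test you will encounter (and may help design) on an agile team; the function and values are made up purely for illustration.

    import unittest

    def apply_discount(price, percent):
        """Tiny illustrative function under test (hypothetical)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(20.00, 10), 18.00)

        def test_rejects_negative_percent(self):
            # In TDD, a test like this is typically written before the guard clause exists.
            with self.assertRaises(ValueError):
                apply_discount(20.00, -5)

    if __name__ == "__main__":
        unittest.main()

Tests like these run in seconds as part of a continuous integration build, which is one reason agile teams place less emphasis on detailed manual regression suites.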

Some of these changes sound shocking to a conventional tester. Without a detailed requirements document, how can we test? Why would a team not have a bug tracking database? What about my comprehensive test plans and detailed manual regression test suites? Where are the expensive capture/replay GUI automation tools? How can we keep up with testing when the project is moving so quickly?

A good place to start addressing some of these questions is Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord.

We’ll address some of these challenges in this series, and look at examples of testing activities that conventional testers can engage in on agile projects.

1 p. 48 “Iterative and Incremental Development: A Brief History”, Larman and Basili, 2003