The Kick in the Discovery

“Why do you like software testing?” is a question that I get asked frequently. A phrase from Richard Feynman comes to mind. When Feynman was asked about how he felt about the reward of his Nobel Prize, he said one of the real rewards of the work he did was “the kick in the discovery.”1 This has stuck with me. As a software tester, I enjoy discovering bugs. I seem to be one of those people who enjoys seeing how a system works when stressed to its limits. I get a kick out of discovering something new in a system, or being one of the first people to use a new system. Scientists like Feynman fascinate me, and a lot of what they say resonates with my thoughts on testing. Software testing can learn a lot from scientific theory; the parallels are very interesting.

Exploratory Testing: Exploring Unintended Test Results

Many great scientific discoveries have come about by accident during the typical scientific process of conjecture and refutation. A controlled experiment is an attempt to falsify a hypothesis. When an experiment has unintended consequences, however, some scientists do a great job of handling results that don’t go according to plan. This often leads to great discoveries.

One parallel to software testing can be found in Ernest Rutherford and his work in nuclear physics. (Someone who spent time “smashing atoms” sounds like someone who might be good at software testing.) Rutherford observed unintended consequences, built on the work of others, and collaborated with peers. He noticed unintended consequences in experiments done by his colleagues and developed patterns of thought around what was being observed.

The work that eventually led to the discovery of the nucleus of the atom is an interesting topic for software testers to study. Suppose those running the gold foil experiment had worked like routine-school testers: they scripted the test case of firing alpha particles at the foil, with the expected result that all the particles would pass through. Would they have noticed the ones that bounced back? If so, what would they have done with the particles that bounced back, which weren’t in the plan and weren’t in the test script’s expected results? What if they were so focused on the particles that were supposed to go through the foil that they didn’t notice the ones that did not?

Not knowing exactly how much Rutherford and his colleagues had formalized the experiment, I can’t make any claims about exactly what they did. However, we can see the results of the way Rutherford thought about scientific experimentation. What he and his colleagues observed changed the initial hypothesis, and subsequent experiments led to the discovery of the nucleus of the atom. They had an idea, tested it out, got unintended results, and Rutherford explored around those unintended results. He found something new that contradicted conventional knowledge and, in doing so, transformed the face of modern physics. We might infer that, like a good exploratory tester, Rutherford was more concerned with thinking about what he was doing than with following a formula by rote to prove his hypothesis.

Software testers don’t make discoveries that transform scientific knowledge, but the discoveries that are made can transform project knowledge. At the very least, these discoveries potentially save companies a lot of money. Bug discoveries are hard to measure, but every high-impact bug that is discovered and fixed prior to shipping saves the vendor money. Discoveries of high-impact bugs may be minimized by the team at first, but many times those discoveries are the difference between project success and failure.

James Bach and Cem Kaner say that exploratory testing is a way of thinking about testing. Exploratory testing, like scientific experimentation, allows for improvisation and for the exploration of unintended results. Those unintended results are often where the real discoveries lie in science, and where the bugs lie in software testing. Detailed test plans and pre-scripted test cases based on limited knowledge may discourage discovery. Tim Van Tongeren and others have researched directed observation and the weaknesses associated with it.

One way of thinking about exploratory testing is to see it as a cycle: observe unintended consequences, explore the possibilities, form a hypothesis or theory about the results, and experiment again to see whether the new theory holds under certain circumstances. This cycle continues, and testing becomes as much about making new discoveries as it is about confirming intended behaviours. Pre-scripting steps and intended consequences can discourage observing unintended consequences in the first place. Detailing a hypothesis, testing mission, or test case prior to testing is fine, but slavishly sticking to pre-scripted results can stifle discovery.
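As a toy illustration of that cycle, here is a small sketch in Python. The function under test (`buggy_round`) and the chosen inputs are hypothetical examples, not anything from a real project; the “surprise” comes from Python’s built-in `round()`, whose round-half-to-even behaviour often catches testers expecting round-half-up:

```python
# A minimal sketch of the explore -> observe -> hypothesize -> re-test cycle.
# "buggy_round" is a hypothetical system under test for illustration only.

def buggy_round(x):
    # Python's built-in round() uses round-half-to-even,
    # which surprises testers expecting round-half-up.
    return round(x)

# 1. Explore: try a spread of inputs, not just the scripted "expected" ones.
observations = {x: buggy_round(x) for x in [0.5, 1.5, 2.5, 3.5]}

# 2. Observe unintended results: some halves did NOT round up as expected.
surprises = {x: y for x, y in observations.items() if y != int(x) + 1}

# 3. Hypothesize: halves round to the nearest even integer.
# 4. Re-test the hypothesis on fresh inputs to see if it holds.
hypothesis_holds = all(buggy_round(n + 0.5) % 2 == 0 for n in range(10))
```

The point of the sketch is the shape of the loop, not the arithmetic: an unscripted pass turns up a surprise, the surprise becomes a hypothesis, and the hypothesis drives the next round of tests.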

I have had some testers call exploratory testing “unscientific”. To them, a good scientific experiment consists of carefully scripted test cases that outline every step and its expected results. However, science often doesn’t really work that way. A good deal of care is put into the variables in an experiment, but a lot of exploration goes on as well. What is important is not necessarily the formula, but how you deal with unintended consequences. Scientific work is often more about thinking, dealing with empirical data, and making inferences based on experiments.

Scientific theories go far beyond empirical data, and new experiments confirm and disconfirm theories all the time. Yesterday’s scientific truth becomes today’s scientific joke. “Can you believe that people once thought the world was flat?” As a software tester I’ve known “zero defect” project managers who thought the software was bug-free when it shipped. It wasn’t funny when they were proved wrong, but the software testers were treated like “round ballers” when they provided disconfirming information prior to release.

Good scientists deal with a lot of uncertainty. Good software testers need to be comfortable with uncertainty as well. Software systems are becoming so complicated that it is impossible to predict all the consequences of system interaction. Directed observation requires predictability, and runs the risk of missing the results that aren’t predictable.

Exploratory testing is a way of thinking about testing that can be modelled after the scientific method. It doesn’t need to be some ad-hoc, fly-by-the-seat-of-your-pants kind of testing that lacks discipline. Borrow a little thinking from the scientific community, and you can have very disciplined, adaptable, discovery-based testing that can reliably cope with unintended consequences.

1. Richard Feynman, The Pleasure of Finding Things Out, p. 12.

Exploratory Testing

Exploratory Testing is an effective way of thinking about software testing. Skilled testers can use it to find important bugs quickly, provide rapid feedback to the rest of the team, and add diversity to their testing activities.

To learn more about Exploratory Testing, check out James Bach’s site for articles. Cem Kaner has also written about it. Either of those sources can explain it better than I can.

I sometimes get confused looks from some practitioners when I tell them that I’ve found Exploratory Testing to be effective for my own testing. Some of the confusion may come from not knowing exactly what Exploratory Testing is.

I’ve worked as a testing lead on projects and have directed others to do Exploratory Testing to complement automated tests, and have sometimes met with resistance. Those who resisted later told me that they weren’t used to finding so many defects when they tested, but were still uncomfortable doing unscripted testing even though it seemed to be more effective. When I repeat what James Bach says, that “…testing is an interactive cognitive activity,” and tell them that I value them for their brains and expertise, that they are smart testers who can add a lot of value, and that I am pleased with the results, it’s rewarding to watch their confidence grow.

Some of the confusion may come from working in an unscripted environment when one is used to following a script. When confidence begins to displace confusion, testers often get more creative, they seem to improvise more when testing, and they seem to want to collaborate and communicate more with the rest of the team. When they are finding important issues quickly, sharing them with the developers and getting rapid positive feedback on their own work, it seems to help build team cohesion as well as individual confidence.

Here’s where I think another source of some of the confusion is. Some people seem to think that you can only do Exploratory Testing on software you haven’t used before. At least that’s the impression I get. If I say I will do Exploratory Testing on each build after the automated test suite runs, some people act puzzled that I would be doing Exploratory Testing on a program I am already familiar with.

As a software tester, I can still explore, inquire, and discover while testing software I’m familiar with. If we think about it, that’s often how scientific research works. Scientists deal with the familiar and look for patterns or occurrences that don’t seem to fit known models. Reviewing the familiar to verify that the models still hold true is common. Discovery can readily come from exploring something we know well. A known behaviour might change under conditions we haven’t seen yet, or a new experiment (test) run in a different way might yield results we haven’t seen before.

Exploratory Testing isn’t always a matter of working through a program for the first time as a discoverer. Perhaps we are thinking of an explorer analogy like Lewis and Clark when we should be thinking of the scientific method. Sometimes Exploratory Testing can be like a voyage of discovery charting the unknown, but it is more often like a scientific experiment in which we change variables based on observed behaviour.

Exploratory Testing facilitates the pursuit of knowledge and creative problem solving. Scripted testing, or directed observation, is a common “best practice” in software testing, but how often do we miss problems because we direct what we want to see in a program and miss the obvious? Exploratory Testing is one way to test with diversity and look at the familiar in new ways.