Descriptive and Prescriptive Testing

While many of us drone on about scripted testing vs. exploratory testing, the reality is that on real projects we tend to execute testing with a blend of both. It often feels lop-sided – on many projects, scripted testing is the norm and exploratory testing isn’t acknowledged or supported. On others, the opposite is true. I’ll leave the debate on this topic to others – I don’t care what you do on your projects to create value. I would encourage you to try some sort of blend, particularly if you are curious about exploratory testing. However, I’m more interested in the styles themselves, and why some people are attracted to one side of the debate or the other.

Recently, David Hussman and I have been collaborating, and he pointed out the difference between “prescriptive” and “descriptive” team activities. A prescriptive style is a preference towards direction (“do this, do that”), while a descriptive style is more reflective (“this is what we did”). Both involve desired outcomes or goals, but one attempts to plan the path to the outcome in detail, in advance, while the other relies on trying to reach the goals with the tools you have at hand: reflecting on what you did, identifying gaps, improving as you go, and moving towards that end goal.

With a descriptive style of test execution, you try to reach a goal using lightweight test guidance. You have a focus, and coarser-grained support for it than scripted testing provides. (The guidance is there, it just isn’t as explicit.) As you test, and when you report on your testing, you describe things like coverage, what you discovered, bugs, and your impressions and feelings. With a prescriptive style of testing, you are directed by test plans and test cases for testing guidance, and follow a more direct process of test execution.

Scripted testing is more prescriptive (in general) and exploratory testing is more descriptive (in general). The interesting thing is that both styles work. There are merits and drawbacks to both. However, I have a strong bias towards a descriptive style. I tend to prefer an exploratory testing approach, and I can implement it with a great deal of structure and traceability, utilizing different testing techniques and styles. I prefer the results the teams I work with get when they use a more descriptive style, but there are others who credibly claim to prefer the opposite. I have to respect that there are different ways of solving the testing problem, and if what you’re doing works for you and your team, that’s great.

I’ve been thinking about personality styles and who might be attracted to different test execution styles. For example, I helped a friend out with a testing project a few weeks ago. They directed me to a test plan and classic scripted test cases. Since I’ve spent a good deal of time on Agile teams over most of the past decade, I haven’t been around a lot of scripted tests for my own test execution. Usually we use coverage outlines, feature maps, checklists, and other, more lightweight sources of information to guide our testing. It took me back to the early days of my career, and it was kind of fun to try something else for a while.

Within an hour or two of following test cases, I got worried about my mental state and energy levels. I stopped thinking and engaging actively with the application and I felt bored. I just wanted to hurry up and get through the scripted tests I’d signed on to execute and move on. I wanted to use the scripted test cases as lightweight guidance or test ideas to explore the application in far greater detail than what was described in the test cases. I got impatient and I had to work hard to keep my concentration levels up to do adequate testing. I finally wrapped up later that day, found a couple of problems, and emailed my friend my report.

The next day, mission fulfilled, I changed gears and used an exploratory testing approach. I created a coverage outline and used the test cases as a source of information to refer to if I got stuck. I also asked for the user manual and release notes. I did a small risk assessment and planned out different testing techniques that might be useful. I grabbed my favorite automated web testing tool and created some test fixtures with it so I could run through hundreds of tests using random data very quickly. That afternoon, I used my lightweight coverage to help guide my testing and found and recorded much richer information and more bugs, and I had a lot of questions about vague requirements and inconsistencies in the application.
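To give a flavour of the kind of fixture I mean, here is a minimal sketch, assuming Python with Selenium WebDriver and a hypothetical order form (the specific tool, URL, and field names are illustrative, not the ones from that project). The point is the style: hundreds of quick, data-driven probes that extend my reach while I keep thinking.

# Minimal sketch of a data-driven fixture for quick, random-data probes.
# Assumptions: Selenium WebDriver is the automation tool (the post doesn't
# name one), and there is a hypothetical form at EXAMPLE_URL with fields
# named "name", "quantity", and "submit".

import random
import string

from selenium import webdriver
from selenium.webdriver.common.by import By

EXAMPLE_URL = "http://localhost:8080/order"  # hypothetical app under test
CHARSET = string.ascii_letters + string.digits + string.punctuation + " "


def random_text(max_len=50):
    """Generate a random string; occasionally empty or very long."""
    length = random.choice([0, 1, random.randint(2, max_len), 500])
    return "".join(random.choice(CHARSET) for _ in range(length))


def submit_order(driver, name, quantity):
    """Fill the (hypothetical) order form, submit it, and return the page."""
    driver.get(EXAMPLE_URL)
    driver.find_element(By.NAME, "name").send_keys(name)
    driver.find_element(By.NAME, "quantity").send_keys(quantity)
    driver.find_element(By.NAME, "submit").click()
    return driver.page_source


def main():
    driver = webdriver.Firefox()
    suspicious = []
    try:
        for i in range(200):  # hundreds of quick probes
            name = random_text()
            quantity = random_text(10)
            page = submit_order(driver, name, quantity)
            # Flag anything that looks like an unhandled error for follow-up.
            if "Traceback" in page or "500" in driver.title:
                suspicious.append((i, name, quantity))
    finally:
        driver.quit()

    for i, name, quantity in suspicious:
        print(f"probe {i}: name={name!r} quantity={quantity!r} looked broken")


if __name__ == "__main__":
    main()

Nothing in a sketch like that replaces judgement; it just generates far more raw observations per hour than stepping through scripted cases by hand, and leaves me free to chase the interesting ones.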

What was different? The test guidance I used had more sources of information and models of coverage, and it wasn’t an impediment to my thinking about testing. It put the focus on my test execution, and I used tools to help me do more, better, faster test execution to get as much information as I could, in a style that helps me fulfill my mission as a tester. I had a regression test coverage outline to repeat what needed to be repeated, and I had other outlines and maps related to requirements, features, user goals, etc. that helped direct my inquisitive mind and helped me be more consistent and thorough. I used tools to support my ideas and to help extend my reach, rather than trying to get them to repeat what I had done. I spent more time executing tests – many different kinds of tests, using different techniques – than managing test cases, and the results reflected that.

My friend was a lot happier with my work product from day 2 (using a descriptive style) than from day 1 (using a prescriptive style). Of course, some of my prescriptive friends could rightly argue that it was my interpretation and approach that were different from theirs. But I’m a humanist on software projects, and I want to know why that happens. Why do I feel trapped and bored doing mostly scripted testing, while they feel fearful doing more exploratory testing? We tend to strike a balance somewhere in the middle on our projects, and play to the strengths and interests of the individuals anyway.

So what happened with my testing? Part of me thinks that the descriptive style is superior. However, I realize that it is better for me – it suits my personality. I had a lot of fun and used a lot of different skills to find important bugs quickly. I wasn’t doing parlor-trick exploratory testing and finding superficial bugs – I had a systematic, thorough, traceable approach. More importantly for me, I enjoyed it thoroughly. Even more importantly, my friend, the stakeholder on the project who needed me to discover information they could use, was much happier with what I delivered on day 2 than on day 1.

I know other testers who aren’t comfortable working the way I did. If I attack scripted testing, they feel personally attacked, and I think that’s because the process suits their personality. Rather than debate, I prefer we work using different tools and techniques and approaches and let our results do the talking. Often, I learn something from my scripting counterpart, and they learn something from me. This fusion of ideas helps us all improve.

That realization sent my thinking off in a different direction. Not into one of those “scripted testing == bad, exploratory testing == good” debates; rather, I wondered about testing styles and personality, and what effect we might have when we encourage one style and ignore or vilify another. Part of that effect might be to drive off a certain personality type who looks at problems differently and has a different skill set.

In testing, there are often complaints about not being able to attract skilled people, or about losing skilled people to other roles such as programming, marketing, or technical writing. Why do we have trouble attracting and keeping skilled people in testing? There are a lot of reasons, but might one be that we discourage a certain kind of personality type and related skill set by discouraging descriptive testing styles like exploratory testing? Also, on some of our more zealous ET or Agile teams, are we marginalizing worthwhile people who are more suited to a prescriptive style of working?

We also see this in testing tools. Most are geared towards one style of testing, a prescriptive model. I’m trying to help get the ball rolling on the descriptive side with the Session Tester project. There are others in this space as well, and I imagine we will see this grow.

There have to be more testing styles out there than exploratory vs. scripted and manual vs. automated. I personally witness a lot of blends, and I encourage blends of all of the above. I wonder if part of the problem with the image of testing, and our trouble attracting talented people, lies in how we insist testing must be approached. I try to look at using all the types of testing we can use on projects to discover important information and create value. Once we find the right balance, we need to monitor it and change it over time to adjust to the dynamics of our projects. I don’t understand the inflexibility we often display towards different testing ideas. How will we know if we don’t try?

What’s wrong with embracing different styles and creating a testing mashup on our teams? Why does it have to be one way or the other? Also, what other styles of testing, beyond exploratory approaches, are descriptive? What prescriptive styles are there other than scripted testing (test plan and test case driven)? I have some ideas, but email me if you’d like to see your thoughts appear in this blog.