
Descriptive and Prescriptive Testing

While many of us drone on about scripted testing vs. exploratory testing, the reality is that on real projects we tend to execute testing as a blend of both. It often feels lop-sided: on many projects, scripted testing is the norm and exploratory testing isn’t acknowledged or supported; on others, the opposite is true. I’ll leave the debate on this topic up to others – I don’t care what you do on your projects to create value. I would encourage you to try some sort of blend, particularly if you are curious about exploratory testing. However, I’m more interested in the styles themselves, and why some people are attracted to one side of the debate or the other.

Recently, David Hussman and I have been collaborating, and he pointed out the difference between “prescriptive” and “descriptive” team activities. A prescriptive style is a preference towards direction (“do this, do that”), while a descriptive style is more reflective (“this is what we did”). Both involve desired outcomes or goals, but one attempts to plan the path to the outcome in detail in advance, while the other relies on trying to reach the goals with the tools you have at hand: reflecting on what you did, identifying gaps, improving as you go, and moving towards that end goal.

With a descriptive style of test execution, you try to reach a goal using lightweight test guidance. You have a focus, and coarser-grained support for it than scripted testing provides. (The guidance is there, it just isn’t as explicit.) As you test, and when you report on your testing, you describe things like coverage, what you discovered, bugs, and your impressions and feelings. With a prescriptive style of testing, you are directed by test plans and test cases for guidance, and you follow a more direct process of test execution.

Scripted testing is more prescriptive (in general) and exploratory testing is more descriptive (in general). The interesting thing is that both styles work. There are merits and drawbacks to both. However, I have a strong bias towards a descriptive style. I tend to prefer an exploratory testing approach, and I can implement it with a great deal of structure and traceability, utilizing different testing techniques and styles. I prefer the results the teams I work with get when they use a more descriptive style, but there are others who credibly claim to prefer the opposite. I have to respect that there are different ways of solving the testing problem, and if what you’re doing works for you and your team, that’s great.

I’ve been thinking about personality styles and who might be more attracted to different test execution styles. For example, I helped a friend out with a testing project a few weeks ago. They directed me to a test plan and classic scripted test cases. Since I’ve spent a good deal of time on Agile teams over most of the past decade, I haven’t been around a lot of scripted tests for my own test execution. Usually we use coverage outlines, feature maps, checklists, and other more lightweight sources of information to guide our testing. It took me back to the early days of my career, and it was kind of fun to try something else for a while.

Within an hour or two of following test cases, I got worried about my mental state and energy levels. I had stopped thinking about and actively engaging with the application, and I felt bored. I just wanted to hurry up, get through the scripted tests I’d signed on to execute, and move on. I wanted to use the scripted test cases as lightweight guidance or test ideas to explore the application in far greater detail than what they described. I got impatient, and I had to work hard to keep my concentration levels up to do adequate testing. I finally wrapped up later that day, found a couple of problems, and emailed my friend my report.

The next day, mission fulfilled, I changed gears and used an exploratory testing approach. I created a coverage outline and used the test cases as a source of information to refer to if I got stuck. I also asked for the user manual and release notes. I did a small risk assessment and planned out different testing techniques that might be useful. I grabbed my favorite automated web testing tool and created some test fixtures with it so I could run through hundreds of tests using random data very quickly. That afternoon, I used my lightweight coverage to help guide my testing and found and recorded much richer information and more bugs, and I had a lot of questions about vague requirements and inconsistencies in the application.
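
For illustration, here is a rough sketch of that kind of fixture, using Selenium WebDriver in Python as a stand-in (the tool I actually used isn’t the point, and the URL, form fields, and input rules below are hypothetical examples, not details from that project):

    # Sketch of a data-driven fixture that submits a web form with random input
    # many times in a row. Assumes Selenium WebDriver for Python; the URL and
    # field names are hypothetical placeholders.
    import random
    import string

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def random_text(max_len=50):
        # Occasionally return empty or oversized strings to probe edge cases.
        length = random.choice([0, 1, max_len, random.randint(2, max_len * 4)])
        chars = string.ascii_letters + string.digits + " "
        return "".join(random.choice(chars) for _ in range(length))

    def run_random_form_tests(url, runs=200):
        driver = webdriver.Firefox()
        suspects = []
        try:
            for i in range(runs):
                driver.get(url)
                driver.find_element(By.NAME, "name").send_keys(random_text())
                driver.find_element(By.NAME, "comment").send_keys(random_text())
                driver.find_element(By.NAME, "submit").click()
                # Flag anything suspicious for a human to investigate later.
                if "error" in driver.page_source.lower():
                    suspects.append((i, driver.current_url))
        finally:
            driver.quit()
        return suspects

    if __name__ == "__main__":
        print(run_random_form_tests("http://example.com/feedback"))

A fixture like this extends the tester’s reach rather than replacing the tester’s judgment: it churns through inputs quickly and leaves the evaluation of anything it flags to a human.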

What was different? The test guidance I used drew on more sources of information and models of coverage, and it wasn’t an impediment to my thinking about testing. It put the focus on my test execution, and I used tools to help me do more, better, faster test execution and get as much information as I could, in a style that helps me fulfill my mission as a tester. I had a regression test coverage outline to repeat what needed to be repeated, and I had other outlines and maps related to requirements, features, user goals, etc. that helped direct my inquisitive mind and helped me be more consistent and thorough. I used tools to support my ideas and to extend my reach, rather than trying to get them to repeat what I had done. I spent more time executing many different kinds of tests using different techniques than I spent managing test cases, and the results reflected that.

My friend was a lot happier with my work product from day 2 (using a descriptive style) than from day 1 (using a prescriptive style). Of course, some of my prescriptive friends could rightly argue that it was my interpretation and approach that were different from theirs. But I’m a humanist on software projects, and I want to know why that happens. Why do I feel trapped and bored doing mostly scripted testing, while they feel fearful doing more exploratory testing? We tend to strike a balance somewhere in the middle on our projects, and play to the strengths and interests of the individuals anyway.

So what happened with my testing? Part of me thinks that the descriptive style is superior. However, I realize that it is better for me – it suits my personality. I had a lot of fun and used a lot of different skills to find important bugs quickly. I wasn’t doing parlor-trick exploratory testing and finding superficial bugs – I had a systematic, thorough, traceable approach. More importantly for me, I enjoyed it thoroughly. Even more importantly, my friend, the stakeholder on the project who needed me to discover information they could use, was much happier with what I delivered on day 2 than on day 1.

I know other testers who aren’t comfortable working the way I did. If I attack scripted testing, they feel personally attacked, and I think that’s because the process suits their personality. Rather than debate, I prefer we work using different tools and techniques and approaches and let our results do the talking. Often, I learn something from my scripting counterpart, and they learn something from me. This fusion of ideas helps us all improve.

That realization started off my thinking in a different direction. Not in one of those “scripted testing == bad, exploratory testing == good” debates, but I wondered about testing styles and personality and what effect we might have when we encourage a style and ignore or vilify another. Some of that effect might be to drive off a certain personality type who looks at problems differently and has a different skill set.

In testing, there are often complaints about not being able to attract skilled people, or about losing skilled people to other roles such as programming, marketing, or technical writing. Why do we have trouble attracting and keeping skilled people in testing? Well, there are a lot of reasons, but might one be that we discourage a certain kind of personality type and related skill set by discouraging descriptive testing styles like exploratory testing? And on some of our more zealous ET or Agile teams, are we marginalizing worthwhile people who are better suited to a prescriptive style of working?

We also see this in testing tools. Most are geared towards one style of testing, a prescriptive model. I’m trying to help get the ball rolling on the descriptive side with the Session Tester project. There are others in this space as well, and I imagine we will see this grow.

There have to be more testing styles out there than exploratory vs. scripted and manual vs. automated. I personally witness a lot of blends, and I encourage blends of all of the above. I wonder if part of the problem with the image of testing, and with attracting talented people, lies in how we insist testing must be approached. I try to look at using all the types of testing we can use on projects to discover important information and create value. Once we find the right balance, we need to monitor and change it over time to adjust to the dynamics of our projects. I don’t understand the inflexibility we often display towards different testing ideas. How will we know if we don’t try?

What’s wrong with embracing different styles and creating a testing mashup on our teams? Why does it have to be one way or the other? Also, what other styles of testing besides exploratory approaches are descriptive? What prescriptive styles other than scripted testing (test plan, test case driven) are there? I have some ideas, but email me if you’d like to see your thoughts appear in this blog.

Testing Debt

When I’m working on an agile project (or any project using an iterative lifecycle), an interesting phenomenon occurs. I’ve been struggling to come up with a name for it, and conversations with Colin Kershaw have helped me settle on “testing debt”. (Note: Johanna Rothman has touched on this before; she considers it to be part of technical debt.) Here’s how it works:

  • in iteration one, we test all the stories as they are developed, and are in synch with development
  • in iteration two, we remain in synch testing stories, but when we integrate what has been developed in iteration one with the new code, we now have more to test than just the stories developed in that iteration
  • in iteration three, we have the stories to test in that iteration, plus the integration of the features developed in iterations that came before

As you can see, integration testing piles up. Eventually, we have so much integration testing to do on top of story testing that we have to sacrifice one or the other, because we are running out of time. To end the iteration (often two to four weeks in length), some testing needs to be cut from the iteration and looked at later. I prefer keeping in synch with development, so I consciously incur “integration testing debt”, and we schedule time at the end of development to test the completed system.
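
To make the pile-up concrete, here is a toy model of the effect (my own illustration; the story counts and effort numbers are arbitrary assumptions, not figures from any project):

    # Toy model of testing debt: story-testing effort is constant per iteration,
    # while integration-testing effort grows with everything built so far.
    # All numbers are illustrative assumptions.
    STORIES_PER_ITERATION = 5
    STORY_TEST_EFFORT = 1.0     # effort units to test one new story
    INTEGRATION_EFFORT = 0.2    # effort units to re-check one existing story
                                # against the newly integrated code

    built_so_far = 0
    for iteration in range(1, 7):
        story_work = STORIES_PER_ITERATION * STORY_TEST_EFFORT
        integration_work = built_so_far * INTEGRATION_EFFORT
        built_so_far += STORIES_PER_ITERATION
        print(f"Iteration {iteration}: story testing {story_work:.1f}, "
              f"integration testing {integration_work:.1f}")

    # By iteration 6, integration testing has grown from 0.0 to 5.0 effort units
    # while story testing stays at 5.0: work the iteration can no longer absorb.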

Colin and I talked about this, and we explored other kinds of testing we could be doing. Once we had a sufficiently large list of testing (unit testing, “ility” testing, etc.), it became clear that “testing debt” was a more appropriate term than “integration testing debt”.

Why do we want to test that much? As I’ve noted before, we can do testing in three broad contexts: the code context (addressed through TDD), the system context and the social context. The social context is usually the domain of conventional software testers, and tends to rely on testing through a user interface. At this level, the application becomes much more complex, greater than the sum of its parts. As a result, we have a lot of opportunity for testing techniques to satisfy coverage. We can get pretty good coverage at the code level, but we end up with more test possibilities as we move towards the user interface.

I’m not talking about what is frequently called “iteration slop” or “trailer-hitched QA” here. Those occur when development is done, and testing starts at the end of an iteration. The separate QA department or testing group then takes the product and deems it worthy of passing the iteration after they have done their testing in isolation. This is really still doing development and testing in silos, but within an iterative lifecycle.

I’m talking about doing the following within an iteration, alongside development:

  • work as a sounding board with development on emerging designs
  • help generate test ideas prior to story development (generative TDD)
  • help generate test ideas during story development (elaborative TDD)
  • provide initial feedback on a story under development
  • test a story that has completed development
  • integration test the product developed to date

Of note, when we are testing alongside development, we can actually engage in more testing activities than when working in phases (or in a “testing” phase near the end). We are able to complete more testing, but that can require that we use more testers to still meet our timelines. As we incur more testing debt throughout a project, we have some options for dealing with it. One is to leave off story testing in favour of integration testing. I don’t really like this option; I prefer keeping the feedback loop as tight as we can on what is being developed now. Another is to schedule a testing phase at the end of the development cycle to do all the integration, “ility”, system testing etc. Again I find this can cause a huge lag in the feedback loop.

I prefer a trade-off. We keep as tight a feedback loop as we can on testing the stories being developed, so we stay in synch with the developers. We do as much integration, system, and “ility” testing as we can in each iteration, but when we are running out of time, we incur some testing debt in these areas. As the product grows (and there is now much more potential for testing), we bring in more testers to help address the testing debt, bringing on the maximum number we can near the end. We schedule a testing iteration at the end to catch up on the testing debt we determine will best help us mitigate project risk.

There are several kinds of testing debt we can incur:

  • integration testing
  • system testing
  • security testing
  • usability testing
  • performance testing
  • some unit testing

And the list goes on.

This idea is very much a work-in-progress. Colin and I have both noticed that on the development side, we are also incurring testing debt. Testing is an area with enormous potential, as Cem Kaner has pointed out in “The Impossibility of Complete Testing” (Presentation) (Article).

Much like technical debt, we can incur testing debt unknowingly. But unlike technical debt, which refactoring can pay down, I don’t know of a way to repay testing debt other than to strategically add more testers, and to schedule time to pay it back when we are dealing with contexts other than program code. Even in the code context, we may still incur testing debt that refactoring doesn’t completely pay down.

How have you dealt with testing debt? Did you realize you were incurring this debt, and if so, how did you deal with it? Please drop me a line and share your ideas.

Virtual Bugs

Mark McSweeny and I were talking about some of the challenges facing conventional testers on agile projects. One such challenge is what to do with bugs found during development, such as during Test-Driven Development when a tester and developer are pairing. It doesn’t seem fair to the developer to formally log bugs on a story before they have completed development on it. Many of them will be moot once the story is done, but some of them might squeak through. How do we keep track of them in a constructive way?

When they are pairing and developing tests, new test cases are added as they are generated, and code is added to make them pass. However, sometimes when bugs are found during story development, the tester can overwhelm the developer and impede development progress. At this point, the developer and tester pair can split up, and the developer can pair with another developer to work towards completing the story. As a tester, what do I do with the bugs we discovered but couldn’t get unit tests finished for?

On small teams, I will keep a running tab of these bugs in my notes. When the story is complete, I check my notes and test those scenarios first, then log the ones that weren’t fixed during the story as “bug stories”. This is fine if there are a small number of developers and I’m the only tester, but it doesn’t scale well. On slightly larger teams, I have also used a wiki to record these bugs, which the other testers reviewed and used as well. When they tested a completed story, they would check the wiki first for any of these bugs. Any that weren’t addressed in development were then logged as bug stories or in a fault-tracking system. This creates separate classes of bugs, which causes its own problems, and it was hard to maintain the two systems, the wiki and the bug tracker.
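
For illustration, here is a rough sketch of that running tab as code (the story IDs, notes, and helper names are invented examples; in practice this lived in my notebook or on a wiki, not in a script):

    # Minimal sketch of a "virtual bug" running tab, keyed by story.
    # Story identifiers and bug notes are invented examples.
    from collections import defaultdict

    virtual_bugs = defaultdict(list)

    def note_virtual_bug(story_id, description):
        # Record a bug observed while the story is still under development.
        virtual_bugs[story_id].append({"description": description, "resolved": False})

    def recheck_list(story_id):
        # When the story is complete, retest these scenarios first.
        return [b for b in virtual_bugs[story_id] if not b["resolved"]]

    def promote_unresolved(story_id):
        # Anything still failing after the story is done becomes a bug story
        # or an entry in the fault-tracking system.
        return [b["description"] for b in virtual_bugs[story_id] if not b["resolved"]]

    # Example: notes taken while pairing on a story.
    note_virtual_bug("STORY-42", "date field accepts Feb 30")
    note_virtual_bug("STORY-42", "error page shows a raw stack trace")
    print(recheck_list("STORY-42"))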

As I was describing some of the problems I’ve come across with bugs found during story development, Mark cut through my usual verbosity with clarity and said I was describing “virtual bugs”. This is a lot more concise than my five-minute hand-waving explanation of this different class of bugs.

I have started calling the bugs found during story development “virtual bugs”. My question to other conventional testers on agile projects is: “How do you deal with virtual bugs?” Please share your experiences.

Dehumanizing Software Testing

I was talking with Daniel Gackle, a software developer and philosophical thinker, about developing software and systems. Daniel mentioned in passing that we often forget about the humanity in the system – the motivation of the human who is using the tool, or doing the work. This comment resonated with me, particularly with regards to some software testing practices.

Sometimes when we design test plans and test cases, or use testing tools, we try to minimize the humanity of the tester. The feeling might be that humans are prone to error, and we want solid, repeatable tests, so we do things to try to minimize human error. One practice is the use of procedural test scripts. The motivation is often that if the test scripts are detailed enough, anyone can repeat them, with the supposed benefit of less chance for variability or error. The people who write these tests try to be as detailed as possible, one motivation being that the test must be repeated the same way each time; variation is undesirable, because we want this exact test case to be run the same way every time it is run.

This type of thinking spills over into test automation as well. There is a push to have tools that can be used by anyone. These tools take care of the details; all a user needs to do is learn the basics, then point and click, and the tool does the rest. We don’t need specialists, then: the tool will handle test case design, development, and execution.

Without going into the drawbacks that both of these approaches entail, I want to focus on what I would call the “dehumanizing of software testing”. When we place less value on the human doing the work, and try to minimize their direct interaction with software under test, what do we gain, and what do we lose?

In The Dumbing Down of Programming, Ellen Ullman describes some of what is lost when using tools that do more work for us. Ullman says:

the desire to encapsulate complexity behind a simplified set of visual representations, the desire to make me resist opening that capsule — is now in the tools I use to write programs for the system.

It is today (as it was in 1998 when the article was written) also present in procedural test cases (or scripts) and in the popular “capture/replay” or event-recording testing tools. Both seek to “encapsulate complexity” behind an interface. Both practices, I would argue, lead to dehumanizing software testing.

Ullman mentions that the original knowledge is something we give up when we encapsulate complexity:

Over time, the only representation of the original knowledge becomes the code itself, which by now is something we can run but not exactly understand.

When we don’t think about the human interaction that I believe is so valuable to software testing, we need to be aware of what we are giving up. There is a trade-off between interactive, cognitive testing and testing through interfaces where the complexity is hidden from us. There are cases where the latter is useful; we don’t always need to understand the underlying complexity of an application when we are testing against an interface. However, as Ullman says, we should be aware of what is going on:

Yet, when we allow complexity to be hidden and handled for us, we should at least notice what we’re giving up. We risk becoming users of components, handlers of black boxes that don’t open or don’t seem worth opening.

When is hiding the complexity dehumanizing software testing, and when is it a valuable practice? I would argue that as soon as we encapsulate complexity to the point that the tester is discouraged from interacting with the software with their mind fully engaged, we are dehumanizing software testing. If a tester is merely following a script for most of their testing work, they are going to have trouble being completely mentally engaged. If a testing tool is trying to do the work of test design and execution, it will be difficult for the tester to be completely engaged with testing. I believe we miss out on their skills of observation, inquiry and discovery when we script what the tester needs to do and they do most of their work by rote.

The important thing to me is to understand what the trade-offs are before we adopt a testing practice. When we forget the humanity of the system, we will probably get results other than what we intended. We need to be aware of how people impact the use of tools, and how people work on projects, because after all, it’s still the people who are key to project success.