Dehumanizing Software Testing

I was talking with Daniel Gackle, a software developer and philosophical thinker, about developing software and systems. Daniel mentioned in passing that we often forget about the humanity in the system – the motivation of the human who is using the tool, or doing the work. This comment resonated with me, particularly with regard to some software testing practices.

Sometimes, when we design test plans and test cases, or choose testing tools, we try to minimize the humanity of the tester. The feeling might be that humans are prone to error, and because we want solid, repeatable tests, we design the work to leave as little room for human error as possible. One such practice is the use of procedural test scripts. The motivation is often that if the scripts are detailed enough, anyone can repeat them, with less chance of variability or error. The people who write these tests make them as detailed as possible so that the test is repeated the same way each time; variation is undesirable because we want this exact test case run identically on every execution.
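
To make the idea concrete, here is a minimal sketch of what such a script looks like when reduced to its essence: fixed steps, fixed inputs, fixed expected results. The system under test, shipping_cost(), is a stand-in invented for this illustration; real procedural scripts read the same way, just aimed at an application instead of a function.

```python
def shipping_cost(weight_kg: float) -> float:
    """Stand-in for the system under test (invented for illustration)."""
    return 5.0 if weight_kg <= 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)

# The "script": every step, input, and expected result is fixed in
# advance. Anyone can run it, and no step invites the tester's judgment.
SCRIPTED_STEPS = [
    ("enter weight 0.5 kg", 0.5, 5.0),
    ("enter weight 1.0 kg", 1.0, 5.0),
    ("enter weight 2.0 kg", 2.0, 7.0),
]

def run_script() -> None:
    # Execute the steps by rote, in order, the same way every time.
    for description, weight, expected in SCRIPTED_STEPS:
        actual = shipping_cost(weight)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {description} -> expected {expected}, got {actual}")

if __name__ == "__main__":
    run_script()
```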

This type of thinking spills over into test automation as well. There is a push for tools that anyone can use: the tool takes care of the details, and all a user needs to do is learn the basics, then point and click while the tool does the rest. We then don’t need specialists; the tool will handle test case design, development, and execution.
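
For a sense of what these tools produce, here is a sketch of the kind of script a capture/replay tool might emit, written here against Selenium WebDriver’s Python bindings; the URL, element IDs, and credentials are invented for illustration.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Replay the recorded keystrokes and clicks, one after another.
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("testuser01")
    driver.find_element(By.ID, "password").send_keys("Passw0rd!")
    driver.find_element(By.ID, "submit").click()

    # The single recorded checkpoint: the page title after login.
    assert driver.title == "Dashboard"
finally:
    driver.quit()
```

The recording replays exactly what was captured and checks one recorded checkpoint; the locator strategy, the timing, and everything behind the interface stay sealed inside the capsule.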

Without going into the drawbacks that both of these approaches entail, I want to focus on what I would call the “dehumanizing of software testing”. When we place less value on the human doing the work, and try to minimize their direct interaction with the software under test, what do we gain, and what do we lose?

In “The Dumbing Down of Programming”, Ellen Ullman describes some of what is lost when we use tools that do more of the work for us. Ullman writes:

the desire to encapsulate complexity behind a simplified set of visual representations, the desire to make me resist opening that capsule — is now in the tools I use to write programs for the system.

That desire is present today (as it was in 1998, when Ullman’s article was written) in procedural test cases (or scripts) and in the popular “capture/replay” or event-recording testing tools. Both seek to “encapsulate complexity” behind an interface, and both practices, I would argue, lead to dehumanizing software testing.

Ullman notes that the original knowledge itself is something we give up when we encapsulate complexity:

Over time, the only representation of the original knowledge becomes the code itself, which by now is something we can run but not exactly understand.

I believe direct human interaction is enormously valuable to software testing, so when we set it aside, we need to be aware of what we are giving up. There is a trade-off between interactive, cognitively engaged testing and testing through interfaces where the complexity is hidden from us. There are cases where the latter is useful; we don’t always need to understand the underlying complexity of an application when we are testing against an interface. However, as Ullman says, we should be aware of what is going on:

Yet, when we allow complexity to be hidden and handled for us, we should at least notice what we’re giving up. We risk becoming users of components, handlers of black boxes that don’t open or don’t seem worth opening.

When is hiding complexity a valuable practice, and when does it dehumanize software testing? I would argue that as soon as we encapsulate complexity to the point where the tester is discouraged from interacting with the software with their mind fully engaged, we are dehumanizing software testing. A tester who merely follows a script for most of their work will have trouble staying mentally engaged, and a tool that tries to do the work of test design and execution makes full engagement just as difficult. When we script what the tester must do and they work mostly by rote, we lose their skills of observation, inquiry, and discovery.

What matters most to me is understanding the trade-offs before we adopt a testing practice. When we forget the humanity in the system, we will probably get results other than those we intended. We need to be aware of how people affect the use of tools and how people work on projects, because, after all, it is still the people who are key to project success.