
John Kordyback on Testing

Testing is an exercise of discovering interfaces.

If this interests you, think about the above statement, and share your conclusions. I’ll post my own thoughts shortly.

(edit – I’ve added a couple of my own thoughts)

From a development perspective, driving out interfaces makes the code more testable and improves the overall design when following test-driven development. Developers drive out the design with unit tests, using a tool like JUnit that helps them create interfaces. But JUnit itself uses an interface to exercise areas of the code. Now things get interesting…
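To make that concrete, here is a minimal sketch of the idea in Ruby’s Test::Unit, a Ruby analogue of the JUnit example (the TaxCalculator class and its rate_for method are invented for illustration). Writing the test first forces the interface into existence before any implementation does:

    # The test drives out an interface: a TaxCalculator that answers
    # rate_for(region). The class doesn't need to exist yet; the test
    # defines the shape it must take.
    require 'test/unit'

    class TaxCalculatorTest < Test::Unit::TestCase
      def test_rate_for_known_region
        calc = TaxCalculator.new
        assert_equal(0.07, calc.rate_for('ON'))
      end
    end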

From a tester’s perspective, this concept became clear to me when I explored areas of the application behind the GUI layer to write automated tests against. We find an area we’d like to test, the developers create an interface for us to test against, and we then use some sort of tool to write tests that exercise that area of the code. If we create fixtures with FIT, we are essentially creating another testable interface. When we use WTR Ruby scripts to drive the browser via the DOM, we are using the DOM as an interface to test the application. Screen-scraping automated functional testing tools create their own testable interface.
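As a sketch of what driving the browser through the DOM looks like, here is an example written against the Watir API that grew out of the WTR IE Controller (the exact WTR method names differed slightly, and the page, field and button names here are hypothetical):

    require 'watir'

    ie = Watir::IE.new
    ie.goto('http://localhost/login')               # navigate via the DOM
    ie.text_field(:name, 'username').set('tester')  # fill in form fields
    ie.text_field(:name, 'password').set('secret')
    ie.button(:name, 'submit').click
    puts 'Login succeeded' if ie.contains_text('Welcome')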

From this perspective, the difference between JUnit or HTTPUnit tests, FIT or WTR tests, or tests written with some other tool is the type of interface each one uses.

An interesting side effect of seeking testable interfaces in a product, from the code level up, is that the more we explore those interfaces, the more we learn about the product. The more we learn, the more information we gather about the application’s strengths and weaknesses. That sounds a lot like testing software, doesn’t it?

We can take a cue from our test-driven developer colleagues, who use interfaces to design better code, and seek to drive out interfaces that help us test the product more thoroughly. This is another view of testing, one that takes us away from a strict black-box perspective. Think of peeling back the GUI layer (with web applications it can be simple to peel back the browser and start looking at the HTTP layer and down into the code), and look for testable interfaces, or areas that could use them. You might be surprised at what you find, and a new world of testing may be waiting to be discovered.
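As a hedged sketch of what peeling back the browser can look like, here is a check against the HTTP layer using nothing but Ruby’s standard library (the URL and the expected page content are invented for illustration):

    require 'net/http'
    require 'uri'

    # Hit the application directly over HTTP, bypassing the browser.
    response = Net::HTTP.get_response(URI.parse('http://localhost/login'))
    puts "Status: #{response.code}"
    puts 'Found the login form' if response.body.include?('<form')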

Test Automation as a Testing Tool

When we think of GUI-level test automation, we usually think of taking some sort of test case, designing a script in a language like Perl or Ruby or in a vendor testing tool, and developing the test case programmatically from beginning to end. The prevailing view is often that there is some sort of manual test case we need to repeat, so we automate it in its entirety with a tool. This is a shallow view of test automation, as James Bach, Bret Pettichord, Cem Kaner, Brian Marick and others have pointed out. I prefer the term “Computer Assisted Testing” (I believe coined by Cem Kaner) over “test automation” for this reason.

While developing automated tests, I noticed an interesting side effect. When debugging script code, I would run a portion of a test case many times: a sequence of events, not an entire test case. Watching that series of steps play back on my screen, I started to notice behavior changes from build to build. When I investigated, I found bugs that I might not have discovered doing purely manual testing or running unattended automated tests. I started keeping snippets of scripts around to aid in exploratory testing activities, and found them very useful as another testing tool.

There are real benefits to automating a sequence of steps and blending this type of testing with manual testing. For example, if a test case requires a large number of steps to reach the area we want to focus on, using a script to automate that path gets us there faster, frees us from distractions and helps us focus on the feature or unit under test. Humans get tired repeating tasks over and over, and are prone to error; where precision is needed, computers can be programmed to help us. We can also test the software in a different way using a tool: we can easily control and vary test inputs, and measure what changed from previous test runs when bugs are discovered.
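Here is a rough sketch of that blend in Ruby, again in the Watir style: the script automates the long setup path, then pauses and hands the session over to a human for exploration (the application URL, links and field names are hypothetical):

    require 'watir'

    ie = Watir::IE.new
    ie.goto('http://localhost/app')
    ie.text_field(:name, 'username').set('tester')
    ie.button(:name, 'login').click

    # Drive through the tedious navigation to the feature under test.
    ie.link(:text, 'Reports').click
    ie.link(:text, 'Quarterly Summary').click

    puts 'Setup complete. Explore away; press Enter to close the browser.'
    gets
    ie.close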

There are certain tasks a human can do much better than a computer when it comes to testing. As James Bach says, testing is an interactive, cognitive process. Human reasoning and inference simply cannot be programmed into a test script, so the “test automation” notion will not replace a tester. Blending the investigative skills and natural curiosity of a tester with a tool that helps them discover bugs is a great way to focus some test automation efforts.

If you think of your automation efforts as “Computer Assisted Testing”, many more testing possibilities will come to mind. You just might harness technology in a more effective way, and it will show in your testing.

I currently use Ruby as my Computer Assisted Testing tool, with the WTR IE Controller as my web application testing tool of choice. (cf. A Testing Win Using Ruby)

A Testing Win Using Ruby

Brian Marick has written lately about exploratory testing using Ruby. I decided to try this out on a web application using Ruby and the WTR (Web Testing with Ruby) IE Controller. I’m not yet at the level with Ruby where I can test an application from the interactive Ruby interpreter the way Brian does, but I thought I could write a short script and run it in an exploratory manner. I decided to automate steps that would be tedious, time-consuming and, as a result, very much prone to error if run by a human. After a few minutes, I felt I had a script that was testing the application in a way I hadn’t tested manually. I ran it and watched it play back on my monitor.
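As a sketch of the kind of script I mean (Watir-style calls; the URL, field names and inputs here are hypothetical, not the actual application I was testing), it repeats a tedious sequence with varied inputs while a human watches for odd behavior:

    require 'watir'

    ie = Watir::IE.new
    ie.goto('http://localhost/app/search')

    # Repeat the same steps with inputs a human would tire of typing.
    ['', 'a' * 256, '<script>', "O'Brien"].each do |input|
      ie.text_field(:name, 'query').set(input)
      ie.button(:name, 'search').click
      puts "Input #{input.inspect}: error shown" if ie.contains_text('Error')
      ie.back
    end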

When I ran it for the first time, the application responded in a strange manner. I had seen this behavior a couple of days earlier when testing manually, but couldn’t repeat it. When I re-ran the script, the behavior occurred again. I edited the script to home in on the problem area and was able to repeat the problem each time. I edited it again to narrow down the cause, and talked to the developer about the behavior. He suggested checking a similar action, so I edited the script to try out his idea. In the end, I probably spent about half an hour on script development.

In a short period of time, I was able to use Ruby to track down a defect I had been trying to replicate manually. I’m confident that replicating the problem manually would have taken much longer, and I probably would have narrowed it down much later in the release, once I was more familiar with the product. A test script that capitalized on the computer’s strengths over my human abilities, run in a “what would happen if” scenario, really paid off.