TDD Pairing – What I Missed

Since this is a training exercise, and I am working with a senior developer who is a good teacher, we spent some time reviewing our first pairing session. To teach me while we paired, the developer had created a situation to see whether I could spot a problem. I, of course, missed it. During our TDD session, my primary focus was on thinking of testing ideas. The developer, however, was simultaneously thinking about making the code testable, improving the software design, and continuously improving the testability of the code. These three activities, he says, are the hallmarks of good design.

Leading me down the garden path in the hopes of teaching me something, he deliberately developed a bad code smell and tried to guide me into seeing it. I was so focused on generating testing ideas that I missed it. He deliberately made the unit tests awkward and difficult to implement. I trusted his design and dismissed the awkwardness as a technical issue I didn’t understand. I didn’t realize that tests being onerous and difficult to set up and code is a bad test smell.

The lesson that I learned is that if we can add tests simply, it’s a sign of good code design. Since the tests were awkward and I was completely dependent on the developer to add them, I should have been concerned. As a tester, part of my job when pairing in a test-driven development situation is to watch for bad test smells. Those bad smells in the tests are symptoms that something is wrong with the code. The developer pointed out that when it’s hard to test, it’s time to improve the code. When testability is improved, a byproduct is a better design.
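To make the smell concrete, here is a minimal sketch in Java. The class and method names are invented for illustration, not taken from our actual session; the idea is simply that logic tangled up with dependencies forces elaborate test setup, while extracting it into a pure method lets the test collapse to a single line:

```java
import java.util.Locale;

// Invented example: before refactoring, this formatting logic lived inside
// a class that required a live database connection, so every unit test
// needed elaborate fixture setup -- the "awkward test" smell.
// After extracting the logic into a pure method, the test needs no setup.
public class InvoiceFormatter {

    static String formatTotal(double total) {
        return String.format(Locale.US, "Total: $%.2f", total);
    }

    public static void main(String[] args) {
        // The whole test: no fixtures, no fakes, no connections.
        if (!formatTotal(12.5).equals("Total: $12.50")) {
            throw new AssertionError("formatting broke");
        }
        System.out.println("test passes");
    }
}
```

When the test is this easy to write, the design is doing something right; when it isn’t, that friction is the feedback.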

At the end of the day, I was thinking about more test ideas and felt we needed to add much more to the existing design. The developer, however, realized we were in trouble and needed to refactor the existing code to make it more testable. Lesson learned – I need to watch that we can add tests easily; that is a sign of good design. It doesn’t take a lot of programming skill to notice that some unit tests are awkward while others are simple and elegant. I’m also fairly confident that testers who don’t program could learn to see the difference quite quickly after spending time with a developer who can demonstrate good and poor unit tests.

Pairing in Test Driven Development: Day One Report

An area of Agile Development where testers are usually absent is Test-Driven Development and other developer testing activities. Since I like to collaborate with developers as much as I can, I asked them what other areas I could support them in. (Brian Marick calls these types of activities Technology-facing programmer support.) They told me that one area where they would like to see testers work with them was unit test development, especially when using Test-Driven Development. While I have pair tested with developers to help generate unit testing ideas, I haven’t actually worked with them during development in a pair programming kind of role. They felt that pairing with a tester would help them generate test ideas, and that it would be a good fit: the developer is thinking about programming most of the time, while the tester is thinking about tests most of the time. I encourage developers to use test-infected development techniques, so I decided to stop theorizing and actually give it a try. I can’t very well answer the question of how a tester can add value to Test-Driven Development unless I’ve tried it.

Yesterday I paired with a senior developer who was developing an application in Java using the Intellij IDEA IDE which has JUnit integration. He kindly agreed to take me through the paces, but I have to admit I was a bit nervous. While I have basic Java programming skills, I am not a great coder and I usually work with scripting languages when I develop automated test cases. I wasn’t sure if I would be able to add any value to a programming activity or not.

After we walked through the business problem the day’s coding effort was to address, and some of the code that was in place, we began looking at the test framework. In this case, the developer had already written the first test, and an implementation that worked well enough to get that test to pass. We picked up at this point, looked at the business rules, and designed a new test. When we ran JUnit, the test failed, which told us the implementation needed some work. The developer added some more logic to get that test to pass, and then we added another test case. We continued adding test cases that would initially fail, and the developer would work on the implementation to get them to pass. At a certain point, he felt that we had a good basic set of test cases that covered enough of the business logic.
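The rhythm we followed can be sketched like this. It is a hedged, self-contained illustration: the domain and names are invented, and the assertion is hand-rolled rather than depending on the JUnit library, but the shape of each step (write a failing test, then add just enough implementation to pass it) matches what we did:

```java
// Invented illustration of the red-green rhythm: each test below was
// written first, failed, and then just enough implementation was added
// to make it pass.
public class DiscountTest {

    // The implementation, grown test-by-test.
    static double discountRate(int yearsAsCustomer) {
        if (yearsAsCustomer >= 5) {
            return 0.10;   // added to make the second test pass
        }
        return 0.0;        // just enough to make the first test pass
    }

    static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError(expected + " != " + actual);
        }
    }

    public static void main(String[] args) {
        assertEquals(0.0, discountRate(1));   // test 1: new customers, no discount
        assertEquals(0.10, discountRate(7));  // test 2: long-standing customers
        System.out.println("all tests pass");
    }
}
```

Each new test case is a small bet that the current implementation is incomplete; when the bet pays off, the code grows a little.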

At the end of our session, we had an implementation that satisfied some basic test cases. I suggested test ideas, but was completely dependent on the developer to write the JUnit tests. I was absorbed in thoughts about more test ideas and what would seem feasible. I suggested a lot of test ideas that we could tackle the next day, and we discussed what tests we could justify doing. My initial response was to test as much as possible, while the developer was thinking about the big picture and how much time we could spend on testing. I realized this would be a trade-off; testing everything as robustly as possible would cause a lot of duplicated effort.

I didn’t feel like I added a lot of value to this session. Granted, I was getting trained in how Test-Driven Development works. I caught some minor syntax errors as we paired; the developer caught a few in his own work as well. We both missed some coding errors, but the compiler and the unit tests caught those.

My thoughts at the end of the day were that I had learned a lot, but added little value. Clearly, I need to learn more about the process and practice it more in the hope of adding value to a developer. The basic test ideas that I had generated were ones the developer had already thought of. I then started to think of more complex test ideas, and felt that we had very little coverage. The developer agreed that the coverage was light and that we needed to work on more tests the next day. I left at the end of the day thinking about further tests we could write – I could add value there.

Describing Software Testing Using Inference Theories

I am re-reading Peter Lipton’s Inference To The Best Explanation, which I first encountered in an Inductive Logic class I took at university. Lipton explores this model to help shed some light on how humans observe phenomena, explain what they have observed, and come to conclusions (make inferences) about it. Lipton says on p. 1:

We are forever inferring and explaining, forming new beliefs about the way things are and explaining why things are as we have found them to be. These two activities are central to our cognitive lives, and we usually perform them remarkably well. But it is one thing to be good at doing something, quite another to understand how it is done or why it is done so well. It’s easy to ride a bicycle, but very hard to describe how to do it. In the cases of inference and explanation, the contrast between what we can do and what we can describe is stark, for we are remarkably bad at principled description. We seem to have been designed to perform the activities, but not to analyze or defend them.

I had studied Deductive Logic and worked very hard trying to master various techniques in previous courses. I was taken aback in the first lecture on Inductive Logic when the professor told us that humans are terrible at Deductive Logic, and instead use Inductive Logic much more when making decisions. Deductive Logic is structured, has a nice set of rules, is measurable and can be readily explained. Inductive Logic is difficult to put parameters around, and the inductive activities are usually explained in terms of themselves. The result of explaining inductive reasoning is often a circular argument. For this reason, David Hume argued against induction in the 18th century, and attempts through the years to counter Hume rarely get much further than he did.

This all sounds familiar from a software testing perspective. Describing software testing projects in terms of a formalized theory is much easier than describing what people actually do on testing projects most of the time. It’s nice to put parameters around testing projects and use a set of formal processes to justify the conclusions, but are the formalized policies an accurate portrayal of what actually goes on? My belief is that software testing relies much more on inference than on deduction, and that attempts to formalize testing into a tidy set of instructions or policies do not reflect what good testing actually is.

What constitutes good software testing is very difficult to describe. I’m going to go out on a limb and use some ideas from Inductive Logic and see how they match software testing activities from my own experiences. Feel free to challenge my conclusions regarding inference and testing as I post them here.

Superbugs

My wife and I have friends and family in the health-care profession who tell us about “superbugs” – bacteria which are resistant to antibiotics. In spite of all the precautions, new technology and the enormous efforts of health care professionals, bugs still manage to mutate and respond to the environment they are in and still pose a threat to human health. In software development projects, I have encountered bugs that at least on the surface appear to exhibit this “superbug” behavior.

Development environments that utilize test-driven development, automated unit testing tools and other test-infected development techniques, in my experience, tend to generate very robust applications. When I see how much the developers are testing, and how good the tests are, I wonder if I’ll be able to find any bugs in their code at all. I do find bugs (sometimes to my surprise), but it can be much harder than in traditional development environments. Gone are the easy bugs an experienced tester can find in minutes in a newly developed application component – boundary condition problems, integration bugs and others that may not be the first things that come to mind for a developer testing their own code. In a test-infected development environment, most of these have already been thought of and tested for by the developers. As a tester, I have to get creative and inventive to find bugs in code that has already been thoroughly tested by the developers.

In some cases, I have collaborated with the developer to help in their unit test development efforts. <shameless plug> I talk about this more in next month’s edition of Better Software. </shameless plug> The resulting code is very hard for me to find bugs in. Sometimes, to find any bugs at all, I have to collaborate with the developer to generate new testing ideas based on their knowledge of interactions in the code itself. The bugs found in these efforts are often tricky, time-consuming and difficult to replicate. Nailing down their cause often requires testers and developers to pair test. These bugs are not only hard to find, they are often difficult to fix, and seem resistant to the development practices that are so successful at catching many bugs during the coding process. I’ve started calling these bugs “superbugs”.

It may be that certain bugs are resistant to developer testing techniques, but I’m not sure whether this is the case. I’ve thought until recently that these bugs also exist in traditionally developed code, but that since testers there spend so much time dealing with the bugs that test-infected development techniques tend to catch, they don’t have the time in the life of the project to find these types of bugs as frequently. Similarly, since they are difficult to replicate, they may not get reported as often by actual users, or several users may report the same problem in the form of several “unrepeatable” bugs.

Another reason *I* find them difficult to find might be my own testing habits and rules of thumb, particularly if the developer and I are working together quite closely. When we test together, I teach the developer some of my techniques, and they teach me theirs. By the time I finally test the code, both of our usual techniques have already been applied quite thoroughly during development. I’m then left with usability problems, some integration bugs that the unit testing doesn’t catch, and these so-called “superbugs”. Maybe the superbugs aren’t superbugs at all. Another tester might think of them as regular bugs, and might find them much more easily than I can because of their own toolkit of testing techniques and rules of thumb.

This behavior intrigues me nonetheless. Are we now able to find bugs that we didn’t have the time to find before, or are we now having to work harder as testers and push the bounds of our knowledge to find bugs in thoroughly developer-tested code? Or is it possible that our test-infected development efforts have resulted in a new strain of bugs?

A Testing Win Using Ruby

Brian Marick has written lately about Exploratory Testing using Ruby. I decided to try this out on a web application using Ruby and the WTR (Web Testing with Ruby) IE Controller. I’m not at the level with Ruby yet where I can test an application from the Ruby command interpreter the way Brian does, but I thought I could write a short script and run it in an exploratory manner. I decided to automate steps that would be tedious, time-consuming and, as a result, very much prone to error if run by a human. After a few minutes, I felt I had a script that was testing the application in a way I hadn’t tested manually. I ran it and watched it play back on my monitor.

When I ran it for the first time, the application responded in a strange manner. I had seen this behavior a couple of days previously when testing manually, but couldn’t repeat it. When I re-ran the script, the behavior occurred again. I edited the script to home in on the problem area and was able to repeat the problem each time. I edited it again to narrow down the cause, and talked to the developer about the behavior. He suggested checking a similar action, so I edited the script to try out his idea. In the end, I probably spent about half an hour on script development.

In a short period of time, I was able to track down a defect using Ruby that I had been trying to replicate manually. I’m confident that I would have spent much longer replicating the problem manually, and probably wouldn’t have narrowed it down until much later in the release, when I was more familiar with the product. Running a test script that capitalized on the computer’s strengths over my human abilities in a “what would happen if” scenario really paid off.

Presenting Testing Activities to Business Stakeholders

Brian Marick’s series on Agile Testing Directions begins with a test matrix that describes testing activities as “Business Facing”, “Technology Facing”, “Support Programming” and “Critique Product”. This resonated with me, but it wasn’t until he pointed out that in my pair work with developers I did both Business Facing and Technology Facing activities that this seemed to click. I think this matrix he has developed provides testers with a common language to identify and communicate these activities.

I recently did presentations to business stakeholders on testing activities in Agile projects. I’ve generally found it difficult to explain the testing activities I engage in to fellow testers, let alone business stakeholders. In one meeting, I thought of the test matrix and brought up the “Business Facing” and “Technology Facing” areas of testing while I was explaining how I test on Agile projects. People seemed to understand this, so I started working on it more.

I started thinking of the matrix rotated on its side with Technology Facing on the left and Business Facing on the right. Instead of “support programming”, I went with “support” to capture both areas. The “business support” would involve activities like setting up meetings with developers and business stakeholders after each development iteration to ensure that the working code is what the business expects, and to get people communicating. I also thought that business support would involve helping the business people with acceptance tests and things like that.

I initially thought of naming each quadrant of the matrix, but when explaining it to my wife Elizabeth, she said: “Why don’t you just put that in a tree diagram?” I did just that, and presented Agile Testing activities like this:

I felt that “technical testing” was a simple way to describe “technology-facing product critiques”, and “business testing” would describe “business-facing product critiques”. Keeping it simple seems to work well when communicating testing concepts to non-technical people.

I described some of the testing techniques in each area. For example, a technical testing activity I use involves collaboration with the developers to write tests that can be run in the absence of a user interface. This can involve adding tests to drive a layer of the application at the controller level. Once the developers make this area testable, we co-design and develop a test case. I can then own the test case and run it with different kinds of test data.
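As a sketch of what owning such a test can look like, here is a hedged Java example. The controller and its validation rule are invented for illustration: the developer exposes the controller layer so it can be called directly, and the tester then drives it with rows of test data, no user interface involved:

```java
// Invented sketch: a controller-level check exercised with different
// test data, bypassing the user interface entirely.
public class OrderControllerTest {

    // Stand-in for the testable controller layer the developer exposes.
    static class OrderController {
        String placeOrder(int quantity) {
            if (quantity <= 0) {
                return "rejected";
            }
            return "accepted";
        }
    }

    public static void main(String[] args) {
        OrderController controller = new OrderController();
        // The tester owns this data table and can grow it without
        // touching the controller code or the UI.
        int[] quantities = {-1, 0, 1, 100};
        String[] expected = {"rejected", "rejected", "accepted", "accepted"};
        for (int i = 0; i < quantities.length; i++) {
            String actual = controller.placeOrder(quantities[i]);
            if (!actual.equals(expected[i])) {
                throw new AssertionError("quantity " + quantities[i]
                        + ": expected " + expected[i] + ", got " + actual);
            }
        }
        System.out.println("all data rows pass");
    }
}
```

The division of labour is the point: the developer makes the layer testable once, and the tester keeps feeding it new data.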

Under the programmer support activity, we can pair together to generate testing ideas. In a test-driven development environment, we can pair program to come up with tests that drive the code, or the tester can use a scripting language to write the tests for the developers.

Business people and technical people seemed to understand this tree diagram and the explanations I gave. I heard later that business stakeholders were starting to use this language in other contexts when they were talking about testing.

Testers on Agile Projects

When I began reading articles and books on Agile Development and attended lectures by well-known experts in the subject, I was impressed. This style of development resonated with me, combining what I had learned from the Open Source world with aspects of successful projects I had been on. As a believer in W. Edwards Deming’s 14 Points, I felt that Agile Methods seemed to be addressing many of the same issues.

I welcomed Agile Development, and have championed it now that I’ve experienced it. I didn’t feel a threat to my job as a tester, but I knew things were going to change. The only thing that bothered me was a new attitude toward testers that seemed to be emerging. It sounded like: “we’re doing testing now, thank you very much, so we don’t know where you will fit in Agile projects”. This attitude is changing, but the role of testers on Agile teams is still emerging.

Since I am a tester working on Agile projects, I want to share my experiences. When I first thought about Agile Development not needing dedicated testers, my initial reaction was to think of a writer/editor analogy. As my experience with Agile projects grows, I am less confident of the need for a dedicated tester. I still think there is a need, but I have to be willing to admit that there may be no role for dedicated testers on Agile projects, even if I want there to be. The role may be a diminished one, or, as Brian Marick points out in his blog series on Agile Testing Directions, testers may be specialists called in for certain tasks like security or performance testing. However, the agile testing role might evolve into something completely different from what we know of as testing today. My goal is to see if and where I fit in on Agile projects. I’m relying on my development colleagues to provide me with honest feedback, which I will try to share here.

One aspect of the tester role I’m exploring is pair testing with developers. I can support the programmers and help them generate test ideas, especially if they are using test-driven development. We can also sit together at one machine to test the product, simultaneously generating testing ideas. A senior developer noted that pair programming with a tester gives a developer someone to help them generate tests. After all, testers see the world in terms of tests, while developers see the world in terms of patterns or objects or algorithms. One creates code while the other creates testing ideas. It sounds like a good match. It will be interesting to see how this type of testing pans out.