Category Archives: TDD

More on Test Driven Development:

William Wake writes:

At least half the time in TDD, I feel like I’m doing the design part. I talk of generative tests, that yield design, and elaborative tests, that test it “the rest of the way”; the generative tests are pushing the design forward, and it’s somewhat of a distraction to stop and do elaborative tests too soon.

(posted with permission)

This was in response to a note I wrote describing some of the challenges programmers and testers face when pairing together in TDD activities. I really like the terms: “generative tests,” which yield design, and “elaborative tests,” which provide further verification and critique.

As a tester pairing in TDD efforts, I found some of our test development to be a natural fit, while some of it seemed more difficult. When William emailed me with the above comment, I had an “aha!” moment. The natural fit I’d encountered was in test idea generation when we worked on elaborative test development. The generative test development was new to me. When I was the one driving the development of new functionality, I found it hard to write tests for classes and methods that hadn’t been developed yet. I clearly need to do more tester-developer pairing in TDD activities, and to work more on TDD when I’m coding. As my pairing partner says, the only key is practice.

Another area of difficulty is the disruptive nature of testing. Some developers might appreciate the disruption as they create code, but others have told me that they wouldn’t want a tester there during what William calls the generative test development. It seems that developers need a protected time of creativity and development to get something together that can be tested. Having a tester interrupting them in this process may not be helpful. Furthermore, when developers spend time making interfaces more testable, the dedicated tester may not be the best person for them to pair with. The tester may not add a lot of value until the interface has been refactored. At that point, they could step in and see if it is testable or not.

Testers who have strong development and testing skills (and who understand and have insight into TDD, bad coding smells, interface discovery, etc.) should fit well into both categories – generative and elaborative. Most testers, I would wager, will fit better into the elaborative test development aspect of TDD. We are used to Business-Facing Product Critiques and test idea generation activities. Generative test development will probably be new to many dedicated testers. If you want to pair with a developer doing TDD, this might be something to keep in mind.

It may be a good practice to have developer-tester pairing during the elaborative test development activities of TDD, and not during the generative test development. More testers and developers pairing and reporting their experiences may help us learn if we can do this well.

I encourage testers and developers to pair together; feel free to let me know how your TDD pairing goes. What did you struggle with? What worked well?

John Kordyback on Testing

Testing is an exercise of discovering interfaces.

If this interests you, think about the above statement, and share your conclusions. I’ll post my own thoughts shortly.

(edit – I’ve added a couple of my own thoughts)

From a development perspective, driving out interfaces and making the code more testable will improve the overall design when following test-driven development. Developers drive out the design with unit tests, using a tool like JUnit, which helps them create interfaces. But JUnit itself uses an interface to exercise areas of the code. Now things get interesting…

From a tester’s perspective, this concept became clear to me while exploring areas of the application behind the GUI layer to write automated tests against. For example, if a developer makes a particular part of the code testable, we can use some sort of tool to write tests and exercise areas of the code. In this case, we find an area we’d like to test, and the developers create an interface for us to test against. If we create fixtures with FIT, we are essentially creating another testable interface. When we use WTR Ruby scripts to drive the browser via the DOM, we are using the DOM interface to test the application. Screen-scraping automated functional testing tools create their own testable interface.
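As a rough illustration of what “the developers create an interface for us to test against” might look like in Java, here is a minimal JUnit sketch. The DiscountCalculator interface, its implementation, and the business rule are all hypothetical, not taken from the project described above; the point is simply that the test talks to an interface rather than to the GUI.

```java
import junit.framework.TestCase;

// Hypothetical business interface the developers might expose for testing,
// so tests can exercise the rule without going through the GUI.
interface DiscountCalculator {
    double discountFor(double orderTotal);
}

// A trivial implementation, just to make the sketch self-contained.
class SimpleDiscountCalculator implements DiscountCalculator {
    public double discountFor(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.10 : 0.0;
    }
}

// The test talks only to the interface: JUnit here plays the same role
// that FIT fixtures or WTR scripts play at other layers of the application.
public class DiscountCalculatorTest extends TestCase {
    public void testLargeOrderGetsTenPercentDiscount() {
        DiscountCalculator calc = new SimpleDiscountCalculator();
        assertEquals(10.0, calc.discountFor(100.0), 0.001);
    }

    public void testSmallOrderGetsNoDiscount() {
        DiscountCalculator calc = new SimpleDiscountCalculator();
        assertEquals(0.0, calc.discountFor(99.99), 0.001);
    }
}
```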

From this perspective, the difference between JUnit or HTTPUnit tests, FIT or WTR tests, or tests written with some other tool is the type of interface they use.

An interesting side-effect of seeking testable interfaces in a product from the code level up: the more we learn about the interfaces, the more we explore the product and the more we learn about it. The more we learn, the more information we gather about the application’s strengths and weaknesses. It kind of sounds like testing software, doesn’t it?

We can take a cue from our test-driven developer colleagues and how they use interfaces to design better code, and seek to drive out interfaces to help us test the product more thoroughly. This is another view of testing which takes us away from a strict black-box perspective. Think of peeling back the GUI layer (with web applications it can be simple to peel back the browser and start looking at the HTTP layer and down into the code), and look for testable interfaces, or areas that could use them. You might be surprised at what you find, and a new world of testing may be waiting to be discovered.
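To make “peeling back the browser” concrete, here is a minimal sketch of a test that skips the GUI and talks to the HTTP layer directly. The URL, the application, and the expected response are hypothetical assumptions for the example, not details of any project mentioned above.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import junit.framework.TestCase;

// A minimal sketch of testing beneath the GUI: instead of driving the browser,
// hit the HTTP layer directly. The URL and expected behaviour are hypothetical.
public class LoginPageHttpTest extends TestCase {
    public void testLoginPageRespondsWithOk() throws Exception {
        URL url = new URL("http://localhost:8080/myapp/login");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.connect();
        try {
            // The HTTP response is part of the application's testable interface.
            assertEquals(200, connection.getResponseCode());
        } finally {
            connection.disconnect();
        }
    }
}
```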

TDD Pairing – What I Missed

Since this is a training exercise, and I am working with a senior developer who is a good teacher, we spent some time reviewing our first pairing session. To teach me while we paired, the developer had created a situation to see whether I could spot a problem. I, of course, missed it. During our TDD session, my primary focus was on thinking of testing ideas. The developer, however, was simultaneously thinking about making the code testable, improving the software design, and continuously improving the testability of the code. These three activities, he says, are the hallmarks of a good design.

Leading me down the garden path in the hope of teaching me something, he deliberately introduced a bad code smell and tried to guide me into seeing it. I was so focussed on generating testing ideas that I missed it. He had made the unit tests awkward and difficult to implement on purpose; I trusted his design and wrote the awkwardness off as a technical issue I didn’t understand. I didn’t realize that tests that are onerous and difficult to set up and code are a bad test smell.

The lesson I learned is that if we can add tests simply, it’s a sign of good code design. Since the tests were awkward and I was completely dependent on the developer to add them, I needed to be concerned. As a tester, part of my job when pairing in a test-driven development situation is to watch for bad testing smells. Those bad smells in the tests are symptoms that something is wrong with the code. The developer pointed out that when it’s hard to test, it’s time to improve the code. When testability is improved, a byproduct is a better design.
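Here is a hedged sketch of what that smell can look like in JUnit. The classes (OrderProcessor, MailServer, AuditLog, RushOrderRule) and the business rule are invented for illustration only: when the logic is tangled up with collaborators, every test needs painful setup; when the rule stands on its own, the test is nearly setup-free.

```java
import junit.framework.TestCase;

// Hypothetical example of the "hard to test" smell, and what relief looks like.

// Awkward: the rule is buried in a class that drags in collaborators,
// so every test has to build and wire all of them first.
class OrderProcessor {
    private final MailServer mailServer;
    private final AuditLog auditLog;

    OrderProcessor(MailServer mailServer, AuditLog auditLog) {
        this.mailServer = mailServer;
        this.auditLog = auditLog;
    }

    boolean isRushOrder(int itemCount, boolean priorityCustomer) {
        auditLog.record("checked rush status");
        return priorityCustomer || itemCount > 10;
    }
}

class MailServer { /* imagine configuration, connections, etc. */ }

class AuditLog {
    void record(String message) { /* imagine file or database access */ }
}

// Simpler design: the rule lives on its own and the test needs no wiring at all.
class RushOrderRule {
    boolean isRushOrder(int itemCount, boolean priorityCustomer) {
        return priorityCustomer || itemCount > 10;
    }
}

public class RushOrderRuleTest extends TestCase {
    // Smell: to test the rule inside OrderProcessor we must construct
    // collaborators that have nothing to do with the rule itself.
    public void testRushStatusThroughOrderProcessor() {
        OrderProcessor processor = new OrderProcessor(new MailServer(), new AuditLog());
        assertTrue(processor.isRushOrder(11, false));
    }

    // After refactoring, the same check takes one line of setup.
    public void testRushStatusDirectly() {
        assertTrue(new RushOrderRule().isRushOrder(11, false));
    }
}
```

A tester doesn't need deep programming skill to notice the difference between those two test shapes; the amount of setup is visible at a glance.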

At the end of the day, I was thinking about more test ideas and felt we needed to add much more to the existing design. The developer, however, realized we were in trouble and that we needed to refactor the existing code to make it more testable. Lesson learned – I need to watch that we can add tests easily. That is a sign of a good design. It doesn’t take a lot of programming skill to realize that some unit tests are awkward while others are simple and elegant. I’m also fairly confident that testers who don’t program would learn to see the difference quite quickly after spending time with a developer who can demonstrate good and poor unit tests.

Pairing in Test Driven Development: Day One Report

An area of Agile Development where testers are usually absent is Test-Driven Development and other developer testing activities. Since I like to collaborate with developers as much as I can, I asked them what other areas I could support them in. (Brian Marick calls these types of activities Technology-facing programmer support.) They told me that one area where they would like to see testers work with them is unit test development, especially when using Test-Driven Development. While I have pair tested with developers to help generate unit testing ideas, I haven’t actually worked with them during development in a pair programming kind of role. They felt that pairing with a tester would be a good fit and would help them generate test ideas: the developer is thinking about programming most of the time, while the tester is thinking about tests most of the time. I encourage developers to use test-infected development techniques, so I decided to stop theorizing and actually give it a try. I can’t very well answer the question of how a tester can add value to Test-Driven Development unless I’ve tried it.

Yesterday I paired with a senior developer who was developing an application in Java using the IntelliJ IDEA IDE, which has JUnit integration. He kindly agreed to take me through the paces, but I have to admit I was a bit nervous. While I have basic Java programming skills, I am not a great coder, and I usually work with scripting languages when I develop automated test cases. I wasn’t sure whether I would be able to add any value to a programming activity.

After we walked through the business problem that the day’s coding effort was to address, and some of the code that was already in place, we began looking at the test framework. In this case, the developer had already written the first test, and an implementation that worked well enough to get that test to pass. We picked up at this point, looked at the business rules, and designed a new test. When we ran JUnit, the test failed, which told us that the implementation needed some work. The developer added more logic to get that test to pass, and then we added another test case. We continued adding test cases that would initially fail, and the developer would work on the implementation to get them to pass. At a certain point, he felt that we had a good basic set of test cases that covered enough of the business logic.
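For readers who haven’t seen the cycle in code, here is a minimal sketch of what a round or two of it might produce. The ShippingCharge class and its free-shipping rule are a made-up example, not the business logic we actually worked on: each test is written first, watched fail, and then just enough implementation is added to make it pass.

```java
import junit.framework.TestCase;

// A minimal sketch of the test-first cycle with a hypothetical business rule:
// orders over a threshold ship free, everything else pays a flat rate.
public class ShippingChargeTest extends TestCase {
    // Written first; it fails until forOrderTotal() returns the flat rate.
    public void testOrdersUnderThresholdPayFlatRate() {
        assertEquals(7.50, ShippingCharge.forOrderTotal(50.00), 0.001);
    }

    // Added next; it fails until the free-shipping branch below exists.
    public void testOrdersOverThresholdShipFree() {
        assertEquals(0.00, ShippingCharge.forOrderTotal(150.00), 0.001);
    }
}

// The implementation grows one passing test at a time.
class ShippingCharge {
    static double forOrderTotal(double total) {
        return total >= 100.00 ? 0.00 : 7.50;
    }
}
```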

At the end of our session, we had an implementation that satisfied some basic test cases. I suggested test ideas, but was completely dependent on the developer to write the JUnit tests. I was absorbed in thinking about more test ideas and what would be feasible. I suggested a lot of test ideas we could tackle the next day, and we discussed which tests we could justify doing. My initial response was to test as much as possible, while the developer was thinking about the big picture and how much time we could spend on testing. I realized this would be a trade-off; testing everything as robustly as possible would cause a lot of duplicated effort.

I didn’t feel like I added a lot of value to this session. Granted, I was getting trained in how Test-Driven Development works. I caught some minor syntax errors as we paired; the developer caught a few in his own work as well. We both missed some coding errors, but the compiler and the unit tests caught those.

My thoughts at the end of the day were that I had learned a lot, but added little value. Clearly, I need to learn more about the process and do this more often in the hope of adding value for a developer. The basic test ideas I had generated were ones the developer had already thought of. I then started to think of more complex test ideas, and felt that we had very little coverage. The developer agreed that the coverage was light and that we needed to work on more tests the next day. I left at the end of the day thinking about further tests we could write – I could add value there.