When I’m working on an agile project (or any project using an iterative lifecycle), an interesting phenomenon occurs. I’ve been struggling to come up with a name for it, and conversations with Colin Kershaw have helped me settle on “testing debt”. (Note: Johanna Rothman has touched on this before; she considers it to be part of technical debt.) Here’s how it works:
- in iteration one, we test all the stories as they are developed, and are in synch with development
- in iteration two, we remain in synch testing stories, but when we integrate what has been developed in iteration one with the new code, we now have more to test than just the stories developed in that iteration
- in iteration three, we have the stories to test in that iteration, plus the integration of the features developed in iterations that came before
As you can see, integration testing piles up. Eventually, we have so much integration testing to do on top of story testing that we have to sacrifice one or the other, because we are running out of time. To end the iteration (often two to four weeks in length), some testing has to be cut now and picked up later. I prefer keeping in synch with development, so I consciously incur “integration testing debt”, and we schedule time at the end of development to test the completed system.
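To make the pile-up concrete, here is a rough back-of-the-envelope sketch in Python (the numbers are made up, just to show the shape of the growth): the story-testing effort stays roughly flat each iteration, while the new integration checks grow with everything already delivered.

```python
# Made-up numbers to illustrate the shape of the growth, not a real project plan.
stories_per_iteration = 5
total_features = 0

for iteration in range(1, 6):
    new_features = stories_per_iteration
    # each new feature can interact with every feature already built
    new_integration_checks = new_features * total_features
    total_features += new_features
    print(f"Iteration {iteration}: {stories_per_iteration} story tests, "
          f"{new_integration_checks} new integration checks")
```

The story-testing load is the same every iteration, but the integration load keeps climbing; that climbing portion is the debt that piles up.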
Colin and I talked about this, and we explored other kinds of testing we could be doing. Once we had a sufficiently large list (unit testing, “ility” testing, etc.), it became clear that “testing debt” was a more appropriate term than “integration testing debt”.
Why do we want to test that much? As I’ve noted before, we can do testing in three broad contexts: the code context (addressed through TDD), the system context, and the social context. The social context is usually the domain of conventional software testers, and tends to rely on testing through a user interface. At this level the application becomes much more complex, greater than the sum of its parts, so there is far more scope for testing techniques to achieve coverage. We can get pretty good coverage at the code level, but the number of test possibilities grows as we move towards the user interface.
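As a hypothetical illustration of that growth (the numbers here are mine, not from any real application): a function with a handful of branches can be covered by a handful of unit tests, but the same behaviour exercised through the user interface sits behind combinations of form fields, user roles, and workflow states, and those combinations multiply.

```python
# Hypothetical numbers to show why test possibilities multiply toward the UI.
unit_tests_for_function = 4          # roughly one test per branch at the code level

field_values = 5                     # representative values per form field
form_fields = 3
user_roles = 4
workflow_states = 6

ui_combinations = (field_values ** form_fields) * user_roles * workflow_states
print(unit_tests_for_function)       # 4
print(ui_combinations)               # 3000
```

Even with aggressive pruning of those combinations, the testing problem near the user interface is much bigger than the one at the code level.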
I’m not talking about what is frequently called “iteration slop” or “trailer-hitched QA” here. Those occur when development is done, and testing starts at the end of an iteration. The separate QA department or testing group then takes the product and deems it worthy of passing the iteration after they have done their testing in isolation. This is really still doing development and testing in silos, but within an iterative lifecycle.
I’m talking about doing the following within an iteration, alongside development:
- work as a sounding board with development on emerging designs
- help generate test ideas prior to story development (generative TDD)
- help generate test ideas during story development (elaborative TDD)
- provide initial feedback on a story under development
- test a story that has completed development
- integration test the product developed to date (see the sketch after this list)
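To ground the last two activities, here is a minimal sketch using made-up names: a discount story delivered this iteration, and a cart feature delivered in an earlier one. The functions and values are mine, purely for illustration.

```python
# Made-up example: the story under test this iteration is a discount calculator;
# the cart total was delivered in an earlier iteration.

def apply_discount(price: float, percent: float) -> float:
    """Story completed this iteration."""
    return round(price * (1 - percent / 100), 2)

def cart_total(items: list[float]) -> float:
    """Feature delivered in an earlier iteration."""
    return round(sum(items), 2)

# Story test: exercises only what was built this iteration.
def test_apply_discount_story():
    assert apply_discount(100.0, 10) == 90.0

# Integration test: exercises the new story against the product built to date.
def test_discounted_cart_total_integration():
    discounted = [apply_discount(price, 10) for price in [100.0, 50.0]]
    assert cart_total(discounted) == 135.0

if __name__ == "__main__":
    test_apply_discount_story()
    test_discounted_cart_total_integration()
    print("both tests pass")
```

The story test gives the tight feedback loop on this iteration’s work; the integration test is the kind of check that accumulates as the product grows.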
Of note, when we are testing alongside development, we can actually engage in more testing activities than when working in phases (or in a “testing” phase near the end). We are able to complete more testing, but that can require more testers if we are still to meet our timelines. As we incur more testing debt throughout a project, we have some options for dealing with it. One is to leave off story testing in favour of integration testing. I don’t really like this option; I prefer keeping the feedback loop as tight as we can on what is being developed now. Another is to schedule a testing phase at the end of the development cycle to do all the integration, “ility”, and system testing. Again, I find this can cause a huge lag in the feedback loop.
I prefer a trade-off. We keep as tight a feedback loop as we can on testing the stories being developed, so we stay in synch with the developers. We do as much integration, system, and “ility” testing as we can in each iteration, but when we run out of time, we incur some testing debt in these areas. As the product grows (and there is now much more potential for testing), we bring in more testers to help address the testing debt, bringing on the maximum number near the end. We schedule a testing iteration at the end to catch up on the testing debt that we determine will best mitigate project risk.
There are several kinds of testing debt we can incur:
- integration testing
- system testing
- security testing
- usability testing
- performance testing
- some unit testing
And the list goes on.
This idea is very much a work in progress. Colin and I have both noticed that on the development side, we are also incurring testing debt. Testing is an area with enormous potential, as Cem Kaner has pointed out in “The Impossibility of Complete Testing” (Presentation) (Article).
Much like technical debt, we can incur testing debt unknowingly. Unlike technical debt, which can be paid down through refactoring, I don’t know of a way to repay testing debt other than to strategically add more testers and to schedule time to pay it back when we are dealing with contexts other than program code. Even in the code context, we may still incur testing debt that refactoring doesn’t completely pay down.
How have you dealt with testing debt? Did you realize you were incurring this debt, and if so, how did you deal with it? Please drop me a line and share your ideas.