
Designing a Gamification Productivity Tool

Gamification and Software Testing

I haven’t spoken about this project publicly because we never got to a public release. Software testing tools represent a tiny market, so they are incredibly difficult to fund. Some of you have asked me about gamification tools for testing, so I thought I would share this brain dump.

A few years ago, I was asked to help a development team that had significant regulatory issues and frequently accrued testing debt. The product owner’s solution was to periodically have “testing sprints” where other team members helped the overburdened test teams catch up. There was just one problem: the developers HATED helping out with two weeks of testing, so I was asked to do what I could to help.

A couple of the senior architects at this company were very interested in Session Tester and asked me why I had put game mechanics in a testing tool. I didn’t really realize at the time that I had put game mechanics in; I was just trying to make something useful and engaging for people. So I started talking with them more about game design, and they encouraged me to look into MMOs and co-operative games. The team played games together a great deal, so I learned about the games they enjoyed and tried to incorporate similar mechanics.

I set up a game-influenced process to help structure testing for the developers, and taught them the basics of SBTM (session-based test management). They LOVED it, and started having fun little side contests to try to find bugs in each other’s code. In fact, they were enjoying testing so much that they would complain about having to go back to coding to fix bugs. They didn’t want to do it full time, but a two-week testing sprint under a gamified, co-operative model with some structure (and no horrible boring test cases) really did the trick.

Eventually, I worked with some of the team members on a side project, and the team lead proposed creating a tool to capture what I had implemented. This was actually extremely difficult. We started with what had been done with Session Tester and went far beyond that, looking at a full-stack testing productivity tool. One of the key aspects of our approach that differed from traditional ET and scripted testing approaches was the test quest. As I was designing this test tool, I stumbled on Jane McGonigal’s work and found it really inspiring. She was also a big proponent of the quest as a model for getting things done in the real world. We were also very careful in how we measured testing progress. Bug counts are easily gamed and involve a lot of chance. I have worked in departments that measured on bug counts in the past, and they are depressing if you are working on a mature product while your coworkers are working on a buggy version 1.0.

One thing Cem Kaner taught me was to reward testers based on approach rather than easily counted results, because they can’t control how many bugs there may or may not be in a system. So we set up a system around test quests. Many people find pure exploratory testing (ET) too free-form, and it doesn’t provide a sense of completion the way scripted test case management tools do. And when you are in a regulatory environment, you can’t do ET all the time, while test cases are too onerous and narrowly focused. We were doing something else that wasn’t pure ET and wasn’t traditional scripted testing. It turns out the test quest was a perfect repository for everything that needed to be done. You also didn’t finish a quest until you had cleaned up data, entered bugs, and done the other things people might find unpleasant after a test session or two. There is more here on quests: Test Quests – Gamification Applied to Software Test Execution
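The tool itself never shipped, so purely as a rough illustration, here is a minimal sketch (in Python, with hypothetical names and tasks, not taken from the actual tool) of the idea behind a test quest: a unit of testing work that does not count as finished until the unglamorous follow-up chores are done too.

from dataclasses import dataclass, field
from typing import List

@dataclass
class QuestTask:
    description: str
    done: bool = False

@dataclass
class TestQuest:
    title: str
    tasks: List[QuestTask] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A quest only counts as finished when every task is checked off,
        # including the post-session cleanup and bug-entry chores.
        return all(task.done for task in self.tasks)

# Example quest; the specific tasks are illustrative only.
quest = TestQuest(
    title="Explore the order-entry workflow",
    tasks=[
        QuestTask("Review the risk assessment for this area"),
        QuestTask("Run a test session against the coverage outline"),
        QuestTask("File clear bug reports with steps and evidence"),
        QuestTask("Clean up test data"),
        QuestTask("Fill out and submit the session sheet"),
    ],
)
print(quest.is_complete())  # False until every task is done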

As I point out in that post, Chore Wars is interesting, but it was challenging for sustained testing because of the different personalities and motivations of the people involved. So rather than use it as a foundation, we sprinkled some ideas from ARGs (alternate reality games) throughout our process. Certain gamer types are attracted to things like Chore Wars, but others are turned off by them, so you have to be careful with a productivity tool.

We set up a reward system that reminded people to do a more thorough job. Was there a risk assessment? Were there coverage outlines? Session sheets? How were they filled out? Were they complete? What about bug reports? Were they complete and clear? I fought with the architects over having a leaderboard, but eventually I relented and we reached a compromise. Superstar testers can dominate a system like this, causing others to feel demoralized and stop trying. We decided to overcome that with chance events, which are a huge part of what makes games fun: no one could stay at the top of the testing leaderboard indefinitely, because they would get knocked to the bottom randomly and have to work their way back up. Unfortunately, we ran into regulatory issues with the leaderboard. While we forbade the practice of ranking employees based on the tool, this sort of thing can run afoul of labor laws in some countries, so we were working on alternatives but ran out of resources before we could get them completed.
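Again as a sketch only (the criteria, weights, and odds below are my assumptions for illustration, not what we actually built), here is one way to reward the testing approach rather than bug counts, with a chance event that knocks the current leader off the top of the board:

import random

# Illustrative criteria and weights for rewarding the approach taken,
# rather than easily gamed results like bug counts.
APPROACH_CRITERIA = {
    "risk_assessment_done": 3,
    "coverage_outline_done": 3,
    "session_sheet_complete": 2,
    "bug_reports_clear": 2,
}

def score_session(session: dict) -> int:
    # Sum the weights of the criteria this session satisfied.
    return sum(weight for name, weight in APPROACH_CRITERIA.items() if session.get(name))

def apply_chance_event(leaderboard: list) -> list:
    # Occasionally send the current leader to the bottom so nobody can park
    # at the top of the board; the 25% odds are purely illustrative.
    if len(leaderboard) > 1 and random.random() < 0.25:
        return leaderboard[1:] + leaderboard[:1]
    return leaderboard

print(score_session({"risk_assessment_done": True, "bug_reports_clear": True}))  # 5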

Social aspects of gaming are a massive part of online games in particular, but board games are more fun with more people too. We set up a communication system similar to a company IRC system we had developed in the past. We also designed a way to ask for help and for senior testers to provide mentoring, and, like MMOs, we rewarded people more for working together than for working alone. Like developer tools, we set up flags for review to help get more eyes on a problem.

We also set up a voting system so testers could nominate each other for best bug, best bug report, or best bug video, and we encouraged sharing bug stories and technical information with each other within the tool.

An important design aspect was interoperability with other tools, so we designed testing artifacts to be easily exported so they could be incorporated into tools people already use. Rather than try to compete with or replace them, we wanted to complement what testers were already doing in many organizations and offer an alternative to tired, outdated test case management systems. However, if you had one of those systems, we wanted to work with it, rather than against it.

Unfortunately, we ran out of resources and weren’t able to get the tool off the ground. It had the basics of Session Tester embedded in it, with improvements and a lot of game approaches mixed in with testing fundamentals.

We learned three lessons from all of this:

  1. Co-operative game play works well for productivity over a sustained period of time, while competitive game play can be initially productive but destructive over time. Competition has to be developed within a co-operative structure with a lot of care. Many people shut down when others get competitive, and rewarding things like bugs found or bugs fixed causes people to game the system rather than focus on value.
  2. Each team is different, and there are different personalities and player types. You have to design accordingly and make implementations customizable and flexible. If you design too narrowly, the software testing game will no longer be relevant. If the design is flexible and customizable from the beginning, the tool has a much better chance of sustained use, even if the early champions move on to other companies. I’ve had people ask me for simple approaches and get disappointed when I don’t have a pat answer on how to gamify their testing team’s approach without observing and working with them first. There is no simple approach that fits all.
  3. Designing a good productivity tool is very difficult, and game approaches are much more complex than you might anticipate. There were unintended consequences with certain approaches, and we really had to take different personalities and player styles into account. (There are also labour and other game-related laws to explore.) Thin-layer gamification (points, badges, leaderboards) had limited value over time and only appealed to a narrow group of people.

 If some of you are looking at gamification and testing productivity, I hope you find some of these ideas useful. If you are interested in some of the approaches we used, these Gamification Inspiration Cards are a good place to start.

Testing Debt

When I’m working on an agile project (or any process using an iterative lifecycle), an interesting phenomenon occurs. I’ve been struggling to come up with a name for it, and conversations with Colin Kershaw have helped me settle on “testing debt”. (Note: Johanna Rothman has touched on this before; she considers it to be part of technical debt.) Here’s how it works:

  • in iteration one, we test all the stories as they are developed, and are in synch with development
  • in iteration two, we remain in synch testing stories, but when we integrate what has been developed in iteration one with the new code, we now have more to test than just the stories developed in that iteration
  • in iteration three, we have the stories to test in that iteration, plus the integration of the features developed in iterations that came before

As you can see, integration testing piles up. Eventually, we have so much integration testing to do, as well as story testing, that we have to sacrifice one or the other because we are running out of time. To end the iteration (often two to four weeks in length), some sort of testing needs to be cut and looked at later. I prefer keeping in synch with development, so I consciously incur “integration testing debt”, and we schedule time at the end of development to test a completed system.
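Here is a back-of-the-envelope model of the effect, with made-up numbers for story testing effort, integration growth, and team capacity (all assumed for illustration), just to show the shape of the pile-up:

STORY_TESTING = 5         # effort units of story testing per iteration (assumed)
INTEGRATION_GROWTH = 2    # extra integration effort per previously built iteration (assumed)
CAPACITY = 8              # testing effort the team can absorb per iteration (assumed)

debt = 0
for iteration in range(1, 6):
    integration = INTEGRATION_GROWTH * (iteration - 1)
    demand = STORY_TESTING + integration
    debt += max(0, demand - CAPACITY)
    print(f"Iteration {iteration}: demand={demand}, accumulated testing debt={debt}")

The story testing stays flat, but the integration portion of the demand grows every iteration while capacity does not, so the debt starts accumulating a few iterations in.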

Colin and I talked about this, and we explored other kinds of testing we could be doing. Once we had a sufficiently large list of testing (unit testing, “ility” testing, etc.), it became clear that “testing debt” was a more appropriate term than “integration testing debt”.

Why do we want to test that much? As I’ve noted before, we can do testing in three broad contexts: the code context (addressed through TDD), the system context, and the social context. The social context is usually the domain of conventional software testers, and tends to rely on testing through a user interface. At this level, the application becomes much more complex, greater than the sum of its parts. As a result, there is a lot of opportunity for testing techniques to provide coverage. We can get pretty good coverage at the code level, but we end up with far more test possibilities as we move towards the user interface.

I’m not talking about what is frequently called “iteration slop” or “trailer-hitched QA” here. Those occur when development is done, and testing starts at the end of an iteration. The separate QA department or testing group then takes the product and deems it worthy of passing the iteration after they have done their testing in isolation. This is really still doing development and testing in silos, but within an iterative lifecycle.

I’m talking about doing the following within an iteration, alongside development:

  • work as a sounding board with development on emerging designs
  • help generate test ideas prior to story development (generative TDD)
  • help generate test ideas during story development (elaborative TDD)
  • provide initial feedback on a story under development
  • test a story that has completed development
  • integration test the product developed to date

Of note: when we are testing alongside development, we can actually engage in more testing activities than when working in phases (or in a “testing” phase near the end). We are able to complete more testing, but that can require more testers to still meet our timelines. As we incur more testing debt throughout a project, we have some options for dealing with it. One is to drop story testing in favour of integration testing. I don’t really like this option; I prefer keeping the feedback loop as tight as we can on what is being developed now. Another is to schedule a testing phase at the end of the development cycle to do all the integration, “ility”, and system testing. Again, I find this can cause a huge lag in the feedback loop.

I prefer a trade-off. We keep as tight a feedback loop as we can on testing stories that are being developed, so we stay in synch with the developers. We do as much integration, system, and “ility” testing as we can in each iteration, but when we run out of time, we incur some testing debt in these areas. As the product is developed further (and there is now much more potential for testing), we bring in more testers to help address the testing debt, bringing on the maximum number near the end. We schedule a testing iteration at the end to catch up on the testing debt that we determine will help us mitigate project risk.

There are several kinds of testing debt we can incur:

  • integration testing
  • system testing
  • security testing
  • usability testing
  • performance testing
  • some unit testing

And the list goes on.

This idea is very much a work-in-progress. Colin and I have both noticed that on the development side, we are also incurring testing debt. Testing is an area with enormous potential, as Cem Kaner has pointed out in “The Impossibility of Complete Testing” (Presentation) (Article).

Much like technical debt, we can incur it unknowingly. Unlike refactoring, I don’t know of a way to repay it other than to strategically add more testers, and to schedule time to pay it back when we are dealing with contexts other than program code. Even in the code context, we may still incur testing debt that refactoring doesn’t completely pay down.

How have you dealt with testing debt? Did you realize you were incurring this debt, and if so, how did you deal with it? Please drop me a line and share your ideas.

Iterative Testing (intro)

Edit April 11/2006 – This is a topic I’ll be talking about more in the near future. In the meantime, check out this testing debt post, which describes an iterative testing challenge.

A few people have asked me recently about how I test on Agile projects, or on iterative projects in general. I’ll post some of my experiences here with the disclaimer that these practices are very much a work-in-progress.

To begin this discussion, I’ll point readers back to this post, which is about describing testing activities to business stakeholders. Testing activities are broken up into Business Facing and Technology Facing activities (terms coined by Brian Marick), not just over a release, but through each iteration. These two sets of activities guide not only test execution, but test planning as well. Furthermore, each area of testing activity has collaborative components.

In the coming days, I’ll share some of the techniques that I have been working on.