Category Archives: gamification

Gamification Principles for Product Management Revisited

A few of you pointed out that the link to my 2014 article for commercelab is down, so I have uploaded a PDF of it: Gamification principles you should consider in mobile product management

As I read this article, I realized there are two more important aspects that I have learned in the past couple of years that I should share. This is what I would add if I were to write the article today:

Focus on your core, and build out from there

Games have to focus on their core function or they won’t get played. For example, many games nail movement and core actions (fire, gather, etc.) first, and only then add functionality on top. A game with a weak core won’t get used, so its survival depends on getting the core right, and the same goes for your app: if you don’t have the basics, you won’t have a very good product. It’s easy to get caught up in neat features or cool ideas, but if the core isn’t solid, the experience will suffer. How do you identify the core? Take away everything, then add back technology and features until people can get the job done with your app, and no more.

Summarize and display important information

Games create and operate on huge amounts of data, yet they don’t overwhelm players. They use clever views, HUDs and maps to show you what you need, in context, right now. Some games excel at showing what is important within your current location and context while still providing cues to change focus and look elsewhere for important events or information. They are brilliant at information architecture and information display. With non-game apps we are often at a loss, and we easily overwhelm and confuse users. Games, on the other hand, have a lot of approaches for showing just the right amount of information, in a way that grabs your attention, right when you need it.

Designing a Gamification Productivity Tool

Gamification and Software Testing

I haven’t spoken about this project publicly because we never got to a public release. Software testing tools represent a tiny market, so they are incredibly difficult to fund. Some of you have asked me about gamification tools with testing, so I thought I would share this brain dump.

A few years ago, I was asked to help a development team that had significant regulatory issues and frequently accrued testing debt. The product owner’s solution was to periodically have “testing sprints” where other team members helped the overburdened test team catch up. There was just one problem: the developers HATED helping out with two weeks of testing, so I was asked to do what I could to help.

A couple of the senior architects at this company were very interested in Session Tester and asked me why I had put game mechanics in a testing tool. I didn’t really realize at the time that I had put game mechanics in; I was just trying to make something useful and engaging for people. So I started talking with them more about game design, and they encouraged me to look into MMOs and co-operative games. The team played games together a great deal, so I learned about the games they enjoyed and tried to incorporate mechanics from them.

I set up a game-influenced process to help structure testing for the developers, and taught them the basics of SBTM. They LOVED it, and started having fun little side contests to try to find bugs in each other’s code. In fact, they were enjoying testing so much, they would complain about having to go back to coding to fix bugs. They didn’t want to do it full time, but a two week testing sprint under a gamified, co-operative model with some structure (and no horrible boring test cases) really did the trick.

Eventually, I worked with some of the team members on a side project, and the team lead proposed creating a tool to capture what I had implemented. This was actually extremely difficult. We started with what had been done with Session Tester and went far beyond that, looking at a full-stack testing productivity tool. One of the key aspects of our approach that differed from traditional ET and scripted testing approaches was the test quest. As I was designing this test tool, I stumbled on Jane McGonigal’s work and found it really inspiring. She was also a big proponent of the quest as a model for getting things done in the real world. We were also very careful in how we measured testing progress. Bug counts are easily gamed and involve a lot of chance. I have worked in departments that measured by bug counts in the past, and it is depressing if you are working on a mature product while your coworkers are working on a buggy version 1.0.

One thing Cem Kaner taught me was to reward testers based on approach rather than easily counted results, because they can’t control how many bugs there may or may not be in a system. So we set up a system around test quests. Also, many people find pure exploratory testing (ET) too free-form; it doesn’t provide a sense of completion the way scripted test case management tools do. And when you are in a regulatory environment, you can’t do ET all the time, while test cases are too onerous and narrowly focused. We were doing something that was neither pure ET nor traditional scripted testing. It turns out the test quest was a perfect container for everything that needed to be done. Also, you didn’t finish the quest until you had cleaned up data, entered bugs and done the other things people might find unpleasant after a test session or two. There is more here on quests: Test Quests – Gamification Applied to Software Test Execution

As I point out in that post, Chore Wars is interesting, but it was challenging for sustained testing because of the different personalities and motivations of the people involved. So we used some ideas from ARGs to sprinkle within our process rather than using them as a foundation. Certain gamer types are attracted to things like Chore Wars, but others are turned off by them, so you have to be careful with a productivity tool.

We set up a reward system that reminded people to do a more thorough job. Was there a risk assessment? Were there coverage outlines? Session sheets? How were they filled out? Were they complete? What about bug reports? Were they complete and clear? I fought with the architects over having a leaderboard, but eventually I relented and we reached a compromise. Superstar testers can dominate a system like this, causing others to feel demoralized and stop trying. We decided to overcome that with chance events, which are a huge part of what makes games fun: no one could dominate the testing leaderboard for long, because they would randomly get knocked to the bottom and have to work their way back up. Unfortunately, we ran into regulatory issues with the leaderboard – while we forbade the practice of ranking employees based on the tool, this sort of thing can run afoul of labor laws in some countries, so we were working on alternatives but ran out of resources before we could complete them.
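
To make that compromise concrete, here is a rough sketch of the mechanic, not the actual tool: testers earn points for thoroughness rather than bug counts, and a chance event occasionally knocks the leader to the bottom. All names and point values are invented for illustration.

```python
import random

# Invented testers and starting scores.
scores = {"Ada": 0, "Sam": 0, "Priya": 0}

# Points reward approach and completeness, not bug counts.
POINTS = {"risk_assessment": 5, "coverage_outline": 5,
          "session_sheet": 3, "clear_bug_report": 4}

def award(tester, artifacts):
    """Score a tester based on the testing artifacts they produced."""
    scores[tester] += sum(POINTS.get(a, 0) for a in artifacts)

def chance_event(probability=0.2):
    """Occasionally drop the current leader to the bottom of the board."""
    if random.random() < probability:
        leader = max(scores, key=scores.get)
        scores[leader] = min(scores.values())

award("Ada", ["risk_assessment", "session_sheet", "clear_bug_report"])
award("Sam", ["coverage_outline", "session_sheet"])
chance_event()
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```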

Social aspects of gaming are a massive part of online games in particular, but board games are more fun with more people too. We set up a communication system similar to a company IRC system we had developed in the past. We also designed a way to ask for help and for senior testers to provide mentoring, and like MMOs, we rewarded people more for working together than for working alone. Like developer tools, we set up flags for review to help get more eyes on a problem.

We also set up a voting system so testers could nominate each other for best bug, best bug report, or best bug video, and we encouraged sharing bug stories and technical information with each other within the tool.

An important design aspect was interoperability with other tools, so we designed testing products to be easily exported so they could be incorporated with tools people already use. Rather than try to compete or replace, we wanted to complement what testers were already doing in many organizations, and have an alternative to the tired and outdated test case management systems. However, if you had one of those systems, we wanted to work with it, rather than against it.

Unfortunately, we ran out of resources and weren’t able to get the tool off the ground. It had the basics of Session Tester embedded in it, with improvements and a lot of game approaches mixed in with testing fundamentals.

We learned three lessons with all of this:

  1. Co-operative game play works well for productivity over a sustained period of time, while competitive game play can be productive at first but destructive over time. Competition has to be developed within a co-operative structure with a lot of care. Many people shut down when others get competitive, and rewarding things like bugs found or bugs fixed causes people to game the system rather than focus on value.
  2. Each team is different, and there are different personalities and player types. You have to design accordingly and make implementations customizable and flexible. If you design too narrowly, the software testing game will no longer be relevant. If the design is flexible and customizable from the beginning, the tool has a much better chance of sustained use, even if the early champions move on to other companies. I’ve had people ask me for simple approaches and get disappointed when I don’t have a pat answer on how to gamify their testing team’s approach without observing and working with them first. There is no simple approach that fits all.
  3. Designing a good productivity tool is very difficult, and game approaches are much more complex than you might anticipate. There were unintended consequences when using certain approaches, and we really had to take different personalities and player styles into account. (There are also labour and other game-related laws to explore.) Thin-layer gamification (points, badges, leaderboards) had limited value over time and only appealed to a narrow group of people.

If you are looking at gamification and testing productivity, I hope you find some of these ideas useful. If you are interested in some of the approaches we used, these Gamification Inspiration Cards are a good place to start.

Interview with Anna Sort: Gamification in Health Care – Part 2

This is Part 2 of my interview with Anna Sort. If you haven’t already, check out Part 1 of the interview. Anna is a professional nurse who is working to bring together smartphone and video-game technology into healthcare. She is also an Associate Professor at the University of Barcelona, and works both as a gamification and as a serious games consultant.

Designing a Better Life Interview with Anna Sort – Part 2

Jonathan: What can go wrong with a system that utilizes technology and gaming mechanisms with a worldwide pool of contributors (aka “the crowd”)? For example, how do you design your game to prevent or deal with people who abuse the rules or take advantage of bugs or loopholes?

Anna:
I think you cannot really prevent people from trying to cheat in a game; for some people, trying to exploit games is part of the challenge and the fun. So you just have to do your best designing to make it fun to play the regular way, rather than trying to make it “uncheatable.”

In our World of Warcraft diabetes add-on we are still deciding if we want the add-on in itself to be fun or not. However, what is clear is that if people want to cheat, they are allowed to. The add-on is created to fit an exploring environment rather than a “win” situation; it is there for the player to decide whether to play with risky behavior and see what happens if they mix certain things or eat certain foods, or to try their best to always keep their glucose on track. It will definitely be exciting to see what people do with the add-on in the end! We are here to learn about how people interact and react to “serious gaming” (using an already existing game for other learning purposes).

Jonathan: What are the biggest challenges you face from a design perspective to create something that people will interact with? How do they apply their virtual learning to their own real lives?

Anna:
This is a very interesting question and is something we ask ourselves. It isn’t explored that much in “serious games” in general, and we hope to find out more with this experiment. We hope to help people understand how diet and exercise affect diabetes, so in this first experiment we aren’t looking to change behavior in real life, but we will have a couple of questions at the end of the experiment about whether users have made any changes in their lifestyle.

Jonathan: What does success look like for your World of Warcraft Diabetes mod project?

Anna:
This is a very difficult question. In this first trial it will be complicated to have a bar to measure against and say: “Yes, I’ve succeeded.” At the moment we are working on the add-on, but there is a lot of work to do regarding questionnaires. How many will download it? How many will play it? What will be their reaction? There is a lot of work still to be done to determine a good answer to this question.

Jonathan: No problem. When you are truly innovating, you don’t really know until you try out an idea, discover what happens, and refine. Moving on, what new thing/trend/innovation are you keeping your eye on at the moment? Is there anything else you’d like to share with us?

Anna:
I’m very much into gamification and self-tracking apps. I’m a nurse, so I’m all about prevention, and I think mHealth (mobile health) offers us a great opportunity to focus on that and give users tools to take charge of their well-being, lifestyle and health.

For more on these ideas, check out Anna’s latest presentation: Video Games and Gamification for Health Care.

Jonathan:
Thank you again Anna. I hope for the best in your work, and thanks for contributing to something that is important to all of us!

If you’d like to help out Anna with her WoW diabetes mod project, contact me and I’ll put you in touch with her and her project team.

Interview with Anna Sort: Gamification in Health Care – Part 1

For my first blog interview in my Designing a Better Life series, we will be chatting with Anna Sort. Anna is a professional nurse who is working to bring together smartphone and video-game technology into healthcare. She is also an Associate Professor at the University of Barcelona, and works both as a gamification and as a serious games consultant. I find Anna inspiring because she is working hard in the area of mHealth (mobile health) and games for health.

Anna Sort speaking at the Gamification World Congress

Anna is based in Spain, and graciously agreed to this interview in English for me, and for you, our readers. For more about Anna, check out her blog: Lost Nurse in the Digital Era and these two videos on youtube of her presenting: Designing Games as a Nurse, Gamification of Health Products. You can find her on Twitter here: @LostNurse.

Designing a Better Life Interview with Anna Sort – Part 1

Jonathan: Please tell us a bit about yourself. How did you get involved in the games for health field?

Anna:
I have always been a gamer, but programming never attracted me. I was looking for a job as a nurse abroad when Blizzard Entertainment called me to be a customer support representative in France. They offered to take care of housing and banking, making it easy to move and start work in a new country. Also, working in a multinational office was something I had always wanted to try (and being a customer support rep for World of Warcraft also seemed very cool). After six months I took the nurse spot in the company, and being a gamer working with gamers made me realize how much easier it would be to communicate and experiment with health information through games rather than through 30-minute talks plus a flyer that never gets read at home.

After a while I started to look for Master’s degrees that would allow me to go down a “techie-nurse” path. I found an interesting Master’s degree called CSIM, which was focused on a multidisciplinary approach to problem solving. I was the only healthcare student in the class; most of my classmates were designers, artists and programmers. Surprisingly, two of the eight Master’s thesis projects (a rehabilitation system and an exergaming platform) would have benefited from a healthcare professional. In a way, that reinforced my belief that this “new profession” I wanted to pursue would eventually exist.

My thesis was about the quantity of exercise children did while playing on an inflatable slide that had a game projected onto it, which the kids interacted with through an infrared system (it’s called an “exergaming platform”). I took part in the game design as well as the exercise experiment for my thesis, so it was really interesting. I worked on a multidisciplinary team and I loved it. After my master’s I started my career alone, and soon enough I was contacted by Homero Rivas at Stanford, with whom I talked about my vision for games, and we are currently developing with MIT the first World of Warcraft health add-on to raise awareness of diabetes.

Jonathan: The research and work that you do sounds fascinating. Can you explain how your work can help make our lives better?

Anna:
Behavior-wise, humans are prone to play, and games offer a wide variety of play, such as exploring, competing, collaborating and self-expression. Taking gaming into healthcare is a way to make taking charge of one’s health more interesting, intriguing and motivating. It is not about making fun of having a disease or trivializing it as less important because it’s a game. It’s about providing the tools and inspiring the motivation and behavior change to be healthy and improve your lifestyle.

Jonathan: I love your World of Warcraft Diabetes mod project. Can you tell us about this project and your goals? Is there anything we can do to help you?

Anna:
Thank you. It is a very exciting project indeed, and challenging! Especially because the game World of Warcraft has a pool of 9 million users, which means if 5% of these users download and play the add-on, we will have the biggest Health Game research experiment ever made!

The add-on is something you download that changes the game’s user interface. What we have done is add a glucometer on the side that is impacted by the player’s actions, such as running, fighting and eating foods. World of Warcraft has a lot of foods, drinks and alcoholic beverages, so it makes the experimentation part very interesting. It isn’t focused on the disease itself, as we are aware not everyone reacts the same way to foods and drinks, and compositions aren’t equal worldwide. We want to raise awareness, and maybe even attract new users by having youngsters encourage family members to play and see what it is like to live with diabetes.

We are not sure what kind of people we will attract, and we are still debating whether we should gamify the add-on or whether the game is good enough as-is that people will still enjoy it with the add-on. Every step has to be taken very wisely, as the amount of research information might otherwise overwhelm us and turn out to be unusable.

How can people help? We still need a good experiment designer to join the team, so anything in that regard is helpful. And programmers. Hands are always needed!

Jonathan: What design concepts do you find the most useful for this project?

Anna:
Possibly the most important part of our design process will be focused on the tutorial. World of Warcraft has an excellent tutorial to help new players get on board, and since the new players we might attract will already have a whole game to learn, we want to make sure the on-boarding of the add-on doesn’t collide with the game tutorial and so it doesn’t overwhelm the player.

Stay tuned for Part 2.

Creating Great Storytelling to Enhance Software Testing Scenarios

Recently, I wrote about Using Storytelling Games in Software Testing, and pointed you to a paper by Martin Jansson and Greger Nolmark. Now I want to give you some tips on creating great storytelling for your testing projects.

First of all, check out Cem Kaner’s work on Scenario Testing: An Introduction to Scenario Testing. I want you to pay special attention to the CHAT (cultural-historical activity theory) model that he talks about. For more on CHAT and testing, read this paper: Putting the Context in Context-Driven Testing (an Application of Cultural Historical Activity Theory). Pay special attention to the descriptions of networks of activity, and tensions. These are vital to help construct variations and different forces within our storytelling. Both of these pieces are foundational and worth the effort to dig into.

Now, I want you to read Hans Buwalda’s article on Soap Opera Testing. This is a nice variation on scenario testing. Buwalda uses television soap operas as inspiration for story arcs, structure, and variation. Remember, there are lots of variations on a theme in testing, just as in real life! Further to that, look into testing tours. Cem Kaner has a blog post with a link or two to help get some background info: Testing tours: Research for Best Practices?

Soap Opera tests, Testing Tours and Test Scenarios are a great place to start creating good testing stories.

Next, read up on personas in user experience work. Jenny Cham has a really nice description, with lots of helpful links, on creating personas here: Creating design personas. Remember to explore the links in her post; she has great advice there. I wrote a position paper about using UX personas in testing years ago and mentioned it in this blog post (I will have to dig it up; the link is dead). Elisabeth Hendrickson introduced me to this idea, but she recommended using extreme personas such as cartoon characters. I prefer the standard UX methods pioneered by people like Alan Cooper, but cartoon or other characters are a great place to start, especially if you feel stuck. Personas are a great way to start developing relevant characters for your story. What are their motivations when they use our software? What are their fears? What are their cares and worries and distractions?
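
If it helps to see the shape of a testing persona, here is a tiny, hypothetical sketch that captures those questions; the person, fields and details are invented for illustration, not taken from any particular UX method.

```python
# A hypothetical persona for test storytelling; adapt the fields to your context.
persona = {
    "name": "Nadia",
    "role": "Night-shift charge nurse using the app one-handed",
    "motivations": ["record vitals quickly", "avoid double data entry"],
    "fears": ["losing a record mid-save", "waking a patient with alert sounds"],
    "cares_and_worries": ["patient privacy", "handover accuracy"],
    "distractions": ["interruptions every few minutes", "weak hospital Wi-Fi"],
}

# Each test story can then be told from this persona's point of view.
print(f"{persona['name']} wants to {persona['motivations'][0]} "
      f"despite {persona['distractions'][0]}.")
```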

Next, I want you to read this piece on telling a great story by a famous author: Kurt Vonnegut at the Blackboard. (I am getting to the gamification side of this project, and I asked Andrzej Marczewski for good references on storytelling in games, and this was the first link he sent me. Thanks Andrzej!) Notice the different options for structuring a good story. In testing, we can use different ones for the same scenario, if we think about activity patterns, tensions, characters, and variations during real life product use. Several versions of one story will yield different kinds of important information and observations. Vonnegut provides a simple framework for story creation that we can easily adapt and apply.

Finally, I want you to look at storytelling in games. Andrzej talks about it here: I want to experience games not just play them. Notice that within the context of a well-designed game, there is a sense of cause and effect: decisions made in one place can impact things in other areas of the game. That’s just like real life, and it is important to add those dimensions to the stories we use for testing. Variation and dimensions have different effects in a system, and they are rewarding to exercise. Now read this Gamasutra piece: The Designer’s Notebook: Three Problems for Interactive Storytellers, Resolved by Ernest Adams. The points about character amnesia, internal consistency and narrative flow are pure gold for testers. We often arrive in a system without really knowing what is going on, especially at first. However, our customers are also starting from scratch when they use our app for the first time. These problems are areas we should also address when creating stories to test around.

There is also a lot of really useful information in Environmental Storytelling: Creating Immersive 3D Worlds Using Lessons Learned from the Theme Park Industry by Don Carson, particularly the importance of incorporating environmental conditions (especially for you mobile testers!) and the idea of an all-encompassing world rather than one linear story.

Andrzej also recommends reading Uncle Computer, Tell Me A Story, and Story Structure 104: The Juicy Details.

As testers, we can incorporate more than a linear scenario into our work. We can add so much more depth to our test approach using stories and worlds. Story development in games is remarkably similar to the storytelling we need to do in testing. There is a lot to be learned about creating virtual worlds and stories within them to help change our perspective, explore variations and make important discoveries about the software and systems we test. We can leverage these various works that have been provided to us to create something new and powerful.

Some final points to put this all together:

  • Combine the elements from each of the areas I asked you to study above to create a great story, or even better, sets of stories
  • Use structure to create real life conditions: different people, motivations, different environmental conditions, and change.
  • Add plot twists, surprises and ulterior motives, and look for unintended consequences in systems and people
  • Don’t stop at one scenario – create variations on a theme, and change the setting, or the entire world you have created to help change your perspective
  • Introduce different characters – are they interrupting? Helping?
  • Create a beginning, middle and an end
  • Move beyond all happy endings – also try to leave things unresolved, or end on a bad note

I have compiled several foundational concepts to help influence your storytelling, so now the rest is up to you. How you combine them to create something useful is up to you and your team. You have an opportunity to create rich perspectives to kickstart your testing efforts.

Happy storytelling!

Exploratory Test Adventures – Using Storytelling Games in Software Testing

I love to see creative work from people in the industry, and Martin Jansson always impresses me with his insatiable desire to learn, to do better and to take risks with ideas to push the craft forward. While I have been looking at gamification lately, it was exciting to learn that he and his colleagues had already been applying some of these ideas by looking at storytelling and games, and using some of those ideas to add more fuel to test idea generation during exploratory testing work.

Cem Kaner’s work on scenario testing describes a powerful approach: quickly creating useful testing scenarios and ideas by building a compelling story about the people who use our software, describing typical usage, possible outcomes, and the human activity patterns surrounding usage. One of the most interesting outcomes of this kind of work is that it puts us in the role of our end users, and helps us quickly identify problems that they are likely to encounter. It also helps us understand when our software actually delivers: we can tell project stakeholders that our software works within the narrative of real-life scenarios. So not only do we uncover important problems, we also provide information that validates what we have done. “Yes! It works in an emergency scenario we didn’t think of during requirements definition!”

There are a lot of ways that we can frame scenario tests to provide structure and help with creative test idea generation. Using gaming as an influence, Martin Jansson and Greger Nolmark wrote a paper on adding structure to scenarios during exploratory testing sessions using storytelling as a guide: Exploratory Test Adventure – a Creative, Collaborative Learning Experience.

I got excited when I started reading this paper because any kind of creative structure that we can add to test idea generation helps us be more thorough, and helps create more and better ideas. As Martin says, “…by setting up scenes, just like in a roleplaying adventure (or RPG game), you and your testers will have an increased learning experience that lets you explore beyond regular boundaries, habits and thought patterns.”

I often lament that testing information focuses too much on the negative, when we should also tell stakeholders when the team has done a great job. As a designer and programmer, sometimes I get worn down by constant criticism and ask the testers to also give me some positive feedback along with it. After all, critiquing isn’t all about the bad news, and it sometimes feels hopeless if all we get is the negative with no positive feedback at all. Testers, on the other hand, often feel like they are failing if they don’t find bugs and provide consistent negative feedback. But if we look at stories, some of them have happy endings. They have twists and turns, and there are negatives, but there are also positives. Both are important factors in a story or game (all positive is too sappy and silly, all negative is too depressing), and they are also important factors for determining whether a product or project has merit, or whether we are ready to ship. Storytelling is one mechanism we can look to for getting beyond mere bug hunting and providing quality-related information, both positive and negative. This pleases me.

Check it out, it is another example of looking at game mechanics, and applying one gamification aspect to software testing to help us make testing more valuable, more effective, more creative, and hopefully, more fun.

Test Quests – Gamification Applied to Software Test Execution

I decided to analyze a game feature, the “quest“, which is used in popular video games, particularly MMORPGs. Quests have some compelling aspects for structuring testing activities. Jane McGonigal‘s book “Reality is Broken” provided me with a solid analysis of quests, and how they can be adapted to real-life activities. Working from her example of a quest (ch. 3, p. 56), I created a basic test quest format:

  1. Goal statement (what we intend to accomplish with our testing work)
  2. Why the goal matters (why are we testing this?)
  3. Where to go in the application (what technique or approach are we using to test?)
  4. Guidance (not detailed steps, but enough to help. Bonus points for using video or other rich media examples.)
  5. Proof of completion (how do you know when you are finished?)

A quest is larger than a single testing mission (or a test case), but is smaller than a test plan. It’s a way we can organize testing tasks to provide a sense of completion and interest in areas that require exploration and creativity. Just like in a video game, there are multiple ways to satisfy a quest. Once we have fulfilled a quest, which might take hours or days depending on how it is created, we can move on to another one. It’s another way of organizing people, with the added bonus of leveraging years of game design success. Furthermore, modern technology involves a lot of collaboration between people in different locations, using different technology to reach a common goal, and we need to adapt testing to meet that. Testing a mobile app in your lab, one tester at a time, won’t really provide useful testing for an app that requires real-time communication and collaboration for people all over the world. MMOs do a fabulous job of getting people to work hard and co-ordinate activities in a virtual world, and people have fun doing it. I decided to apply the same idea to testing.
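
As a rough illustration, here is one way a test quest could be represented in code. The five-part format comes from the list above; the field names, completion check and example values are my own invention, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TestQuest:
    goal: str                  # what we intend to accomplish with this testing
    why_it_matters: str        # why we are testing this
    where_to_go: str           # technique, approach, or area of the application
    guidance: list = field(default_factory=list)             # hints, links, videos
    proof_of_completion: list = field(default_factory=list)  # artifacts required
    delivered: list = field(default_factory=list)            # artifacts produced so far

    def is_complete(self) -> bool:
        # The quest isn't done until every required artifact exists,
        # including the unglamorous ones (data cleanup, bug entry).
        return set(self.proof_of_completion) <= set(self.delivered)

quest = TestQuest(
    goal="Shake out the new checkout flow under flaky network conditions",
    why_it_matters="Purchase failures are our highest-impact production risk",
    where_to_go="User scenarios plus network throttling on the payment screens",
    guidance=["coverage outline", "short video of a sample session"],
    proof_of_completion=["session sheets", "bug reports", "test data cleaned up"],
)
print(quest.is_complete())  # False until the proof artifacts are delivered
```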

Where do quests fit? Think in terms of a hierarchy of activities:

  • test strategy and plan
  • risks that are mitigated through testing
  • different models of coverage that map to risk mitigation
  • test quests
  • sessions, tours, tasks
  • feedback and reporting

A good test approach will have more than one model of coverage (check I SLICED UP FUN for 12 mobile coverage models), and under each model of coverage, there will be multiple quests. Sometimes quests will be repeated when regressions are required.
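
As a tiny illustration of that nesting (all names invented), quests simply hang off each coverage model, and a model can be revisited when regression passes are needed:

```python
# Hypothetical coverage models mapped to the quests that exercise them.
coverage_models = {
    "user scenarios": [
        "First-time buyer checkout quest",
        "Refund-after-shipping quest",
    ],
    "I SLICED UP FUN (mobile)": [
        "Interruptions and notifications quest",
        "Low battery and offline quest",
    ],
    "claims and requirements": [
        "Payment provider claims verification quest",
    ],
}

for model, quests in coverage_models.items():
    print(model)
    for quest_name in quests:
        print("  -", quest_name)
```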

So why add this structure?

One area I have worked on over the years is using structure and guidance to help manage exploratory testing efforts. In the past, test case management systems provided some measure of coverage and oversight, but they offer little in the way of intrinsic value for testers. People get tired of repeating the same tests over and over, but management loves the metrics these systems provide, even though they are incredibly easy to cheat. Furthermore, from a tester’s perspective there is an extrinsic reward inherent in the design of the tools, and they are easy to use. There is also a sense of completion: once I have run through X number of test cases, I feel like I have accomplished something.

With exploratory testing, the rewards are more intrinsic. The approach can be more fulfilling; I personally feel like I am approaching testing in a more effective way, and I can spend my time on high-value activities. However, it is harder to measure coverage, and it is more difficult to direct people into areas where coverage is required without adding some guidance. There have been a lot of different approaches to adding structure to exploratory testing over the years to find a balance. Test quests are another way of adding structure and finding that balance between the intrinsic rewards of pure exploratory testing and the extrinsic rewards of scripted testing; the idea is to provide a blend of the two.

As many of you have heard me argue over the years, test cases and test case management systems are merely one form of guidance; there are others. In the exploratory testing community, you will see coverage outlines, checklists, mind maps, charter lists, session sheets, and media such as video demonstrations, along with all sorts of other alternatives. When it comes to managing exploratory testing, one of the first places we start is session-based test management. This approach helps us focus testing in particular areas and provides a reviewable result, which makes our auditors and stakeholders happy. I’ve used it a lot over the years.

I’ve also used Bach’s General Functionality and Stability Procedure for over a decade to help organize exploratory testing, though through experience and unique projects and contexts, I have adapted and moved away from the orthodoxy where I saw fit. When I started analyzing why people on my teams have fun with testing, SBTM and Bach’s General Functionality and Stability Procedure were big reasons why. Even though I often use a much more lightweight version of SBTM than he created, people appreciate the structure. The General Functionality and Stability Procedure is a great example of guidance for analysis, exploration, and great things to do as testers.

The other side of fun on the teams I work on is related to humour, collaboration and technology. We often come up with nicknames, and divide testing up into teams and hold contests. Who can come up with the best test approach? Who recorded the best bug report video? Who found the most difficult-to-find bug last week? Which team has the most pop culture references in their work? Testing is filled with laughter, excitement and learning, and some good plain old-fashioned silly fun. We communicate constantly using technology to help stay up to speed on changes and progress, and often other team members want to get in on the action. Sometimes it’s hard to get the coders to code, the product owners to product own, and the managers to manage, because everyone wants in on the fun. In the midst of this fun is incredibly valuable testing. Stakeholders are blown away by the productivity of testing, the volume of useful information produced, the quality of bugs, and the detailed, useful artifacts produced, from bug reports to status reports and quality criteria. While there is laughter and fun, there is hard work going on. I learned why this is so effective reading Jane McGonigal’s work.

In Reality is Broken, Jane McGonigal describes Alternate Reality Games (ARGs). These are real-life activities that are gamified – they have a game-like structure applied to them. She mentions Chore Wars, and how gamifying something as mundane as household chores can turn it into a fun activity. She mentions that since cleaning the bathroom is a high-value activity in the game, she and her husband race to clean it before the other does. McGonigal explains that since there is a choice, and meaning attached to the task, people choose to do it under the mechanism of the game. It’s no longer that awful thing no one wants to do because it is unpleasant; framed within a game context, it is a highly sought-after quest or task to complete. You get points in the game, you get bragging rights, you get intrinsic rewards as well as the extrinsic clean bathroom. Amazing.

If we apply that to testing, how about using lessons from ARGs to gamify things like regression testing, or test data creation, or other maintenance tasks we don’t like doing? One way we can do this is to sprinkle these tasks within quests. You can only complete the quest by finishing up one of these less desirable tasks.

In Reality is Broken, McGonigal defines a game as having four traits: a goal, rules, a feedback system, and voluntary participation (p. 21). Working backwards, in exploratory testing a lot of what we do is voluntary, because testers have some degree of freedom to make decisions about what they are going to test, even if it is within narrow parameters of coverage. Furthermore, we can choose a different model of coverage to reach a goal. For example, I was working with an e-commerce testing team who were bored to death of testing the purchasing engine because they were following the same set of functional test scripts. To help them be more effective and to enjoy what they were doing, I introduced a new model of coverage to test the purchasing engine: user scenarios. Suddenly, they were engaged and interested, and they found bugs they had previously missed. I then helped them develop more models of coverage so that they could change their perspective and test the same thing, but with variation to keep them engaged and interested while still satisfying coverage requirements. As humans, we need to mix things up. Previously, they had no choice – they were told to execute the tests in the test case management system, and that was the end of it.

Feedback systems are often linked to bug reporting systems in testing. But I like to go beyond that. Bring in other people to test with you in pairs, trios or whatever combination to bring more ideas to the table. This isn’t duplicated testing, but a redoubling of brain power and effort. I also utilize instant messaging, IRC, and big visible charts to help encourage feedback across functional areas of teams.

Rules in testing are often related to what is dictated to us by managers, developers, and tradition. It boggles my mind how many so-called Agile programmers will demand their testers work in un-Agile ways, expecting them to create test plans, test cases and use test case management systems. When I ask the programmers if they would like to work that way, they usually say no. Well guess what, not many other homo sapiens like to work that way either. I prefer to have rules around approach. We have identified risks, and models of coverage to mitigate those risks, and we use people, tools and automation to help us reach our goals. Rather than count test cases and bugs, we rate our team on our ability to get great coverage and information that helps stakeholders make quality-related decisions.

Finally, a goal in testing needs to be project-specific. If you want to fail, just copy what you did last time on your test project; the problem with that is you are unaware of new risks or changes, and you’ll likely be blind to them. Every project needs a goal, and a way to measure whether we did the right sort of work to reach it. Rather than “run the regression tests, automate as many as possible, and if there is time, do other testing”, we have something specific that helps ensure we aren’t doing busywork, but creating value.

When it comes to quests, they can follow this format as well: a goal, a feedback system, rules or parameters on where to test, and voluntary participation. As long as all the quests are fulfilled for a project, it doesn’t matter who did them.

It turns out that with my application of SBTM and Bach’s General Functionality and Stability Procedure, plus some zany fun and the use of technology to help socialize, report and record information, I was right next door to gamification. Using gamification as a guide, I hope to provide tools for others who also want to make testing effective and fun. A test quest is one option to try. Consider using avatars, fun names and anything that resonates with your team members to help make the activity more fun. Also consider rewards for difficult quests and tasks, such as a free meal, public kudos, or time off in lieu. Get creative and use as much or as little from the video game world as you like.

Some of my goals with test quests are:

  • Enough structure to provide guidance to testers so they know where to focus efforts
  • Not so much structure (like scripted test cases) that personal choice, creativity and exploration are discouraged or forbidden
  • Guidance and structure is lightweight so that it doesn’t become a maintenance burden like our scripted regression test cases become (both manual and automated)
  • Testers get a sense of purpose and meaning in their work, and a sense of completion from finishing the set of tasks in a quest
  • Utilize tools (automated tests, automated tasks, simulators, high volume test automation, monitoring and reporting) to help boost the power of the testers and be more efficient and effective, and to do things no human could do on their own
  • Encourage collaboration and sharing information so that testers can provide feedback to other project team members on the quality of the products, but also get feedback on their own work and approaches
  • Encourage test teams to use multiple models of coverage (changing perspectives, using different testing techniques and tools) on a project instead of thinking of coverage as a singular thing
  • Utilize an effective gaming structure to augment reality and encourage people to have fun working hard at testing activities

I am encouraging testing teams to use this as a structure for organizing test execution to help make testing more engaging and fun. Feel free to add as many (or few) elements from video game quests as you see fit, and alter to match the unique personalities and goals of the people on your team. Or, study them and analyze how you organize your testing work for you and your teams. Does your structure encourage people to have fun and work hard at accomplishing something great? If not, you might learn something from how others have managed to get people to work hard in games.

Happy questing!

Applying Gamification to Software Testing

I wrote an article for Better Software magazine this month called “Software Testing is a Game”, available here in PDF format. I wrote about using gamification as an approach to analyze and help make software testing more engaging. I encouraged readers to apply some ideas from gamification to their own testing efforts. Now, why would I do a thing like that? And what do I mean by using game mechanics when we are testing? Games are all well and good, and I may enjoy them, but we are talking about serious work here, why would we make it look like a game?

Let me give you a bit of background information.

I was working with my friends Monroe Thomas and David McFadzean on product strategy when they started bringing up my gamification design ideas. I use gamification in mobile app design to help apps be more engaging for users. That doesn’t mean that I make an app look like a game; it means I use ideas from games to help make the app more interesting and easier to use. However, we weren’t talking about mobile apps, so I was a bit surprised. They pointed out that the same concepts that make gamification work in mobile apps apply to other apps; after all, David and I had even written an article about using gaming when creating software processes. Why couldn’t I use those ideas in a product strategy meeting for something else?

Good point.

In fact, they urged me to look at some of my other prior app designs; they felt I would find gamification-style aspects in those as well, because I am always trying to make apps more engaging. Once I started thinking about the implications of what they were saying, an entire new world of possibility opened up. I felt like they had just kicked open a big door of perception for me.

But wait a minute. What is this business about games? Well, the thing with gamification is that when I use those tools correctly in an app, you don’t know they are there. I don’t, for example, put childish badges and leaderboards in a productivity app and then say: “Look! Gamification at work!” Andrzej Marczewski describes gamification mechanics in terms we can relate to in his blog post Game Mechanics in Gamification as: Desired Behavior, Motivation and Supporters.

Andrzej uses a game format to illustrate his point, but it should be obvious that these three themes are not limited to games. Where game designers shine, and where policy wonks and enterprise or productivity designers tend to fail is in the structure around desired behavior. Too often, we just expect people to excel in a work place environment with little support. Games on the other hand tickle our emotions, they captivate us, and they encourage us to work hard at solving problems and reaching goals.

Framing something like software testing in terms of gaming, borrowing some of its ideas and mechanics, applying them and experimenting can be incredibly worthwhile. After all, as I state in the article, it is difficult to get people involved in software testing, and as technology becomes more pervasive and more enmeshed in our everyday lives, it has more potential to do harm. We need new people, new ideas and new approaches, and I want to figure out how to make testing more engaging for people. Why can’t effective testing be fun?

It can.

If you work on a team with me, you will notice that there is a lot of laughter, a lot of collaboration, a lot of discovery and learning. And everyone tests from time to time. Sometimes, it can be difficult to get the coders to code, the designers to design and the managers to manage, because everyone wants to test. Why is that? Well, gamification can help provide a structure to analyze what we do and learn why some things are fun and help us work hard, while others cause us to avoid them.

Speaking of analyzing something from a gamification perspective, remember in the Better Software article how I described several aspects from gaming and asked you to apply it to your testing work? Prior to writing the article, I did exactly that with a product I designed called Session Tester. Aaron West and I developed a tool to help testers capture information while using an approach called Session-Based Testing. We had high hopes for the project, but after several setbacks, it’s now dormant. However, a back of the napkin analysis of the tool using a gamification approach was incredibly useful. This is what we came up with, using game concepts from Michael Wilson’s “Gamification: You’re Doing it Wrong!” presentation:

  1. Guidelines and Behaviors:
    Context and rules around the tool were hit and miss. The tool enforces the basic form of session-based testing, which helps people learn how to approach testing from this perspective. People are required to fill in the minimum information to create a session sheet. There are strategy ideas readily at hand, and the elements are easily added by using tags. The tool was helpful for teaching beginners the basic form of SBT, but we didn’t enforce the original SBTM rules as set out by James and Jon Bach. This hurt the tool’s effectiveness. While we value the ability for people to modify and adapt, we should have started with the known rules and then provided the ability to adapt, rather than designing it from an adapted view. This caused confusion and controversy.
  2. Strategies and Tasks:
    Elisabeth Hendrickson’s ET Heuristics Cheatsheet is provided in the tool to help people think about strategy, and there are oblique strategies to help create test ideas using the Prime Me! button. There could be more resources added to help with strategy, and in fact a lot of the strategy work can be done outside of the tool. We could have done more feature-wise to help with strategy. Tasks can be pre-planned outside of the tool, or done on the fly and recorded with the @tasks tag, which is saved in session sheets. We could also have done more to support tasks.
  3. Risks and Rewards:
    There is a risk that you don’t have a productive session, or that your session sheet is woefully inadequate. The timer was a good motivator since you run the risk of running out of time, so there was a bit of a game in trying to beat the clock and have a focused, productive session. I designed that to be analogous to the “red bar green bar game” used in Test Driven Development tools. There is a reward inherent in getting your mission completed and having a good session sheet you can be proud to share, but it is completely intrinsic. You are also rewarded a bit with the Prime Me! button to help you get a new idea, or break a creativity logjam. We could have done a lot more to help people plan and manage risks, and added features to reward testers for using a good assortment of tags, or a peer-reference or reward system for great testing. The full bar shown once time has run out helps trigger an intrinsic sense of completion. As a tester, I did all I could in that session, and now I can move on to other things.
  4. Skill and Chance Events:
    Skilled testers often like to record what they discover, to have the freedom to investigate areas of high value, and to take pride in having a varied approach to their testing. However, there is no extrinsic reward for completion of session sheets. Giving sheets with more tags a higher score might have been a good option to add, to help people learn how to improve what they record. Outside of discovering bugs, chance events are brought in by the Prime Me! button. Like rolling dice, people can click the button until an oblique strategy jiggles their brain in a different direction. The Prime Me! button is the most popular feature of the tool and is still demonstrated at testing conferences by people like Jon Bach. People find it fun and useful.
  5. Cheating and Compliance:
    Cheating: Any team that uses a test case management system will see a high degree of cheating. People just get tired of the regression tests they run over and over and start clicking pass or fail to show progress. Those tools are very easy to cheat, but a session-based approach is much more difficult to cheat, because you have to show a description of a testing session. However, there is nothing to prevent people from saving an empty session sheet. I have seen this happen on overworked teams, and it wasn’t discovered for weeks. We could have looked at flagging incomplete or blank session sheets in the system so there is visibility on them prior to an audit, or encouraged people to do something about it within the tool (a rough sketch of such a check follows this list). Compliance was a big miss because we altered the original SBTM rules, which caused a lot of controversy and prevented more widespread adoption. We should have enforced the original rules by supporting the Bach SBTM format first, then added the ability to adapt it instead of approaching it from the other direction.
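
Here is a rough sketch of the kind of flagging check mentioned in the cheating point above. The file layout, field names and JSON format are assumptions for illustration, not Session Tester’s actual storage format.

```python
import json
from pathlib import Path

# Fields a session sheet should contain before it is considered complete.
REQUIRED_FIELDS = ["charter", "notes", "bugs", "tasks"]

def flag_incomplete_sheets(folder):
    """Return session sheets that are blank or missing required content."""
    flagged = []
    for sheet_path in Path(folder).glob("*.json"):
        sheet = json.loads(sheet_path.read_text())
        # Treat missing, empty, or whitespace-only fields as incomplete.
        empty = [f for f in REQUIRED_FIELDS if not str(sheet.get(f) or "").strip()]
        if empty:
            flagged.append((sheet_path.name, empty))
    return flagged

# Surface anything that needs attention well before an audit.
for name, missing in flag_incomplete_sheets("session_sheets"):
    print(f"Review {name}: missing or empty {', '.join(missing)}")
```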

It’s interesting to note that the aspects that made this tool popular and engaging can also be viewed in terms of gaming mechanics. A couple of them were there by design, but the others were just there because I was trying to make the app more engaging. However, if we had used this gamification structure during design of the tool, we would have had different results, and arguably a better tool, because it provides a more thorough structure. Areas of fun such as the Prime Me! button, and trying to automate some of the processes of SBTM helped make the experience more enjoyable for our users.

However, if you didn’t look at the tool from a gaming perspective, you wouldn’t notice that there are game mechanics at play within it. This is an example of using a gamification approach that goes beyond superficial leaderboards and rewards, and I encourage you to try it not only with your testing tools, but your processes and practices in testing. Use it as a system to analyze: What is working well? Where are you lacking? It’s a useful, systematic approach.

That analysis doesn’t look like a childish game does it? Bottom line: if you aren’t a gamer, you probably won’t notice the gaming aspects I bring into testing process and tools. If you are a gamer, you’ll notice the parallels right away, and will hopefully appreciate them. For both groups, hopefully gamification will be one tool we can use to help make testing more engaging and fun.

Software Testing Training and Gaming

If you spend time at conferences, or hire a well-known testing consultant to provide some training for your company, it’s likely that one or more of them have used game mechanics as teaching tools. In fact, they probably used them on you. You may not be aware that they did, but they used gaming mechanics to help you learn something important.

James Bach is famous for using magic tricks and puzzle solving as teaching tools. When I spent time with James learning how to be a more effective trainer, he told me that magic tricks are great teaching tools because we all love to be fooled. When we are fooled by something, we are entertained, and our mind is primed for learning about what we missed during the trick. That is an ideal state for the introduction of new ideas. If you spend any time with James or any of his adherents at a conference or peer workshop, you will likely be inundated with puzzles to solve. There is always a testing lesson to be learned at the end, and it is a novel way of helping people learn through solving a tangible problem. If you love solving puzzles and learning about testing, you’ll enjoy these experiences.

Dorothy Graham has a board game that she developed for testing tutorials. It’s a traditional style game that she created as a training aid, and Dot loves to deliver this course. The tutorial attendees have a lot of fun, and they learn some important lessons, but Dot admits she may even have more fun than they do. Dot loves training, and the game takes the entertainment value of learning up a few notches. I’ve taught next door to Dot and heard attendees as they play the game and learn with her, and I’ve seen their smiling faces during breaks and after the course. There is something inherently positive about using a real, physical game, designed for a specific purpose (and fun) in this way.

Fiona Charles and Michael Bolton also created a board game for a software development game workshop they facilitated in 2006. Fiona says:  “Our experience with the game highlighted the power of games and simulations in teaching: their ability to teach the participants (and the teachers) more than was consciously intended.”

Ben Simo uses a variation on a board game. I’m not going to give it away, since it’s highly effective, but he used it on me when I was moving from a dabbler in performance and load testing to working on some serious projects. Ben is an experienced and talented performance tester, and he has taught a lot of people how to do the job well. Ben spent hours with me using pieces from a board game, and posing problems for me and having me work on solving them. It was highly interactive, was chock full of performance testing analysis lessons, and we enjoyed working together on it. He would set up the scenario, enhanced by the board game, and I would work on approaches to solve it. I had about 15 pages of notes from this game play activity to take back and apply to my work on Monday. After playing this training game with Ben, I had much more confidence and I was able to spot far more performance anomaly patterns than I had prior to working with him. (We worked through this in a hotel lounge, and we got a lot of weird looks. We didn’t care, we were having fun! Besides, channeling Ralph Wiggum: I was “learnding”!)

James Lyndsay developed a fascinating course on exploratory testing, and with it, simple “black box test machines” that he built in Flash to aid experiential learning. The machines have no text on them, and they are difficult to start using because there are no outward signs of what they are for. This is done on purpose: each machine helps each class participant experience the lesson through their own exploration and discovery. This is one of my favorite game-like experiences in a testing training course. The machine exercises remind me of a puzzle adventure game. One of my favorites of this type is Myst. You have to explore and rely on your observations and clues to figure out what to do, and the possibilities for application and experience are wide open. James managed to create four incredibly simple programs that replicate this sort of game experience during training. Simply brilliant.

Those of you who follow Jerry Weinberg, or the many consultants who have been influenced by him, have likely worked through simulations during a workshop or tutorial. Much like an RPG (role playing game), attendees are organized around different goals, roles, activities and tasks to create an improvised simulation of a real-life problem. This involves drawing on improvisation and your “pretending” skills, and applying your problem solving techniques in a context other than work. Many people report having very positive experiences and “aha!” moments when learning from these sorts of activities.

Another theme in Jerry’s work with people is physical activity. Jerry gets people to move around, and he can influence the mood of the room by adding physical activity to a workshop. In the book The Gift of Time, Fiona Charles shares a poignant story about Jerry using a movement activity to calm a room full of people during a workshop when they first learned about the events of September 11. Michael Bolton has told me several stories of how Jerry changes the learning dynamic by getting people to move and work in different parts of the room, or by grouping people and having them move and work with others in creative combinations. Movement is a huge part of many games, especially sports and outdoor activities, and it gets different parts of our brain working. If you couple movement with learning concepts, it brings more of your senses together to help with concept retention. It is also associated with good health, a sense of well-being and fun.

(Speaking of experiential learning, pretty much everyone I have mentioned here, including me, and a lot more trainers you have heard of, has been influenced either directly or indirectly by Jerry Weinberg’s work on experiential learning. He even has a series of books on the topic on Leanpub: the first is Experiential Learning: Beginning, the second Experiential Learning: Inventing, and the third Experiential Learning: Simulation.)

There are other examples of trainers using game structures in software testing, and I’ve probably missed some obvious ones. (I haven’t even told you about the ones I use, but that doesn’t matter.) These are some good examples off the top of my head that demonstrate the use of game mechanics in teaching.

I wanted to point out that each of them uses game mechanics to teach serious lessons. While people may have fun, they come away with real-world skills that they can apply to their work as soon as they are back in the office.

Don’t be turned off by the term “game” when it comes to serious business – if you look at gaming with an open mind, you’ll see that it is all around us, being used in effective ways.

Did I miss a good example of gaming in software testing training? Please add it in the comments.

Edit: I just discovered an interesting post on games and learning on the testinggeek.com blog: Software Testing Games – Do They Help?

Test Automation Games

As I mentioned in a prior post: Software Testing is a Game, the two dominant approaches to the manual software testing game are scripted and exploratory testing. In the test automation space, we have other approaches. I look at three main contexts for test automation:

  1. Code context – e.g. unit testing
  2. System context – e.g. protocol or message level testing
  3. Social context – e.g. GUI testing

In each context, the automation approach, tools and styles differ. (Note: I first introduced this idea publicly in my keynote “Test Automation: Why Context Matters” at the Alberta Workshop on Software Testing, May 2005)

In the code context, we are dominated now by automated unit tests written in some sort of xUnit framework. This type of test automation is usually carried out by programmers who write tests to check their code as they develop products, and to provide a safety net to detect changes and failures that might be introduced as the code base changes over the course of a release. We’re concerned that our code works sufficiently well in this context. These kinds of tests are less about being rewarded for finding bugs (“Cool! Discovery!”) and more about providing a safety net for coding, which is a different high-value activity that can hold our interest.
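
To make that concrete, here is a minimal sketch of a code-context test, using Python’s built-in unittest module as the xUnit framework. The apply_discount function is a hypothetical stand-in for code under development, included only so the example runs on its own.

```python
# A minimal xUnit-style unit test using Python's unittest.
# apply_discount is a hypothetical function standing in for real code.
import unittest


def apply_discount(price, percent):
    """Toy implementation so the example is self-contained."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)


if __name__ == "__main__":
    unittest.main()
```

Each small check acts as part of the safety net: run the suite after every change, and a red result tells the programmer immediately where the net caught something.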

In the social context, we are concerned with automating the software from a user’s perspective, which means we are usually creating tests using libraries that drive a user interface, or GUI. This approach to testing is usually dominated by regression testing. People would rather get a tool to repeat the tests than deal with the repetition inherent in regression testing, so they use tools to try to automate that repetition. In other words, regression testing is often outsourced to a tool. In this context, we are concerned that the software works reasonably well for end users in the places that they use it, which are social situations. The software has emergent properties at this level, arising from the combination of code, system, and user expectations and needs. We frequently look to automate away the repetition of manual testing. In video game design terms, we might call repetition that isn’t very engaging “grinding”. (David McFadzean introduced this idea to me during a design session.)
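
For the social context, a GUI-level regression check might look something like this sketch using Selenium WebDriver in Python. The URL, element locators and expected page title are all hypothetical, and a real suite would also need explicit waits, test data setup, and a browser driver installed.

```python
# A sketch of a social-context (GUI-level) regression check with Selenium.
# The URL, locators and expected title are hypothetical illustrations.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")  # hypothetical application
    driver.find_element(By.NAME, "username").send_keys("demo_user")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # Regression expectation: a successful login lands on the dashboard.
    assert "Dashboard" in driver.title, "login regression: dashboard not shown"
finally:
    driver.quit()
```

Notice how tightly the check is tied to the UI’s structure; that coupling is where much of the maintenance cost of social-context automation comes from.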

The system context is a bit rarer: we test machine-to-machine interaction, or simulate various messaging or user traffic by sending messages to machines without using the GUI. There are integration paths and emergent properties that we can catch at this level that we would miss with unit testing, and by stripping the UI away, we can create tests that run faster and track down intermittent or other bugs that might be masked by the GUI. In video or online games, some people use tools to help enhance their game play at this level, sometimes circumventing rules. In the software testing world, we don’t have explicit rules against testing at this level, but we aren’t often rewarded for it either; people often prefer we look at the GUI or the code level of automation. However, you can gain a lot of efficiency by testing at this level, cutting out the slow GUI and exploring the emergent properties of a system that we don’t see at the unit level.
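
Here is a rough sketch of what a system-context check can look like: driving a service through its message or API layer with Python’s requests library, no browser involved. The endpoint, payload and response shape are assumptions for illustration only.

```python
# A sketch of a system-context test: exercise the application through its
# API/message layer instead of the GUI. Endpoint and payload are hypothetical.
import requests

BASE_URL = "https://example.test/api"  # hypothetical service


def test_order_roundtrip():
    payload = {"item": "widget", "quantity": 3}
    create = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
    assert create.status_code == 201

    order_id = create.json()["id"]
    fetch = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetch.status_code == 200
    assert fetch.json()["quantity"] == 3
```

Because nothing waits on a GUI, checks like this run fast enough to repeat at volume, which ties into the high volume approach discussed below.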

We also have other types of automation to consider.

Load and performance testing is a fascinating approach to test automation. As performance thought leaders like Scott Barber will tell you, performance testing is roughly 20% automation code development and load generation, and 80% interpreting results and finding problem areas to address. It’s a fascinating puzzle to solve: we simulate real-world or error conditions, look at the data, find anomalies and investigate the root cause. We combine a quest with discovery and puzzle solving game styles.
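
As a rough illustration of that 20/80 split, the sketch below covers only the load-generation side: it fires a few hundred concurrent requests at a hypothetical endpoint and records response times. The real work, digging through those timings for anomalies and root causes, happens afterwards.

```python
# A rough load-generation sketch: concurrent requests with timing capture.
# The URL, request count and worker count are illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.test/api/search?q=widgets"  # hypothetical endpoint


def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start


with ThreadPoolExecutor(max_workers=25) as pool:
    results = list(pool.map(timed_request, range(500)))

durations = sorted(duration for _, duration in results)
errors = sum(1 for status, _ in results if status >= 500)
print(f"median: {durations[len(durations) // 2]:.3f}s")
print(f"95th percentile: {durations[int(len(durations) * 0.95)]:.3f}s")
print(f"server errors: {errors}")
```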

If we look at Test-Driven Development with xUnit tools, we even get an explicit game metaphor: the “red bar/green bar game.” TDD practitioners I have worked with have used this to describe the red bar (test failed), green bar (test passed) and refactor (improve the design of existing code, using the automated tests as a safety net) cycle. I was first introduced to the idea of TDD being a game by John Kordyback. Some people argue that TDD is primarily a design activity, but it also has interesting testing implications, which I wrote about here: Test-Driven Development from a Conventional Software Testing Perspective Part 1, here: Test-Driven Development from a Conventional Software Testing Perspective Part 2, and here: Test-Driven Development from a Conventional Software Testing Perspective Part 3.
(As an aside, the Session Tester tool was inspired by the fun that programmers express while coding in this style.)
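
A tiny, hypothetical kata shows the rhythm: write the test first and watch it fail (red bar), write the simplest code that makes it pass (green bar), then refactor with the tests as a safety net. This sketch uses Python’s unittest; the fizzbuzz function stands in for real production code.

```python
# A tiny illustration of the red bar/green bar rhythm with an xUnit tool.
# The fizzbuzz kata is a deliberately trivial, hypothetical example.
import unittest


def fizzbuzz(n):
    # The simplest code that turns the bar green; refactoring comes next,
    # with the tests below acting as the safety net.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three_and_five(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_plain_numbers_pass_through(self):
        self.assertEqual(fizzbuzz(7), "7")


if __name__ == "__main__":
    unittest.main()
```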

Cem Kaner often talks about high volume test automation, which is another approach to automation. If you automate a particular set of steps, or a path through a system and run it many times, you will discover information you might otherwise miss. In game design, one way to deal with the boredom of grinding is to add in surprises or rewarding behavior when people repeat things. That keeps the repetitiveness from getting boring. In automation terms, high volume test automation is an incredibly powerful tool to help discover important information. I’ve used this particularly in systems that do a lot of transactions. We may run a manual test several dozen times, and maybe an automated test several hundred or a thousand times in a release. With high volume test automation, I will run a test thousands of times a day or overnight. This greatly increases my chance of finding problems that only appear in very rare events, and forces seemingly intermittent problems to show themselves in a pattern. I’ve enhanced this approach to mutate messages in a system using fuzzing tools, which helps me greatly extend my reach as a tester over both manual testing, and conventional user or GUI-based regression automated testing.
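
A minimal sketch of the idea, assuming a hypothetical message-level endpoint: run the same check thousands of times, randomly mutate the payload along the way, and record only the runs that misbehave so there is something concrete to investigate in the morning.

```python
# A sketch of high-volume test automation with a simple fuzzing twist.
# The endpoint, payload shape and run count are illustrative assumptions.
import json
import random
import string

import requests

URL = "https://example.test/api/orders"  # hypothetical endpoint


def mutated_payload():
    payload = {"item": "widget", "quantity": random.randint(-5, 10_000)}
    if random.random() < 0.1:  # occasionally corrupt a field entirely
        payload["item"] = "".join(random.choices(string.printable, k=50))
    return payload


failures = []
for run in range(10_000):
    payload = mutated_payload()
    try:
        response = requests.post(URL, json=payload, timeout=5)
        if response.status_code >= 500:  # server blew up: interesting!
            failures.append((run, payload, response.status_code))
    except requests.RequestException as exc:  # timeouts, dropped connections
        failures.append((run, payload, str(exc)))

with open("high_volume_failures.json", "w") as out:
    json.dump(failures, out, indent=2, default=str)
print(f"{len(failures)} suspicious runs out of 10,000")
```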

Similarly, creating simulators or emulators to generate real-world or error conditions that are impossible to create manually is another powerful way to enhance our testing game play. In fact, several of these other approaches are really about enhancing our manual testing game play. I wrote about “interactive automated testing” in my “Man and Machine” article and in Chapter 19 of the book “Experiences of Test Automation“. This was inspired by looking at alternatives to regression testing that could help testers be more effective in their work.
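
As one way to picture a simulator, here is a bare-bones stub service in Python that randomly injects latency and 503 errors, the kind of conditions that are hard to trigger on demand against a real backend. The port, failure rate and response body are illustrative assumptions.

```python
# A bare-bones fault simulator: a stub HTTP service that randomly injects
# delays and 503 errors so testers can exercise error handling paths.
# Port, failure rate and response body are illustrative assumptions.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer


class FlakyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(random.uniform(0.0, 2.0))  # simulate variable latency
        if random.random() < 0.2:  # roughly 1 in 5 requests fail
            self.send_response(503)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), FlakyHandler).serve_forever()
```

Point the application (or a tester) at the stub instead of the real dependency, and the “impossible” error conditions show up every few requests.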

In many cases, we attempt to automate what the manual testers do, and we fail because the tests are much richer when they are designed and exercised by humans. Instead of getting the computer to do things that humans are poor at (lots of simple repetition, lots of math, asynchronous actions, etc.), we try to do an approximation of what the humans do. Also, since the humans interact with a constantly changing interface, the dependency of our automation code on a changing product creates a maintenance nightmare. This is all familiar, so I looked at other options. Another inspiration was code one of my friends wrote to help him play an online game more effectively. He created code to help him do better in gaming activities, and then used that algorithm to build an incredibly powerful Business Intelligence engine. Code he wrote to enhance his manual game play was so effective in a gaming context that, when he applied it to a business context, it was just as powerful.

Software test automation has a couple of gaming aspects:

  1. It automates parts of the manual software testing game we don’t enjoy
  2. It is its own software testing game, based on our perceived benefits and rewards

In number 1 above, it’s interesting to analyze why we are automating something. Is it to help our team and our system quality goals, or are we merely trying to outsource something we don’t like to a tool, rather than looking at which alternatives fit our problem best? In number 2, there are a lot of fascinating areas to explore if we look at how we reward people who do automation, and map automation styles and approaches to our quality goals or quality criteria, not to mention helping our teams work more efficiently, discover more important information, and make the lives of our testers better.