Category Archives: exploratory testing

I SLICED UP FUN Mobile Testing Infographic

Twelve years after I created and shared the mobile testing mnemonic I SLICED UP FUN, I see that people are still using it and finding it valuable. I still use it myself on projects, so I decided to create an infographic to make it more shareable.

I call this a mnemonic because it is a memory aid to help me with my work. A catchy phrase helps me remember everything I need to think about to be thorough when testing mobile apps. Sometimes these are called heuristics, or listicles. Whatever you want to call it, it’s a thinking framework that helps quickly generate lots of useful testing ideas.

I SLICED UP FUN is a testing framework for mobile apps, but I use it for more than testing. As a product manager I use it in a generative or creative way as well, not only to help evaluate an existing app design, but to create something new.

If you haven’t used a thinking framework like this before, it’s quite simple to use. Read each section, and determine which ones apply to your product. If a section doesn’t apply, skip it and move to the next. Once you have a list that is applicable to your work, use each item in the list to generate ideas for that category. Once you have a few relevant ideas under that section, move to the next. Then review what you have, and see if there are gaps. Whenever you’re able, include other people to help you generate more and better ideas.

Once you have generated enough ideas, put them into action, whether it is testing, design, or other work you need to do.

You can download the infographic here:
ISLICEDUPFUN mobile testing infographic

Exploratory Test Adventures – Using Storytelling Games in Software Testing

I love to see creative work from people in the industry, and Martin Jansson always impresses me with his insatiable desire to learn, to do better and to take risks with ideas to push the craft forward. While I have been looking at gamification lately, it was exciting to learn that he and his colleagues had already been applying some of these ideas by looking at storytelling and games, and using some of those ideas to add more fuel to test idea generation during exploratory testing work.

Cem Kaner’s work on scenario testing describes a powerful approach to testing. It is a way to quickly create useful testing scenarios and ideas: we create a compelling story about the people who use our software, describing typical usage, possible outcomes, and the human activity patterns surrounding usage. One of the most interesting outcomes of this kind of work is that it puts us in the role of our end users, and helps us quickly identify problems that they are likely to encounter. It also helps us understand when our software actually delivers: we can tell project stakeholders that our software works within the narrative of real-life scenarios. So not only do we uncover important problems, we also provide information that validates what we have done. “Yes! It works in an emergency scenario we didn’t think of during requirements definition!”

There are a lot of ways that we can frame scenario tests to provide structure and help with creative test idea generation. Using gaming as an influence, Martin Jansson and Greger Nolmark wrote a paper on adding structure to scenarios during exploratory testing sessions using storytelling as a guide:
Exploratory Test Adventure – a Creative, Collaborative Learning Experience.

I got excited when I started reading this paper because any kind of creative structure that we can add to test idea generation helps us be more thorough, and helps create more and better ideas. As Martin says, “…by setting up scenes, just like in a roleplaying adventure (or RPG game), you and your testers will have an increased learning experience that lets you explore beyond regular boundaries, habits and thought patterns.”

I often lament that testing information focuses too much on the negative, when we should also tell stakeholders when the team has done a great job. As a designer and programmer, sometimes I get worn down by constant criticism and ask the testers to give me some positive feedback along with it. After all, critiquing isn’t all about the bad news. It sometimes feels hopeless if all we get is the negative, with no positive feedback at all. Testers, on the other hand, often feel like they are failing if they don’t find bugs and provide consistent negative feedback. But if we look at stories, some of them have happy endings. They have twists and turns and there are negatives, but there are also positives. Both are important to a story or game (all positive is too sappy and silly; all negative is too depressing), and both are important factors for determining whether a product or project has merit, or whether we are ready to ship. Storytelling is one mechanism that can help us get beyond mere bug hunting, and provide quality-related information, both positive and negative. This pleases me.

Check it out; it is another example of looking at game mechanics and applying one gamification aspect to software testing to help make testing more valuable, more effective, more creative, and hopefully, more fun.

Test Quests – Gamification Applied to Software Test Execution

I decided to analyze a game feature, the “quest”, which is used in popular video games, particularly MMORPGs. Quests have some compelling aspects for structuring testing activities. Jane McGonigal’s book “Reality is Broken” provided me with a solid analysis of quests, and how they can be adapted to real-life activities. Working from her example of a quest (ch. 3, p. 56), I created a basic test quest format (a rough code sketch of one such quest follows the list):

  1. Goal statement (what we intend to accomplish with our testing work)
  2. Why the goal matters (why are we testing this?)
  3. Where to go in the application (what technique or approach are we using to test?)
  4. Guidance (not detailed steps, but enough to help. Bonus points for using video or other rich media examples.)
  5. Proof of completion (how do you know when you are finished?)
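To make the format concrete, here is a minimal sketch of how a quest could be written down, with the five parts captured as fields on a simple class. The class name, field names, and the example quest are mine for illustration only; the format itself doesn’t prescribe any particular tooling.

// A lightweight way to capture a test quest; names and the example are illustrative.
public class TestQuest {
    String goal;              // 1. what we intend to accomplish with this testing work
    String whyItMatters;      // 2. why we are testing this
    String whereToGo;         // 3. technique or approach, and the area of the application
    String guidance;          // 4. pointers rather than detailed steps (links, notes, video)
    String proofOfCompletion; // 5. how we know we are finished

    TestQuest(String goal, String whyItMatters, String whereToGo,
              String guidance, String proofOfCompletion) {
        this.goal = goal;
        this.whyItMatters = whyItMatters;
        this.whereToGo = whereToGo;
        this.guidance = guidance;
        this.proofOfCompletion = proofOfCompletion;
    }

    public static void main(String[] args) {
        // A hypothetical quest for an e-commerce purchasing engine
        TestQuest checkoutQuest = new TestQuest(
            "Exercise the checkout flow with realistic user scenarios",
            "Purchasing is the highest-risk area identified in our risk workshop",
            "Scenario testing against the purchasing engine, through the web UI and the payment API",
            "Start from the three personas on the team wiki; record your sessions as you go",
            "A session sheet filed for each persona, bugs logged, and a debrief with the team");
        System.out.println("Quest: " + checkoutQuest.goal);
    }
}

Nothing about a quest requires code, of course; a wiki page or an index card with the same five headings works just as well.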

A quest is larger than a single testing mission (or a test case), but smaller than a test plan. It’s a way we can organize testing tasks to provide a sense of completion and interest in areas that require exploration and creativity. Just like in a video game, there are multiple ways to satisfy a quest. Once we have fulfilled a quest, which might take hours or days, depending on how it is created, we can move on to another one. It’s another way of organizing people, with the added bonus of leveraging years of game design success. Furthermore, modern work involves a lot of collaboration between people in different locations, using different technology to reach a common goal, and we need to adapt testing to match. Testing a mobile app in your lab, one tester at a time, won’t really provide useful testing for an app that requires real-time communication and collaboration for people all over the world. MMOs do a fabulous job of getting people to work hard and co-ordinate activities in a virtual world, and people have fun doing it. I decided to apply that to testing.

Where do quests fit? Think in terms of a hierarchy of activities:

  • test strategy and plan
  • risks that are mitigated through testing
  • different models of coverage that map to risk mitigation
  • test quests
  • sessions, tours, tasks
  • feedback and reporting

A good test approach will have more than one model of coverage (check I SLICED UP FUN for 12 mobile coverage models), and under each model of coverage, there will be multiple quests. Sometimes quests will be repeated when regressions are required.

So why add this structure?

One area I have worked on over the years is using structure and guidance to help manage exploratory testing efforts. In the past, test case management systems provided some measure of coverage and oversight, but they have little in the way of intrinsic value for testers. People get tired of repeating the same tests over and over, but management loves the metrics they provide, even though they are incredibly easy to cheat with. Furthermore, from a tester’s perspective there is an extrinsic reward that is inherent in the design of the tools, and they are easy to use. There is also a sense of completion: once I have run through X number of test cases, I feel like I have accomplished something.

With exploratory testing, the rewards are more intrinsic. The approach can be more fulfilling; I personally feel like I am approaching testing in a more effective way, and I can spend my time on high value activities. However, it is harder to measure coverage, and it is more difficult to direct people in areas where coverage is required without adding some guidance. There have been a lot of different approaches to adding structure to exploratory testing over the years to find a balance. Test quests are another approach to adding structure and finding that balance between the intrinsic rewards of pure exploratory testing, and the extrinsic rewards of scripted testing. This is an idea to provide a blend.

As many of you have heard me argue over the years, test cases and test case management systems are merely one form of guidance; there are others. In the exploratory testing community, you will see coverage outlines, checklists, mind maps, charter lists, session sheets, and media such as video demonstrations, along with all sorts of other alternatives. When it comes to managing exploratory testing, one of the first places we start is session-based test management. This approach helps us focus testing in particular areas, and provides a reviewable result, which makes our auditors and stakeholders happy. I’ve used it a lot over the years.

I’ve also used Bach’s General Functionality and Stability Procedure for over a decade to help organize exploratory testing, though through experience and unique projects and contexts, I have adapted and moved away from the orthodoxy where I saw fit. When I started analyzing why people on my teams have fun with testing, SBTM and Bach’s General Functionality and Stability Procedure were big reasons why. Even though I often use a much more lightweight version of SBTM than the one he created, people appreciate the structure. The General Functionality and Stability Procedure is a great example of guidance for analysis, exploration, and great things to do as testers.

The other side of fun on the teams I work on is related to humour, collaboration and technology. We often come up with nicknames, and divide testing up into teams and hold contests. Who can come up with the best test approach? Who recorded the best bug report video? Who found the most difficult-to-find bug last week? Which team has the most pop culture references in their work? Testing is filled with laughter, excitement and learning, and some good plain old-fashioned silly fun. We communicate constantly using technology to help stay up to speed on changes and progress, and often other team members want to get in on the action. Sometimes, it’s hard to get the coders to code, the product owners to product own, and the managers to manage, because everyone wants in on the fun. In the midst of this fun is incredibly valuable testing. Stakeholders are blown away by the productivity of the testing, the volume of useful information produced, the quality of the bugs found, and the detail in everything from bug reports to status reports and quality criteria. While there is laughter and fun, there is hard work going on. I learned why this is so effective by reading Jane McGonigal’s work.

In Reality is Broken, Jane McGonigal describes alternate reality games (ARGs). These are real-life activities that are gamified – they have a game-like structure applied to them. She mentions Chore Wars, and how gamifying something as mundane as household chores can turn it into a fun activity. She mentions that since cleaning the bathroom is a high-value activity in the game, she and her husband have to work hard to try to clean it before the other does. McGonigal explains that since there is a choice, and meaning attached to the task, people choose to do it under the mechanism of the game. It’s no longer that awful thing no one wants to do because it is unpleasant; framed within a game context, it is a highly sought-after quest or task to complete. You get points in the game, you get bragging rights, you get intrinsic rewards as well as the extrinsic clean bathroom. Amazing.

If we apply that to testing, how about using lessons from ARGs to gamify things like regression testing, or test data creation, or other maintenance tasks we don’t like doing? One way we can do this is to sprinkle these tasks within quests. You can only complete the quest by finishing up one of these less desirable tasks.

In Reality is Broken, McGonigal defines a game as having four traits: a goal, rules, a feedback system, and voluntary participation (p. 21). Working backwards, in exploratory testing, a lot of what we do is voluntary because testers have some degree of freedom to make decisions about what they are going to test, even if it is within narrow parameters of coverage. Furthermore, we can choose a different model of coverage to reach a goal. For example, I was working with an e-commerce testing team who were bored to death with testing the purchasing engine because they were following the same set of functional test scripts. To help them be more effective and to enjoy what they were doing, I introduced a new model of coverage to test the purchasing engine: user scenarios. Suddenly, they were engaged and interested and found bugs they had previously missed. I then helped them develop more models of coverage so that they could change their perspective and test the same thing, but with variation to keep them engaged and interested while still satisfying coverage requirements. As humans, we need to mix things up. Previously, they had no choice – they were told to execute the tests in the test case management system, and that was the end of it.

Feedback systems are often linked to bug reporting systems in testing. But I like to go beyond that. Bring in other people to test with you in pairs, trios or whatever combination to bring more ideas to the table. This isn’t duplicated testing, but a redoubling of brain power and effort. I also utilize instant messaging, IRC, and big visible charts to help encourage feedback across functional areas of teams.

Rules in testing are often related to what is dictated to us by managers, developers, and tradition. It boggles my mind how many so-called Agile programmers will demand their testers work in un-Agile ways, expecting them to create test plans, test cases and use test case management systems. When I ask the programmers if they would like to work that way, they usually say no. Well guess what, not many other homo sapiens like to work that way either. I prefer to have rules around approach. We have identified risks, and models of coverage to mitigate those risks, and we use people, tools and automation to help us reach our goals. Rather than count test cases and bugs, we rate our team on our ability to get great coverage and information that helps stakeholders make quality-related decisions.

Finally, a goal in testing needs to be project-specific. If you want to fail, just copy what you did last time on your test project; you will be unaware of any new risks or changes, and you’ll likely be blind to them. Every project has a goal, and we need a way to measure whether we did the right sort of work to help reach it. Rather than “run the regression tests, automate as many as possible, and if there is time, do other testing”, we have something specific that helps ensure we aren’t doing busy work, but creating value.

When it comes to quests, they can have this format as well. A goal, a feedback system, rules or parameters on where to test, and voluntary participation. As long as all the quests are fulfilled for a project, it doesn’t matter who did them.

It turns out that with my application of SBTM and Bach’s General Functionality and Stability Procedure, plus some zany fun and the use of technology to help socialize, report and record information, I was right next door to gamification. Using gamification as a guide, I hope to provide tools for others who also want to make testing effective and fun. A test quest is one option to try. Consider using avatars, fun names and anything that resonates with your team members to help make the activity more fun. Also consider rewards for difficult quests and tasks, such as a free meal, public kudos, or time off in lieu. Get creative and use as much or as little from the video game world as you like.

Some of my goals with test quests are:

  • Enough structure to provide guidance to testers so they know where to focus efforts
  • Not so much structure (like scripted test cases) that personal choice, creativity and exploration are discouraged or forbidden
  • Guidance and structure are lightweight so that they don’t become a maintenance burden the way our scripted regression test cases do (both manual and automated)
  • Testers get a sense of purpose and meaning in their work, and a sense of completion from finishing the set of tasks in a quest
  • Utilize tools (automated tests, automated tasks, simulators, high volume test automation, monitoring and reporting) to help boost the power of the testers and be more efficient and effective, and to do things no human could do on their own
  • Encourage collaboration and sharing information so that testers can provide feedback to other project team members on the quality of the products, but also get feedback on their own work and approaches
  • Encourage test teams to use multiple models of coverage (changing perspectives, using different testing techniques and tools) on a project instead of thinking of coverage as a singular thing
  • Utilize an effective gaming structure to augment reality and encourage people to have fun working hard at testing activities

I am encouraging testing teams to use this as a structure for organizing test execution to help make testing more engaging and fun. Feel free to add as many (or few) elements from video game quests as you see fit, and alter to match the unique personalities and goals of the people on your team. Or, study them and analyze how you organize your testing work for you and your teams. Does your structure encourage people to have fun and work hard at accomplishing something great? If not, you might learn something from how others have managed to get people to work hard in games.

Happy questing!

Software Testing is a Game

David McFadzean and I wrote an article for Better Software magazine called The Software Development Game, which was published in the September/October edition. We describe applying game-like structures and approaches to software development teams. I’ve been asked for another article on applying game-like approaches to testing, so look for more in this space.

In the meantime, here are some of my current thoughts.

How is software testing like a game? If we think of software development as a game, or a situation involving cooperation and conflict among different actors with different motivations and goals, software testing fits as a game within the larger software development game (SDG). While the SDG focuses more on policy, practices and determining an ideal mix of process, tools and technology, software testing is often an individual pursuit within a larger game format. There are two distinct styles of playing that game today: scripted testing and exploratory testing.

In the scripted testing game, we are governed by a test plan and a test case management system. We are rewarded for going through a management system, repeating test scripts and marking them as passed or failed. We are measured on our coverage of the tests that are stored in this test case management system. It is very important to stakeholders that we get as close to 100% coverage of the tests in that system as possible. This is a repetitive task, in a highly constricted environment. Anything that threatens the test team goal of high coverage of the tests within a test case management system tends to be discouraged, sometimes just implicitly. The metrics are what matter the most, so any activity, even more and better testing, will be discouraged if it takes away from reaching that objective. “You, tester, are rewarded on how many test cases you follow and mark as pass or fail in the system. If there is time after we get this number, you may look at other testing activities.”

In the exploratory testing game, testers are given some degree of self-determination. In fact, the latest version of the definition of exploratory testing emphasizes this:
“a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

The game player in exploratory testing is rewarded on the quality of their work, their approach, and the quality of the information that they provide. Exploratory testing works by being adaptable and relevant, so traditional ideas of metrics are often downplayed in favor of qualitative information. If a tester changes their mind when they are working on an assignment, and they have good reason to do so, and can make their case defensible, they are rewarded for changing things up because it helps make the product and our approach better. Coverage is important, but testers are rewarded on using multiple models of coverage (and discovering important new ones) rather than following the regression tests and counting that as complete coverage.

When I think about exploratory testing, I am reminded of the game Nomic, where changing the rules is considered a valid move. Instead of a group effort like Nomic, a tester has the freedom to change their own course, and to help improve the approach of the overall team by demonstrating excellent work. “You, tester, are rewarded for excellent testing, and for your skill and ability to find and report important problems. We don’t care if you work through one test, or a thousand tests, to get that information. The information you provide and how it improves the quality and value of our products is what we value.” It is deeply important for exploratory testers to be able to adapt to changing conditions, to uncover important information to help the team create value with their product, and for many, to earn the endorsement and respect of their peers.

Now think of game play activities. Both scripted and exploratory testing approaches involve repetitive work. In the gamification space, we can explore these activities, and how we can apply game concepts to enhance the work. One area we can explore is video games. Imagine we are playing a massively multiplayer online role-playing game (MMORPG). Some players like to perform repeated tasks that are unrelated to individual and shared game objectives. They like to repeat tasks to unlock features related to their character, such as acquiring new outfits or accessories.

Other players are very achievement-oriented – they try to reach individual goals, gain more points and unlock more achievement-based features in the game, and learn that when they co-operate with others, they can achieve even more. One player type gets rewarded more for repeated individual tasks, while the other gets rewarded for trying to create achievement or quest value for themselves and others. The two types blend, as any player will have a subgoal of changing the appearance of their character, or acquiring accessories or currency to enhance quest or other achievement game play. People who are less adventurous and are more caught up in acquiring personal character attributes will also take part in quest events.

If you play these games and try to collaborate, you will often find other players who are at different points on this spectrum. It can be frustrating when your team members are more interested in building up their own character’s numbers than in helping the team as a whole. For example, if you are off on a raid, and one or more team members constantly stop to pick up health or currency points, your team will be let down and may have to regroup. It can be difficult to work together, since the perceived value of the game and the reward structures don’t match. One group wants personal metrics and accessories, while the other cares more about non-character pursuits and the respect of their peers.

In the software testing game, it is difficult to play an exploratory testing game in a system that is rigid, restrictive, and rewards testers for playing a scripted game. Conversely, those who are used to being rewarded on coverage counts in test case management systems can feel very threatened when they are instead rewarded on the quality of their testing skills and the information they provide. Testing tradition and common practice tend to think of testing as one sort of game, with a test case management system at its centre. What they fail to realize is that there are many ways of playing this game, not just one. I want to explore these other alternatives in more detail through a gaming lens.

Testers and test managers, there is a lot more to our world than scripted testing and test case management tools. They are part of an ancient game (from at least the 1970s) that is struggling and showing its age in the face of the new approaches to work that mobile technology and other styles of work are bringing. Modern testers work hard to play a game that fits the world around us. Are you measuring and rewarding them for playing the right kind of game, or the one you are most used to? Like the video game character who has gathered metrics and accessories, are you gathering these at the expense of the rest of the team’s goals, or are these measures and rewards helping the entire software development team create a better product?

Watch for more in this space as I explore software testing from the perspective of gamification, game theory and other game activities more deeply.

Content Pointer: Documenting Exploratory Testing

My latest testing article is on documentation and exploratory testing. You can read an online version here: Documenting Exploratory Testing. (If you don’t have a login, click the Preview button to read it.) You can order a hard copy from here.

I get this question frequently:

How do we document when we are exploratory testing?

This piece describes some of the things I do on testing projects.

New Exploratory Testing Tutorials

I’ve developed some new ET tutorials. The newest is “Managing Exploratory Testing”, addressing questions I hear the most from managers. Since I’ve done a lot of this sort of training already, it made sense to start offering this publicly. I’ll be teaching it at EuroSTAR and STAR West this fall.

The other tutorial is an evolved version of my take on “Exploratory Testing Explained”. Due to questions and interests of people who have taken the course, it has evolved into a hands-on, experiential workshop: “Exploratory Testing Interactive”. I’m also teaching it at STAR West.

If you’d like to work with me to get a glimpse into my take on exploratory testing, and learn some new skills, drop in at one of the conferences. If you don’t or can’t attend these conferences, you can also consider bringing me in to work with you and your team one-on-one.

Descriptive and Prescriptive Testing

While many of us drone on about scripted testing vs. exploratory testing, the reality is that on real projects we tend to execute testing with a blend of both. It often feels lop-sided – on many projects, scripted testing is the norm, and exploratory testing isn’t acknowledged or supported. On others, the opposite can be true. I’ll leave the debate on this topic to others – I don’t care what you do on your projects to create value. I would encourage you to try some sort of blend, particularly if you are curious about exploratory testing. However, I’m more interested in the styles themselves, and why some people are attracted to one side of the debate or the other.

Recently, David Hussman and I have been collaborating, and he pointed out the difference between “prescriptive” and “descriptive” team activities. A prescriptive style is a preference towards direction (“do this, do that”) while a descriptive style is more reflective (“this is what we did”). Both involve desired outcomes or goals, but one attempts to plan the path to the outcome in more detail in advance, and the other relies on trying to reach the goals with the tools you have at hand, reflecting on what you did and identifying gaps, improving as you go, and moving towards that end goal.

With a descriptive style of test execution, you try to reach a goal using lightweight test guidance. You have a focus and more coarse-grained support for it than what scripted testing provides. (The guidance is there, it just isn’t as explicit.) As you test, and when you report testing, you describe things like coverage, what you discovered, bugs, and your impressions and feelings. With a prescriptive style of testing, you are directed by test plans and test cases for testing guidance, and follow a more direct process of test execution.

Scripted testing is more prescriptive (in general) and exploratory testing is more descriptive (in general). The interesting thing is that both styles work. There are merits and drawbacks to both. However, I have a strong bias towards a descriptive style. I tend to prefer an exploratory testing approach, and I can implement this with a great deal of structure and traceability, utilizing different testing techniques and styles. I prefer the results the teams I work with get when they use a more descriptive style, but there are others who have credible claims that they prefer to do the opposite. I have to respect that there are different ways of solving the testing problem, and if what you’re doing works for you and your team, that’s great.

I’ve been thinking about personality styles and who might be more attracted to different test execution styles. For example, I helped a friend out with a testing project a few weeks ago. They directed me to a test plan and classic scripted test cases. Since I’ve spent a good deal of time on Agile teams over most of the past decade, I haven’t been around a lot of scripted tests for my own test execution. Usually we use coverage outlines, feature maps, checklists, and other sources of information that are more lightweight to guide our testing. It took me back to the early days of my career and it was kind of fun to try something else for a while.

Within an hour or two of following test cases, I got worried about my mental state and energy levels. I stopped thinking and engaging actively with the application and I felt bored. I just wanted to hurry up and get through the scripted tests I’d signed on to execute and move on. I wanted to use the scripted test cases as lightweight guidance or test ideas to explore the application in far greater detail than what was described in the test cases. I got impatient and I had to work hard to keep my concentration levels up to do adequate testing. I finally wrapped up later that day, found a couple of problems, and emailed my friend my report.

The next day, mission fulfilled, I changed gears and used an exploratory testing approach. I created a coverage outline and used the test cases as a source of information to refer to if I got stuck. I also asked for the user manual and release notes. I did a small risk assessment and planned out different testing techniques that might be useful. I grabbed my favorite automated web testing tool and created some test fixtures with it so I could run through hundreds of tests using random data very quickly. That afternoon, I used my lightweight coverage to help guide my testing and found and recorded much more rich information, more bugs, and I had a lot of questions about vague requirements and inconsistencies in the application.
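As an aside for anyone who hasn’t built that kind of fixture: here is a minimal sketch of the idea, assuming a plain HTTP form endpoint and using Java’s built-in HttpClient rather than whichever web testing tool I actually reached for. The URL and field names are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Random;

// A rough test fixture: post hundreds of requests with random data at a form
// endpoint and flag anything that doesn't come back with a 200. The endpoint
// and parameter names are hypothetical.
public class RandomDataFixture {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        Random random = new Random();

        for (int i = 0; i < 500; i++) {
            String body = "name=user" + random.nextInt(1_000_000)
                        + "&quantity=" + random.nextInt(10_000);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/orders"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                System.out.println("Unexpected status " + response.statusCode() + " for: " + body);
            }
        }
    }
}

The point isn’t the particular tool; it’s that a few minutes of scripting let me run far more variations than I could type by hand, while leaving me free to explore whatever the results suggested.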

What was different? The test guidance I used drew on more sources of information and models of coverage, and it wasn’t an impediment to my thinking about testing. It put the focus on my test execution, and I used tools to help do more, better, faster test execution to get as much information as I could, in a style that helps me implement my mission as a tester. I had a regression test coverage outline to repeat what needed to be repeated, and I had other outlines and maps related to requirements, features, user goals, etc. that helped direct my inquisitive mind and helped me be more consistent and thorough. I used tools to support my ideas and to help me extend my reach rather than trying to get them to repeat what I had done. I spent more time executing tests, many different kinds of tests using different techniques, than managing test cases, and the results reflected that.

My friend was a lot happier with my work product from day 2 (using a descriptive style) than on day 1 (using a prescriptive style). Of course, some of my prescriptive friends could rightly argue that it was my interpretation and approach that were different than theirs. But, I’m a humanist on software projects and I want to know why that happens. Why do I feel trapped and bored with much scripted testing while they feel fearful doing more exploratory testing? We tend to strike a balance somewhere in the middle on our projects, and play to the strengths and interests of the individuals anyway.

So what happened with my testing? Part of me thinks that the descriptive style is superior. However, I realize that it is better for me – it suits my personality. I had a lot of fun and used a lot of different skills to find important bugs quickly. I wasn’t doing parlor-trick exploratory testing and finding superficial bugs – I had a systematic, thorough, traceable approach. More importantly for me, I enjoyed it thoroughly. Even more importantly, my friend, the stakeholder on the project who needed me to discover information they could use, was much happier with what I delivered on day 2 than on day 1.

I know other testers who aren’t comfortable working the way I did. If I attack scripted testing, they feel personally attacked, and I think that’s because the process suits their personality. Rather than debate, I prefer we work using different tools and techniques and approaches and let our results do the talking. Often, I learn something from my scripting counterpart, and they learn something from me. This fusion of ideas helps us all improve.

That realization started off my thinking in a different direction. Not in one of those “scripted testing == bad, exploratory testing == good” debates, but I wondered about testing styles and personality and what effect we might have when we encourage a style and ignore or vilify another. Some of that effect might be to drive off a certain personality type who looks at problems differently and has a different skill set.

In testing, there are often complaints about not being able to attract skilled people, or losing skilled people to other roles such as programming or marketing or technical writing. Why do we have trouble attracting and keeping skilled people in testing? There are a lot of reasons, but might one be that we discourage a certain kind of personality type and related skill set by discouraging descriptive testing styles like exploratory testing? And on some of our zealous ET or Agile teams, are we marginalizing worthwhile people who are more suited to a prescriptive style of working?

We also see this in testing tools. Most are geared towards one style of testing, a prescriptive model. I’m trying to help get the ball rolling on the descriptive side with the Session Tester project. There are others in this space as well, and I imagine we will see this grow.

There has to be more out there, testing-style-wise, than exploratory testing versus scripted testing, and manual versus automated testing. I personally witness a lot of blends, and encourage blends of all of the above. I wonder if part of the problem with the image of testing, and with attracting talented people, is how we insist testing must be approached. I try to look at using all the types of testing we can use on projects to discover important information and create value. Once we find the right balance, we need to monitor and change it over time to adjust to the dynamics of projects. I don’t understand the inflexibility we often display towards different testing ideas. How will we know if we don’t try?

What’s wrong with embracing different styles and creating a testing mashup on our teams? Why does it have to be one way or the other? Also, what other styles of testing, beyond exploratory approaches, are descriptive? What other prescriptive styles are there besides scripted testing (test plan and test case driven)? I have some ideas, but email me if you’d like to see your thoughts appear in this blog.

Exploratory Testing: More than Superficial Bug Hunting

Sometimes people define exploratory testing (ET) quite narrowly, such as only going on a short-term bug hunt in a finished application. I don’t define ET that narrowly; in fact, I do ET during development whether I have a user interface to use or not. I often ask for and get some sort of testing interface that I can use to design and execute tests around before a UI appears. I’ll also execute traditional black-box testing through a UI as a product is being developed, and at the end, when development feels it is “code complete”. I’m not alone. Cem Kaner mentioned this on the context-driven testing mailing list, which prompted this blog post.
Cem wrote:

To people like James and Jon Bach and me and Scott Barber and Mike Kelly and Jon Kohl, I think the idea is that if you want useful exploratory testing that goes beyond the superficial bugs and the ones that show up in the routine quicktests (James Whittaker’s attacks are examples of quicktests), then you want the tester to spend time finding out more about the product than its surface and thinking about how to fruitfully set up complex tests. The most effective exploratory testing that I have ever seen was done by David Farmer at WordStar. He spent up to a week thinking about, researching, and creating a single test-which then found a single showstopper bug. On this project, David developed exploratory scenario tests for a database application for several months, finding critical problems that no one else on the project had a clue how to find.

In many cases, when I am working on a software development project, a good deal of analysis and planning go into my exploratory testing efforts. The strategies I outline for exploratory testing reflect this. Not only can they be used as thinking strategies in the moment, at the keyboard, testing software, but they can guide my preparation work prior to exploratory testing sessions. Sometimes, I put in a considerable amount of thought and effort modeling the system, identifying potential risk areas and designing tests that yield useful results.

In one case, a strange production bug occurred in an enterprise data aggregation system every few weeks. It would last for several days, and then disappear. I spent several days researching the problem, and learned that the testing team had only load tested the application through the GUI, while the real levels of load came in through the various aggregation points communicating with the system. I had a hunch that there were several factors at work here, and it took time to analyze them. It took several more days working with a customer support representative who had worked on the system for years before I had enough information to work with the rest of the team on test design. We needed to simulate not only the load on the system, but the amount of data that might be processed and stored over a period of weeks. I spent time with the lead developer and the lead system administrator to create a home-grown load generation and simulation tool we could run indefinitely to simulate production events and the related network infrastructure.

While the lead developer was programming the custom tool and the system administrator was finding old equipment to set up a testing environment, I created test scenarios against the well-defined, public Web Services API, and used a web browser library that I could run in a loop to help generate more light load.
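For readers who want a picture of what that “light load” loop looked like, here is a minimal sketch using HtmlUnit as a stand-in headless browser library; the library choice and the URLs are assumptions, since the post doesn’t name either.

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

// A background "light load" loop: walk a headless browser through a few key
// pages over and over while the heavier simulation runs. The library and URLs
// are placeholders for whatever the real project used.
public class LightLoadLoop {
    public static void main(String[] args) throws Exception {
        String[] pages = {
            "http://test-env.example.com/dashboard",
            "http://test-env.example.com/reports/daily"
        };
        try (WebClient webClient = new WebClient()) {
            while (true) {
                for (String url : pages) {
                    HtmlPage page = webClient.getPage(url);
                    System.out.println(url + " -> " + page.getTitleText());
                }
                Thread.sleep(5_000); // a gentle pace; this is background load, not a stress test
            }
        }
    }
}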

Once we had completed all of these tasks, started the new simulation system, and waited for the data and traffic statistics to reach the levels I wanted, I began testing. After executing our first exploratory test, the system fell over, and it took several days for the programmers to create a fix. During this time, I did more analysis and we tweaked our simulation environment. I repeated this with the help of my team for several weeks, and we found close to a dozen show-stopping bugs. When we were finished, we had an enhanced, reusable simulation environment we could use for all sorts of exploratory testing. We also figured out how to generate the required load in hours rather than days with our home-grown tools.

I also did this kind of thing with an embedded device that was under development. I asked the lead programmer to add a testable interface into the new device he was creating firmware for, so he added a telnet library for me. I used a Java library to connect to the device using telnet, copied all the machine commands out of the API spec, and wrapped them in JUnit tests in a loop. I then created code to allow for testing interactively, against the API. The first time I ran a test with a string of commands in succession in the IDE, the device failed because it was writing to the input, and reading from the output. This caused the programmer to scratch his head, chuckle, and say: “so that’s how to repeat that behavior…”
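Here is a minimal sketch of what that kind of telnet-driving JUnit test can look like. I’m using Apache Commons Net’s TelnetClient as the telnet library, and the host, port, command, and expected response are placeholders; the post doesn’t name the library or the device’s actual command set.

import java.io.InputStream;
import java.io.PrintStream;

import org.apache.commons.net.telnet.TelnetClient;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Drive a device's telnet interface from JUnit: send a command from the API
// spec in a loop and check each response. Host, port, command, and expected
// text are all illustrative.
public class DeviceCommandTest {

    @Test
    public void statusCommandRespondsOk() throws Exception {
        TelnetClient telnet = new TelnetClient();
        telnet.connect("192.168.1.50", 23);
        try {
            PrintStream out = new PrintStream(telnet.getOutputStream(), true);
            InputStream in = telnet.getInputStream();

            for (int i = 0; i < 100; i++) {            // repeat the command in a loop
                out.println("STATUS");                 // a command copied from the API spec
                byte[] buffer = new byte[256];
                int read = in.read(buffer);
                String response = read > 0 ? new String(buffer, 0, read) : "";
                assertTrue("Unexpected response: " + response, response.contains("OK"));
            }
        } finally {
            telnet.disconnect();
        }
    }
}

Run interactively from the IDE, a test like this doubles as an exploratory testing console: change the command, rerun, and watch what the device does.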

It took several design sessions with the programmer, and a couple of days of my time, to set up an environment to do exploratory testing against a non-GUI interface using Eclipse, a custom Java class, and JUnit. Once that was completed, the other testers used it interactively within Eclipse as well. We also used, to great effect, a simulator that a test toolsmith had created for us, and were able to do tests we just couldn’t do manually.

We also spent about a week creating test data that we piped in from real-life scenarios (which was a lot of effort to create as well, but well worth it). We learned a good deal from the test data creation about the domain the device would work in.

Recently, I had a similar experience – I was working with a programmer who was porting a system to a Java Enterprise Edition stack and adding a messaging service (JMS). I had been advocating testability (visibility and control – thanks James) in the design meetings I had with the programmer. As a result, he decided to use a topic reader on JMS instead of a queue so that we can see what is going on more easily, and added support for the testers to see what the Object-Relational Mapping tool (JPA) is automatically generating, map- and SQL-wise, at run-time. (By default, all you see is the connection information and annotations in Java code, which doesn’t help much when there is a problem.)
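To give a flavour of why the topic matters for testers, here is a minimal sketch of a tester-side listener that just watches the messages flowing by. The broker URL, topic name, and the choice of ActiveMQ as the JMS provider are assumptions for illustration only; the post doesn’t say which provider we used.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

// A tester-side topic watcher: subscribe to the topic and print whatever the
// application publishes. Broker URL and topic name are hypothetical.
public class TopicWatcher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("orders.events");
        MessageConsumer consumer = session.createConsumer(topic);

        consumer.setMessageListener((Message message) -> {
            try {
                if (message instanceof TextMessage) {
                    System.out.println("Observed: " + ((TextMessage) message).getText());
                } else {
                    System.out.println("Observed non-text message: " + message);
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        Thread.sleep(Long.MAX_VALUE); // keep watching until killed
    }
}

With a queue, an extra consumer like this would compete with the application for messages; with a topic, every subscriber sees its own copy, which is what makes it useful as a passive test probe.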

He also created a special testing interface for us, and provided me with a simple URL that passes arguments to begin exercising it. For my first test, I used JMeter to send messages to it asynchronously, and the system crashed. This API was so far below the UI that it would be difficult to do much more than scratch the surface of the system if you only tested through the GUI. With this testable interface, I could use several testing tools as simulators to help drive my and other testers’ ET sessions. Without the preparation through design sessions, we’d be trying to test this through the UI, and wouldn’t have near the power or flexibility in our testing.

Some people complain that exploratory testing only seems to focus on the user interface. That isn’t the case. In some of my roles, early in my career, I was designated the “back end” tester because I had basic programming and design skills. The less technical testers who had more knowledge of the business tested through the UI. I had to get creative to ask for and use testable interfaces for ET. I found a place in the middle that facilitated integration tests, while simulating a production environment, which was much faster than trying to do all the testing through the UI.

I often end up working with programmers to get some combination of tools to simulate the kinds of conditions I’m thinking about for exploratory testing sessions, with the added benefit of hitting some sort of component in isolation. These testing APIs allow me to do integration tests in a production-like environment, which complements the unit testing the programmers are doing, and the GUI-level testing the black-box testers are doing. In most cases, the programmers also adopt the tools and use them to stress their components in isolation, or, as I often like to do, to quickly generate test data through a non-user interface while still exercising the path the data will follow in production. This is a great way to smoke test minor database changes, or database driver or other related tool upgrades. Testing something like this through the UI alone can take forever, and many of the problems that are obvious at the API level seem intermittent through the UI.

Exploratory testing is not limited to quick, superficial bug hunts. Learning, analyzing, test idea generation and test execution are parallel activities, but sometimes we need to focus harder on the learning and analyzing areas. I frequently spend time with programmers helping them design testable interfaces to support exploratory testing at a layer behind the GUI. This takes preparation work, including analysis, design, and testing of the interface itself, which all feeds into my learning about the system and into the test ideas I may generate. I don’t do all of my test idea generation on the fly, in front of the keyboard.

In other cases, I have tested software that was developed for very specialized use. In one case, the software was developed by scientists to be used by scientists. It took months to learn how to do the most basic things the software supported. I found some bugs in that period of learning, but I was able to find much more important bugs after I had a basic grasp of the fundamentals of the domain the software operated in. Jared Quinert has also had this kind of experience: “I’ve had systems where it took 6 months of learning before I could do ‘real’ testing.”

Update

My blog has been quiet lately, and several of you have asked me what I’ve been up to. Since I last posted I’ve:

  • Created an all new Exploratory Testing course from scratch, and presented it for a corporate client, with more on the way.
  • Created a condensed version of the course for STARWEST that I’ll be presenting as a tutorial.
  • Written an article on Interactive Automated Testing for Better Software magazine (watch for it in the December issue.)
  • Presented a short Exploratory Testing Explained talk for two Agile user groups. One in Edmonton, and another for a conference in Vancouver.
  • Taken on a technical editing role with Better Software magazine. (Have great article ideas on testing, software development or management? Let me know – we’re always looking for great content.)
  • Had rewarding client work helping teams develop ET skills and lightweight test automation solutions.

I’ll be presenting at STARWEST in Anaheim later this month, and in Sweden at Oredev in November. If you see me at either conference, stop and say hi.