
Creating Great Storytelling to Enhance Software Testing Scenarios

Recently, I wrote about Using Storytelling Games in Software Testing, and pointed you to a paper by Martin Jansson and Greger Nolmark. Now I want to give you some tips on creating great storytelling for your testing projects.

First of all, check out Cem Kaner’s work on scenario testing: An Introduction to Scenario Testing. Pay special attention to the CHAT (Cultural-Historical Activity Theory) model he talks about. For more on CHAT and testing, read this paper: Putting the Context in Context-Driven Testing (an Application of Cultural Historical Activity Theory). Note especially the descriptions of networks of activity, and of tensions; these are vital for constructing variations and different forces within our storytelling. Both pieces are foundational and worth the effort to dig into.

Now, I want you to read Hans Buwalda’s article on Soap Opera Testing. This is a nice variation on scenario testing. Buwalda uses television soap operas as inspiration for story arcs, structure, and variation. Remember, there are lots of variations on a theme in testing, as well as in real life! Further to that, look into testing tours. Cem Kaner has a blog post with a link or two to help get some background info: Testing tours: Research for Best Practices?

Soap opera tests, testing tours, and test scenarios are a great starting point for creating good testing stories.

Next, read up on personas in user experience work. Jenny Cham has a really nice description, with lots of helpful links, on creating personas here: Creating design personas. Be sure to explore the links in her post; she has great advice there. I wrote a position paper about using UX personas in testing years ago, referenced in this blog post (I will have to dig it up; the link is dead). Elisabeth Hendrickson introduced me to this idea, but she recommended using extreme personas such as cartoon characters. I prefer the standard UX methods pioneered by people like Alan Cooper, but cartoon or other characters are a great place to start, especially if you feel stuck. Personas are a great way to start developing relevant characters for your story. What are their motivations when they use our software? What are their fears? What are their cares, worries, and distractions?
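To make that concrete, here is a minimal sketch of a persona captured as a simple data structure for test planning. The fields and the example persona are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A lightweight test persona, following the questions above."""
    name: str
    role: str
    motivations: list[str] = field(default_factory=list)
    fears: list[str] = field(default_factory=list)
    distractions: list[str] = field(default_factory=list)

# A hypothetical persona, for illustration only.
mary = Persona(
    name="Mary",
    role="accounts clerk with basic computer skills",
    motivations=["finish invoicing before 5 pm"],
    fears=["losing unsaved work", "looking slow in front of colleagues"],
    distractions=["phone calls mid-task", "meetings that interrupt a workflow"],
)
```

Writing down motivations, fears, and distractions like this keeps them visible as raw material when you design scenarios around the persona.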

Next, I want you to read this piece on telling a great story by a famous author: Kurt Vonnegut at the Blackboard. (I am getting to the gamification side of this project; I asked Andrzej Marczewski for good references on storytelling in games, and this was the first link he sent me. Thanks Andrzej!) Notice the different options for structuring a good story. In testing, we can use different structures for the same scenario if we think about activity patterns, tensions, characters, and variations during real-life product use. Several versions of one story will yield different kinds of important information and observations. Vonnegut provides a simple framework for story creation that we can easily adapt and apply.

Finally, I want you to look at storytelling in games. Andrzej talks about it here: I want to experience games not just play them. Notice that within the context of a well-designed game, there is a sense of cause and effect: decisions made in one place can impact things in other areas of the game. That’s just like real life, and it is important to add those dimensions to the stories we use for testing. Variations and dimensions have different effects in a system, and they are rewarding to exercise. Now read this Gamasutra piece by Ernest Adams: The Designer’s Notebook: Three Problems for Interactive Storytellers, Resolved. The points about character amnesia, internal consistency, and narrative flow are pure gold for testers. We often arrive in a system without really knowing what is going on, especially at first. However, our customers are also starting from scratch when they use our app for the first time. These problems are areas we should address when creating stories to test around.

There is also a lot of really useful information in Environmental Storytelling: Creating Immersive 3D Worlds Using Lessons Learned from the Theme Park Industry by Don Carson, particularly the importance of incorporating environmental conditions (especially for you mobile testers!) and the idea of an all-encompassing world rather than a single, linear story.

Andrzej also recommends reading Uncle Computer, Tell Me A Story, and Story Structure 104: The Juicy Details.

As testers, we can incorporate more than a linear scenario into our work. We can add so much more depth to our test approach using stories and worlds. Story development in games is remarkably similar to the storytelling we need to do in testing. There is a lot to be learned about creating virtual worlds, and stories within them, to help change our perspective, explore variations, and make important discoveries about the software and systems we test. We can leverage the works referenced above to create something new and powerful.

Some final points to put this all together:

  • Combine the elements from each of the areas above to create a great story, or better yet, sets of stories (the sketch after this list shows one way to generate variations)
  • Use structure to create real-life conditions: different people, different motivations, different environmental conditions, and change
  • Add plot twists, surprises, and ulterior motives, and look for unintended consequences in systems and people
  • Don’t stop at one scenario – create variations on a theme, and change the setting, or the entire world you have created, to help change your perspective
  • Introduce different characters – are they interrupting? Helping?
  • Create a beginning, a middle, and an end
  • Move beyond happy endings – also try leaving things unresolved, or ending on a bad note
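To illustrate the first point about creating sets of stories, here is a minimal sketch that crosses personas, environments, and plot twists to generate scenario variations. All of the ingredient values are illustrative assumptions; substitute your own project’s people and conditions:

```python
from itertools import product

# Illustrative story ingredients; swap in your own project's personas,
# environments, and plot twists.
personas = ["a new hire on their first day", "a power user in a hurry"]
environments = ["on flaky train wifi", "in a quiet office after hours"]
plot_twists = ["is interrupted by a phone call mid-task",
               "has their session expire at the worst moment"]

# Cross the ingredients to produce one-line story seeds for test sessions.
for persona, environment, twist in product(personas, environments, plot_twists):
    print(f"{persona.capitalize()}, {environment}, {twist}.")
```

This tiny example yields eight story seeds; each one can grow into its own scenario, with a beginning, a middle, and an end.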

I have compiled several foundational concepts to help shape your storytelling; the rest is up to you. How you combine them into something useful will depend on you and your team. You have an opportunity to create rich perspectives to kickstart your testing efforts.

Happy storytelling!

User Profiles and Exploratory Testing

Knowing the User and Their Unique Environment

As I was working on the Repeating the Unrepeatable Bug article for Better Software magazine, I noticed consistent patterns in the cases where I have found a repeatable case for a so-called “unrepeatable” bug. One pattern that surprised me was how often I do user profiling. Often, one tester or end user sees a so-called unrepeatable bug more frequently than others do. A lot of my investigative work in these cases involves trying to get inside that end user’s head (often a tester’s) to emulate their actions. I have learned to spend time with the person to get a better perspective on not only their actions and environment, but also their ideas and motivations. The resulting user profiles fuel ideas for exploratory testing sessions to track down difficult bugs.

Recently I was assigned the task of tracking down a so-called unrepeatable bug. Several people with different skill levels had worked on it with no success. With a little time and work, I was able to get a repeatable case. Afterwards, when I did a personal retrospective on the assignment, I realized that I was creating a profile of the tester who had come across the “unrepeatable” cases that the rest of the dev team did not see. Until that point, I hadn’t realized to what extent I was modeling the tester/user when I was working on repeating “unrepeatable” bugs. My exploratory testing for this task went something like this.

I developed a model of the tester’s behaviour through observation and some pair testing sessions. Then I started working on the problem, and could see the failure only sporadically. One thing I noticed was that this tester did installations differently than others did. I also noted which builds they were using, and that there was more of a time delay between their actions than with other testers (they often left tasks mid-stream to go to meetings or work on other tasks). Knowing this, I used the same builds and the same installation steps as the tester, and figured out that part of the problem had to do with a Greenwich Mean Time (GMT) offset that was set incorrectly on the embedded device we were testing. Upon installation, the device’s clock was set behind our Mountain Time offset, so the system time was back in time. This caused the system to reboot in order to reset the time (known behavior, working properly). But, as the resulting error message told me, there was also a kernel panic in the device. With this knowledge, I could repeat the bug about two times out of five, but it still wasn’t consistent.

I spent time in that tester’s work environment to see if there was something else I was missing. I discovered that their test device had connections that weren’t fully seated, and that they had stacked the embedded device on top of both a router and a power supply. This caused the device to rock gently back and forth when they typed. So I went back to my desk, unseated the cables so they barely made a connection, and, while installing a new firmware build, tapped my desk with my knee to simulate the rocking. Presto! Every time I did this with the same build the tester had been using, the bug appeared.

Next, I collaborated with a developer. He went from “that can’t happen” to “uh oh, I didn’t test whether the system time is back in time *and* the connection to the device is down during installation, to trap the error.” The time offset and the flaky connection were causing two related “unrepeatable” bugs. This sounds like a simple correlation from the user’s perspective, but it wasn’t from a code perspective: the two areas of code were completely unrelated, and the connection between them wasn’t obvious when testing at the code level.
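As a hedged illustration of the guard the developer described, here is a minimal sketch. The function name, signature, and install flow are my own assumptions for illustration; the actual device code isn’t shown here:

```python
from datetime import datetime, timezone

def safe_to_install(system_time: datetime, build_time: datetime,
                    connection_alive: bool) -> tuple[bool, str]:
    """Trap the two conditions that combined into the 'unrepeatable' bug:
    a system clock set back in time, and a dropped device connection."""
    if system_time < build_time:
        return False, "system clock is behind the build timestamp; fix the time first"
    if not connection_alive:
        return False, "device connection is down; refusing to start installation"
    return True, "ok to install"

# Example: a clock set back in time (say, by a bad GMT offset) is caught
# before installation begins, instead of panicking mid-install.
ok, reason = safe_to_install(
    system_time=datetime(2005, 1, 1, tzinfo=timezone.utc),
    build_time=datetime(2006, 6, 1, tzinfo=timezone.utc),
    connection_alive=True,
)
print(ok, reason)
```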

The developer thought I was insane when he saw me rocking my desk with my knee while typing to repeat the bug. But when I repeated the bugs every time, and explained my rationale, he chuckled and said it now made perfect sense. I walked him through my detective work: how I had seen the device rocking out of the corner of my eye when I typed at the other tester’s desk. I went through the classic conjecture/refutation model of testing: I observed the behavior, set up an experiment to emulate the conditions, and tried to refute my proposition. When the evidence supported my proposition, I had something tangible the developer could use to repeat the bug himself. We moved forward and were able to get a fix in place.

Sometimes we look to the code for sources of bugs and forget about the user. When one user out of many finds a problem, and that problem isn’t obvious in the source code, we dismiss it as user error. Sometimes my job as an exploratory tester is to track down the idiosyncrasies of a particular user who has uncovered something the rest of us can’t repeat. Often there is a kind of chaos-theory effect at the user interface, where only a particular user has the right unique recipe to cause a failure. Repeating the failure accurately not only requires having the right version of the source code and the test system deployed in the right way; it also requires knowing what that particular user was doing at that particular time. In this case, I had all three, but emulating an environment I assumed was the same as mine was still tricky. The small differences in test environments, coupled with slightly different usage by the tester, made all the difference between repeating the bug and not being able to repeat it. The details were subtle on their own, but together the nuances amplified each other until the application hit something it couldn’t handle. Simply testing the way we always had, even in the tester’s environment, didn’t help us. Putting all the pieces together yielded the result we needed.

Note: Thanks to this blog post by Pragmatic Dave Thomas, this has become known as the “Knee Testing” story.


Exploratory Testing using Personas

I get asked a lot about Exploratory Testing on agile projects. In my position paper for the Canadian Agile Network Workshop last month, I described Exploratory Testing using personas. I’m reposting it here to share some of my own experience with Exploratory Testing on agile projects.

Usability Testing: Exploratory Testing using Personas

Usability tests are almost impossible to automate. Part of the reason might be that usability is a very subjective thing. Another reason is that automated tests do not run within a business context. Program usability can be difficult to test at all, but working regularly with many end users can help. We usually don’t have the luxury of many users testing full-time on projects; often there is just one customer representative. Once we get information from our test users, how do we continue testing when they aren’t around? One possible method is Exploratory Testing with personas.

I’ve noticed a pattern on some agile teams. At first, the customer and those testing have some usability concerns. After a while, the usability issues seem to go away. Is this because usability has improved, or because the team has become too close to the project to test objectively? On one project, the sponsor rotated the customer representative due to scheduling issues. We were concerned at first, but a benefit emerged: whenever a new customer was brought into the team, they found usability issues when they started testing. Often these were the same concerns the testers and the customer had raised earlier on, contentious issues the team hadn’t been able to resolve.

On another project, one using XP and Scrum, a usability consultant was brought in. They did some prototyping and brought in a group of end users to try out their ideas. Any areas the users struggled with were addressed in the prototypes. The users were also asked a variety of questions about how they used the software and about their level of computer skill; we used their answers to create user profiles, or personas. As the developers added more functionality in each iteration, testers simulated the absent end users by Exploratory Testing with personas, to test the application for usability more effectively. The team wanted to automate these tests, but could not.

Exploratory Testing was much more effective at providing rapid feedback because it relies on skilled, brain-engaged testing within a context. The personas provided knowledge of the business context, and of the way end users interacted with the program, so we could keep testing in their absence. The customer representative working on the team also took part in these tests.
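As an illustration of how a persona can drive a session, here is a minimal sketch that derives an exploratory session charter from a persona record. The personas and the charter wording are hypothetical, not the format the team actually used:

```python
# Personas recorded from user interviews; these values are hypothetical.
personas = [
    {"name": "Mary", "skill": "basic computer skills",
     "goal": "enter a week of timesheets quickly"},
    {"name": "Raj", "skill": "a power user",
     "goal": "bulk-import records and fix the rejects"},
]

def charter(persona: dict) -> str:
    """Derive a simple exploratory session charter from a persona."""
    return (f"Explore the app as {persona['name']} ({persona['skill']}): "
            f"attempt to {persona['goal']}, noting anything confusing, "
            f"slow, or error-prone along the way.")

for p in personas:
    print(charter(p))
```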

Tension around usability issues seemed to be reduced as well. These issues were no longer mere opinions; the team now had something concrete to back up usability concerns. Instead of offering opinions that differed from the developers’, testers could say: “when testing with the persona ‘Mary’, we found this issue.” This proved effective at reducing usability debates. The team compromised: most issues were addressed, and others were not. Three contentious issues were still outstanding when the project had completed the UI changes. We scheduled time to revisit the end users, and had some surprising results.

Each end user struggled with the three contentious usability issues the testers had discovered, which justified the approach, but there were three more problem areas we had completely missed. We realized that the users were using the software in ways we hadn’t intended. There had been a flaw in our data gathering: our first sample of users had tested in our office, not in their own. This time, we had them work with the software at their own desks, within their business context. Lesson learned: gather customer data while they are using the software in their own work environment.

On this project, Exploratory Testing with personas proved to be an effective way to compensate for limited full-time end-user testing. It also provided rapid feedback in an area automated tests couldn’t address. It didn’t replace customer input, but it worked well as a complementary technique alongside automation and customer acceptance testing. It helped retain the users’ voice in the usability of the product throughout development, rather than sporadically, and it helped combat groupthink.