
Post-Agilism: Process Skepticism

I’m a fan of Agile practices, and I draw on values and ideas from Scrum and XP quite often. However, I have become a process skeptic. I’m tired of what a colleague complained to me about several years ago: “Agile hype”. I’m tired of hearing “Agile this” and “Agile that”, especially when some of what is now branded as “Agile” was branded as “Total Quality Management”, or <insert buzzword phrase here>, in the past. It seems that some are flogging the same old solutions and merely tacking “Agile” onto them to help with marketing. It looks like Robert Martin was right when he predicted this would happen.

More and more I am meeting people who, like me, are quiet Agile skeptics. I’m not talking about people who have never been on Agile projects and take shots from the outside. I’m talking about smart, capable, experienced Agilists. Some were early adopters who taught me Agile basics. While they were initially won over by Agile as a concept, experience has shown them that the “hype” doesn’t always deliver. Instead of slavishly sticking to Agile process doctrines, they use what works in the context they are currently working in. Many cite the Agile Manifesto and the values of XP as their guide for the work that they do, but they find they don’t identify with the hype surrounding Agile methods.

That said, they don’t want to throw out the practices that work well for them. They aren’t interested in turning back the clock, or implementing heavyweight processes in reaction to the hype. While they like Agile practices, some early adopters I’ve talked to don’t really consider themselves “Agile” anymore. They sometimes muse aloud, wondering what the future might hold. “We’ve done that, enjoyed it, had challenges, learned from it, and now what’s next?” Maybe that’s the curse of the early adopter.

They may use XP practices and manage their projects with Scrum, but they aren’t preachy about it. They just stick with what works. If an Agile practice isn’t working on a project, they aren’t afraid to throw it out and try something else. They are pragmatic. They are also zealous about satisfying and impressing the customer, not zealous about selling “Agile”. In fact, “Agile” zealotry puts them off.

They also have stories of Agile failures, and usually can describe a watershed moment where they set aside their Agile process zeal, and just worked on whatever it took to get a project complete in order to have happy customers. To them, this is what agility is really about. Being “agile” in the dictionary sense, instead of being “Agile” in the marketing sense.

Personally, I like Agile methods, but I have also seen plenty of failures. I’ve witnessed that merely following a process, no matter how good it is, does not guarantee success. I’ve also learned that no matter what process you follow, if you don’t have skilled people on the team, you are going to find it hard to be successful. A friend of mine told me about a former manager who was in a state of perpetual amazement. The manager felt that the process they had adopted was the key to their success, and enforced adherence to it. However, when anything went wrong (which happened frequently), he didn’t know what to do. Their project was in a constant state of disarray. I’ve seen this same behavior on Agile projects as well. (In fact, due to the blind religiosity of some Agile proponents, some Agile projects are more prone to this than we might realize.) Too many times we look for the process to somehow provide us the perfect answer instead of just using our heads and doing what needs to be done. “Oh great oracle, the white book! Please tell us what to do!” may be the reaction to uncertainty, instead of using the values and principles in that book as a general guideline to help solve problems and improve processes.

I have also seen people on Agile teams get marginalized because they aren’t Agile enough. While I appreciate keeping a process pure (such as really doing XP rather than just calling whatever you are doing “Agile” because it’s cool right now), sometimes you have to break the rules and get a solution out the door. I have seen people be pushed out of projects because they “didn’t get Agile”. I have seen good solutions turned away because they weren’t in the white book (Extreme Programming Explained). I’ve also seen reckless, unthinking behavior justified because it was “Agile”. In one case, I saw a manager pull out the white book, and use a paragraph from it in an attempt to justify not fixing bugs. I was reminded of a religious nut pulling quotes out of context from a sacred book to justify some weird doctrine that was contrary to what the original author intended.

Here’s a spin on something I wrote earlier this spring:

In an industry that seems to look for silver-bullet solutions, it’s important for us to be skeptics. We should be skeptical of what we’re developing, but also of the methodologies, processes, and tools on which we rely. We should continuously strive for improvement.

I call this brand of process skepticism exhibited by experienced Agilists “Post-Agilism”. I admit that I have been an Agile process zealot in the past, and I don’t want to fall into that trap again. I’ve learned that skill and effective communication are much more powerful, so much so that they transcend methodologies. There are also pre-Agile ideas that are just fine to pick from. Why throw out something that works just because it isn’t part of the process that’s currently the most popular?

Test your process, and strive for improvement. Be skeptical, and don’t be afraid to follow good Post-Agilist thinking. Some good Post-Agilist thinkers are former process zealots who know and enjoy Agile development, but don’t throw out ideas just because they aren’t generally accepted by Agilists. Others are just plain pragmatic sorts who saw the Agile hype early on, and decided to do things on their own terms. They took what they found effective from Agile methods, and didn’t worry about selling it to anyone.

Post-Agilist thinkers just don’t see today’s “Agile” as the final word on software development. They want to see the craft move forward and generate new ideas. They feel that anything that stifles new ideas (whether “Agile-approved” or not) should be viewed with suspicion. They may benefit from tools provided by Agile methods, but are more careful now when it comes to evaluating the results of their process. In the past, many of us measured our degree of process adoption as success, sometimes forgetting to look at the product we were developing.

What does the post-Agile future hold? Will individuals adapt and change Agile method implementations as unique needs arise? Will “Agile” go too far and become a dirty word like Business Process Re-engineering? Or will it indeed be the way software is widely developed? Or will we see a future of people picking what process works for them, while constantly evaluating and improving upon it, without worrying about marketing a particular development methodology? I certainly hope it will be the latter. What do you think?

Edit: April 28, 2007. I’ve added a Post-Agilism Frequently Asked Questions post to help explain more. Also see Jason Gorman’s Post-Agilism – Beyond the Shock of the New.

Placating with Paper

Accountability. This is often a scary word for some people. Sometimes when we are fearful of the results of a decision, or of our work on a project, we will look at what we can do to offload responsibility. “At least I won’t be blamed if anything goes wrong,” might be the thinking. It’s hard to constantly offload onto other people though. After a while, we’ll run out of team members to blame, and we probably won’t be too popular. One easy way around this is to offload accountability onto an inanimate object: paper. Paper often comes in the form of specifications, design documents, test plans, test cases, or as I see more lately, proof of adherence to a popular development process. Paper might be substituted when we need to do something difficult, or provide reasons why a product isn’t working, or when we need to say something others may not want to hear. Paper with fancy words and jargon is often accepted as a short-term solution to project problems. Trouble is, how does that help in the long term, when you are being paid to deliver software?

In one case, paper provided hope to end an unpleasant release. The development team had just come out of a meeting with our sales team and a couple of important clients. They were quite clear that we needed to deliver a particular feature in our next release. Unfortunately, it wasn’t working properly. Whenever we tested this new feature, the application crashed in a spectacular manner. There had been two attempts at redesigns, and everything else in the release seemed solid. The programmers were tired and demoralized, management didn’t want to hear about any problems anymore, and the testers were tired of testing and feeling the weight of the release resting on their shoulders. They felt singled out because whenever they logged a bug report, the rest of the team would accuse them of holding up the release. “Jonathan, why are we testing this feature in this way?” the test lead asked, and pointed to the specification document.

“We’re testing this because this is the most important feature in the release. Without this feature, our sales team is going to struggle, and our customers are going to be upset. Remember what we talked about in the meeting with both the customer and the sales team?” I asked. “Sure I do, but look at the spec. It doesn’t spell out what the customer asked for. If we test to the spec, we can ship! The tests will all pass if we only test according to the spec.”

The lead tester had found a loophole. The thinking was that if we tested only to the spec, and followed the document to the letter, we could pass the tests that were currently failing, and management would happily release the software. Everyone would be relieved, and furthermore, the lead pointed out, who could blame the testers if anything went wrong? The design spec would be at fault, not the testers or developers. “But it flat-out doesn’t work!” I retorted. “Sure it does, according to the design spec,” they replied. “But the customer can’t use a design spec to do their jobs, can they? If our software doesn’t do what they expect it to do, are they supposed to print out the spec, clutch it to their chests and draw comfort and dollars from it? The spec is meaningless to the customer when they are trying to use our software. They need this feature to work in their context, to help them get their work done. They are relying on us to deliver software, not a piece of paper with a hollow excuse.”

I was a bit shocked at what this test lead proposed at first, but dealing with pressure in this way is not uncommon. A paper document can carry a lot of weight in an organization. People can draw a false sense of security from some official-looking prose wrapped in corporate logos. I encouraged the tester not to fall for quick-fix technicalities, especially if she wanted to remain a tester for very long. Fight the good fight, and while it may be hard in the short-term, the payoff in the long run is great. Happy customers and the respect of your peers are just two payoffs. Being able to look at yourself in the mirror every day and know you have your integrity intact is also important, and no project deadline is worth sacrificing that for. A tester with a reputation for having no integrity will not be employable in the long-term.

On another project, paper was used in a different way. I was on one small team, and we met with other teams in the company periodically to share ideas. Another team came to a meeting and crowed about the “traceability” of documentation they were creating on a large project. We were on a small project at the time, and we were minimalists with documentation, preferring to deliver working software that could be deployed into production every two weeks. “That’s fine, but you don’t have traceability like we do!” they said. They bragged that for every method of code that would be written, there would be three documents generated as “project artifacts”.

They met with us again in a few months. We had delivered two projects that were helping the company make more money, and they were still talking about documents. “Can you show us the code?” we asked. “We’re still in the requirements-gathering phase,” they snapped. They were too busy to code; instead, they were writing documents: documents about the design, documents about a myriad of tools they could choose, and more documents documenting other documents. The company had spent several million dollars that year, and there wasn’t one line of code to show for it. They had so-called “traceability” though, with project documents, and committees reviewing documents, and committees reviewing other committees. An entire infrastructure to support this process was developed. There were literally teams of people working full-time churning out documents, but finding a programmer in all of this was not an easy task. Expensive tools to aid in paper generation were purchased, more people were hired to keep up, and project budgets were sacrificed on the altar of the great “project artifact” gods. (Every stage properly documented and “signed off” on, of course.)

Through all of this, management was placated with a blizzard of paper. “Do you have a demo yet?” they would ask, only to be inundated with mountains of UML diagrams, meeting minutes, decision matrices and vendor white papers. This satisfied management in the short-term. Look at all this paper – lots of work must be getting done.

In the end, paper didn’t deliver. There came a time when documents just didn’t cut it anymore. Management wanted a return on investment. They didn’t care about the development process du jour, detailed design documents, and requirements changes and budgets signed off in triplicate. They wanted to see the project working in production, helping them realize their business goal of a return for investors. There wasn’t much of a product there though – there were lots of documents, but precious little in the way of tested, delivered software.

The project ran several years, and millions of dollars, over budget. It was also a public embarrassment for the company. People identified with that project were blamed, and association with that project was an impediment to finding work with other companies who had heard about it through the grapevine.

What went wrong? The vendors, consultants and employees knew they could placate management with paper, and management didn’t demand a working solution early enough. This project could have done well, had they released working software incrementally, rather than churn out ever-growing mountains of paper. Instead of taking personal responsibility, the people working on the project placated customer demands with paper.

What were they thinking? Maybe they didn’t really know what they were doing. Maybe they hoped the programming elves would rescue them if they created enough documents. Maybe they got swept away in groupthink, and followed a process instead of working to deliver software. Maybe they collectively thought that a colleague would take responsibility. Maybe they just didn’t care. I don’t know.

The customer unwittingly paid people to generate a lot of paper, instead of getting them to write and demonstrate working software. They accepted paper designs instead of code that could be demonstrated. They accepted paper over skill, and they accepted paper instead of accountability and working software.

Accountability. Demand it from yourself, and from others. Placating with paper may get you through the short-term, but sooner or later you have to demonstrate your skill. Skill can be demonstrated individually as a programmer, as a tester, a technical writer, a project manager, and demonstrated collectively as a team. It will ultimately be demonstrated in your product. Does what you produce match your claims? If it doesn’t, a document and a nice explanation may get you out of a jam in the short-term, but in the long-term, you have to deliver what the customer needs. The customer won’t blame the document – they will blame you and your company, and move on to a competitor’s product. Where does that leave you?

Software Testing 2.0?

For so many years the Quality Assurance ideal has dominated software testing. “QA”-flavored software testing often feels like equal parts of Factory School and Quality School thrown together. When I was starting out as a tester, I quickly learned through hard experience that a lot of the popular software testing thought was built around folklore. I wanted results, and didn’t like process police, so I often found myself at odds with the Quality Assurance community. I read writings by Cem Kaner, James Bach and Brian Marick, and worked on my testing skill.

When the Agile movement kicked into gear, I loved the Agile Manifesto and the values agile leaders were espousing, but the Agile Testing community rehashed a lot of the same old Factory School folklore. Instead of outsourcing testing to lesser-skilled, cheaper human testers, testing was often outsourced to automated testing tools. While there were some really cool ideas, “Agile Testing” ideals still frequently felt like testing didn’t require skills, other than programming. I was frequently surprised at how “Agile Testing” thought was attracted to a lot of the old Factory School thoughts, like they were oppositely charged magnets. As a big proponent of skilled testing, I found I was often at odds with “Agile Testers”, even though I agreed with the values and ideals behind the movement. Testing in that community did not always feel “agile” to me.

Then Test-Driven Development really got my attention. I worked with some talented developers who taught me a lot, and wanted to work with me. They told me they wanted me to work with them because I thought like they did. I was still a systems thinker, but I came at the project from a different angle. Instead of confirming that their code worked, I creatively thought of ideas to see if it might fail. They loved those ideas because it helped them build more robust solutions, and in turn, taught me a lot about testing through TDD. I learned that TDD doesn’t have a lot to do with testing in the way I’m familiar with, but is still a testing school of thought. It is focused on the code-context, and I tend to do more testing from user contexts. Since I’m not a developer, and TDD is predominantly a design tool, I wasn’t a good candidate for membership in the TDD community.

The Context-Driven Testing School is a small, influential community. The founders all had an enormous influence on my career as a tester. One thing this community has done is build up and teach skilled testing, and has influenced other communities. Everywhere I go, I meet smart, talented, thoughtful testers. In fact, I am meeting so many, that I believe a new community is springing up in testing. A community born of experience, pragmatism and skill. Testers with different skillsets and ideas are converging and sharing ideas. I find this exciting.

I’m meeting testers in all sorts of roles, and often the thoughtful, skilled ones aren’t necessarily “QA” folks. For example, some of my thought-leader testing friends are developers who are influenced by TDD. Some skilled testers I meet are test automation experts, some are technical writers, some are skilled with applying exploratory testing concepts. All are smart, talented and have cool ideas. I am meeting more and more testers from around the world with different backgrounds and expertise who share a common bond of skill. I’m beginning to believe that a new wave of software testing is coming, and a new skills-focused software testing community is being formed through like-minded practitioners all over the world. This new community growing in the software development world is driven by skilled testers.

This is happening because skilled testers are sharing ideas. They are sharing their results by writing, speaking, and practicing skilled testing. Results mean something. Results build confidence in testers, and in the people who work with them. Skill prevails over process worship, methodology worship and tool worship. I’ve said before that skilled software testing seems to transcend the various methodologies and processes and add value on any software development project. I’m finding that other testers are finding this out as well. This new wave of skilled tester could be a powerful force.

Are you frustrated with the status quo of software testing? Are you tired of hearing the same hollow maxims like “automate all tests”, “process improvement” and “best practices”? Do you feel like something is missing in the Quality Assurance and Agile communities when it comes to testing? Do you feel like you don’t fit in a community because of your views on testing? You aren’t alone. There are many others who are working on doing a better job than we have been doing for the past few years. Let’s work together to push skilled software testing as far as it will go. Together, we are creating our own community of practice. The “second version” of software testing has begun to arrive.

Reckless Test Automation

The Agile movement has brought some positive practices to software development processes. I am a huge fan of frequent communication, of delivering working software iteratively, and strong customer involvement. Of course, before “Agile” became a movement, a self-congratulating community, and a fashionable term, there were companies following “small-a agile” practices. Years ago in the ’90s I worked for a startup with a CEO who was obsessed with iterative development, frequent communication and customer involvement. The Open Source movement was an influence on us at that time, and today we have the Agile movement helping create a shared language and a community of practice. We certainly could have used principles from Scrum and XP back then, but we were effective with what we had.

This software business involves trade-offs though, and for all the good we can get from Agile methods, vocal members of the Agile community have done testing a great disservice by emphasizing some old testing folklore. One of these concepts is “automate all tests”. (Some self-proclaimed agilists have the misguided gall to claim that manual testing is harmful to a project. Since when did humans stop using software?) Slavishly trying to reach this ideal often results in reckless test automation. Mandates of “all”, “everything” and other universal qualifiers are ideals, and without careful, skillful implementation, they can promote thoughtless behavior which hinders goals and needlessly costs a lot of money.

To be fair, the Agile movement says nothing officially about test automation to my knowledge, and I am a supporter of the points of the Agile Manifesto. However, the “automate all tests” idea has been repeated so often and so loudly in the Agile community, I am starting to hear it being equated with so-called “Agile-Testing” as I work in industry. In fact, I am now starting to do work to help companies undo problems associated with over-automation. They find they are unhappy with results over time while trying to follow what they interpret as an “Agile Testing” ideal of “100% test automation”. Instead of an automation utopia, they find themselves stuck in a maintenance quagmire of automation code and tools, and the product quality suffers.

The problems, like the positives of the Agile movement, aren’t really new. Before Agile was all the rage, I helped a company that had spent six years developing automated tests. They had bought the lie that vendors and consultants spouted: “automate all tests, and all your quality problems will be solved”. In the end, they had developed three test cases, averaging 18,000 lines of code each; no one knew what their intended purpose was or what they were supposed to be testing, only that it was very bad if they failed. Trouble was, they failed a lot, and it took testers anywhere from three to five days to hand-trace the code to track down failures. Excessive use of unrecorded random data sometimes made this impossible. (Note: random data generation can be an incredibly useful tool for testing, but like anything else, it should be applied with thoughtfulness.) I talked with decision makers and executives, and the whole point of buying the tool and implementing it was to shorten the feedback loop. In the end, the tool greatly lengthened the testing feedback loop, and worse, the testers spent all of their time babysitting and maintaining a brittle, unreliable tool, and not doing any real, valuable testing.
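
As an aside on the “unrecorded random data” problem: the fix is usually as simple as generating, recording, and reusing a seed. Here is a minimal sketch in plain Java (the class name, the invariant, and the calculateOrderTotal stand-in are hypothetical, not from that project; a real suite would hang this off a test framework rather than a main method):

    import java.util.Random;

    public class RandomDataCheck {

        public static void main(String[] args) {
            // Record the seed so any failing run can be replayed exactly.
            long seed = (args.length > 0) ? Long.parseLong(args[0]) : System.currentTimeMillis();
            System.out.println("Random seed for this run: " + seed);
            Random random = new Random(seed);

            for (int i = 0; i < 1000; i++) {
                int quantity = random.nextInt(100);              // generated test data
                double unitPrice = random.nextInt(10000) / 100.0;

                double total = calculateOrderTotal(quantity, unitPrice);

                // The invariant we expect to hold for any generated input.
                if (total < 0.0) {
                    System.out.println("FAIL at iteration " + i + " with seed " + seed
                            + ": quantity=" + quantity + ", unitPrice=" + unitPrice);
                    return;
                }
            }
            System.out.println("All generated cases passed.");
        }

        // Hypothetical stand-in for the production code under test.
        static double calculateOrderTotal(int quantity, double unitPrice) {
            return quantity * unitPrice;
        }
    }

Rerunning with the printed seed as the first argument reproduces exactly the same data, so a failure can be replayed instead of hand-traced.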

How did I help them address the slow testing feedback loop problem? Number one, I de-emphasized relying completely on test automation, and encouraged more manual, systematic exploratory testing that was risk-based, and speedy. This helped tighten up the feedback loop, and now that we had intelligence behind the tests, bug report numbers went through the roof. Next, we reduced the automation stack, and implemented new tests that were designed for quick feedback and lower maintenance. We used the tool to complement what the skilled human testers were doing. We were very strict about just what we automated. We asked a question: “What do we potentially gain by automating this test? And, more importantly, what do we lose?” The results? Feedback on builds was reduced from days to hours, and we had same-day reporting. We also had much better bug reports, and frankly, much better overall testing.

Fast-forward to the present time. I am still seeing thoughtless test automation, but this time under the “Agile Testing” banner. When I see reckless test automation on Agile teams, the behavior is the same, only the tools and ideals have changed. My suggestions to work towards solutions are the same: de-emphasize thoughtless test automation in favor of intelligent manual testing, and be smart about what we try to automate. Can a computer do this task better than a human? Can a human do it with results we are happier with? How can we harness the power of test automation to complement intelligent humans doing testing? Can we get test automation to help us meet overall goals instead of thoughtlessly trying to fulfill something a pundit says in a book or presentation or on a mailing list? Are our test automation efforts helping us save time, and helping us provide the team the feedback they need, or are they hindering us? We need to constantly measure the effectiveness of our automated tests against team and business goals, not “percentage of tests automated”.

In one “Agile Testing” case, a testing team spent almost all of their time working on an automation effort. An Agile Testing consultant had told them that if they automated all their tests, it would free up their manual testers to do more important testing work. They had automated user acceptance tests, and were trying to automate all the manual regression tests to speed up releases. One release went out after the automated tests all passed, but it had a show-stopping, high-profile bug that was an embarrassment to the company. The automated tests passed, but they couldn’t notice something suspicious and explore the behavior of the application the way a person could. In this case, the bug was so obvious, a half-way decent manual tester would have spotted it almost immediately. To get a computer to spot the problem through investigation would have required Artificial Intelligence, or a very complex fuzzy-logic algorithm in the test automation suite, to replicate one quick, simple, inexpensive, adaptive, yet powerful human test. The automation wasn’t freeing up time for testers; it had become a massive maintenance burden over time, so there was little human testing going on, other than superficial reviews by the customer after sprint demos. Automation was king, so human testing was de-emphasized and even looked on as inferior.

In another case, developers were so confident in their TDD-derived automated unit tests, they had literally gone for months without any functional testing, other than occasional acceptance tests by a customer representative. When I started working with them, they first defied me to find problems (in a joking way), and then were completely flabbergasted when my manual exploratory testing did find problems. They would point wide-eyed to the green bar in their IDE signifying that all their unit tests had passed. They were shocked that simple manual test scenarios could bring the application to its knees, and it took quite a while to get them to do some manual functional testing as well as their automated testing. It took them a while to leave their automation dogma aside, to become more pragmatic, and then figure out how to also incorporate important issues like state into their test efforts. When they did, I saw a marked improvement in the code they delivered once stories were completed.

In another “Agile Testing” case, the testing team had put enormous effort into automating regression tests and user acceptance tests. Before they were through, they had more lines of code in the test automation stack than in the product it was supposed to be testing. Guess what happened? The automation stack became buggy, unwieldy, unreliable, and displayed the same problems that any software development project suffers from. In this case, the automation was done by the least skilled programmers, with a much smaller staff than the development team. To counter this, we did more well-thought-out and carefully planned manual exploratory testing, and threw out buggy automation code that was regression-test focussed. A lot of those tests should never have been automated in that context, because a human is much faster and far superior at many kinds of tests. Furthermore, we found that the entire test environment had been optimized for the automated tests. The inherent system variability that the computers couldn’t handle (but humans could!), not to mention quick visual tests (computers can’t do this well), had been factored out as much as possible. We did not have a system in place that was anything close to what any of our customers used, but the automation worked (somewhat). Scary.

After some rework on the testing process, we found it cheaper, faster and more effective to have humans do those tests, and we focussed more on leveraging the tool to help achieve the goals of the team. Instead of trying to automate the manual regression tests that were originally written for human testers, we relied on test automation to provide simulation. Running simulators and manual testing at the same time was a powerful investigative tool. Combining simulation with observant manual testing revealed false positives in some of the automated tests, which meant problems had unwittingly been released to production in the past. We even extended our automation to include high volume test automation, and we were able to greatly increase our test effectiveness by really taking advantage of the power of tools. Instead of trying to replicate human activities, we automated things that computers are superior at.
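
To give a flavour of what “high volume test automation” can look like (a sketch only; the queue, its operations, and the numbers here are hypothetical, not from that project), the idea is to drive the system with a huge stream of generated operations while a cheap parallel oracle checks an invariant no human would have the patience to verify:

    import java.util.ArrayDeque;
    import java.util.Random;

    public class HighVolumeQueueCheck {

        public static void main(String[] args) {
            long seed = System.currentTimeMillis();
            Random random = new Random(seed);
            System.out.println("Simulation seed: " + seed);

            // Stand-in so the sketch compiles; in real use this would be the product under test.
            ArrayDeque<String> queueUnderTest = new ArrayDeque<>();
            int expectedDepth = 0; // trivial oracle kept in parallel

            for (int i = 0; i < 1_000_000; i++) {
                if (random.nextBoolean()) {
                    queueUnderTest.addLast("msg-" + i);
                    expectedDepth++;
                } else if (expectedDepth > 0) {
                    queueUnderTest.removeFirst();
                    expectedDepth--;
                }

                // Invariant: the system's depth always matches the oracle's count.
                if (queueUnderTest.size() != expectedDepth) {
                    System.out.println("Mismatch at step " + i + " (seed " + seed + "): expected "
                            + expectedDepth + ", got " + queueUnderTest.size());
                    return;
                }
            }
            System.out.println("One million operations matched the oracle.");
        }
    }

The point is not the queue; it is that millions of generated operations checked against a simple oracle can find the kinds of state and timing problems that scripted regression tests, automated or manual, rarely reach.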

Don’t get me wrong – I’m a practitioner and supporter of test automation, but I am frustrated by reckless test automation. As Donald Norman reminds us, we can automate some human tasks with technology, but we lose something when we do. In the case of test automation, we lose thoughtful, flexible, adaptable, “agile” testing. In some tasks, the computer is a clear winner over manual testing. (Remember that the original “computers” were humans doing math – specifically calculations. Technology was used to automate computation because it is a task we weren’t doing so well at. We created a machine to overcome our mistakes, but that machine is still not intelligent.)

Here’s an example. On one application I worked on, it took close to two weeks to do manual credit card validation by testers. This work was error prone (we aren’t that great at number crunching, and we tire of repetitive tasks). We wrote a simple automated test suite to do the validation, and it took about a half hour to run. We then complemented the automated test suite with thoughtful manual testing. After an hour and a half of both automated testing (pure number crunching) and manual testing (usually scenario testing), we had a lot of confidence in what we were doing. We found this combination much more powerful than pure manual testing or pure automated testing. And it was faster than the old way as well.
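
For a sense of what that kind of suite automates well (a rough sketch, not the actual code; the class name and the well-known test card numbers are illustrative), the core of card number validation is pure number crunching, such as the standard Luhn checksum:

    public class CardNumberCheck {

        // Standard Luhn checksum: double every second digit from the right,
        // subtract 9 from any doubled value over 9, and check the sum is divisible by 10.
        static boolean passesLuhnCheck(String cardNumber) {
            int sum = 0;
            boolean doubleIt = false;
            for (int i = cardNumber.length() - 1; i >= 0; i--) {
                int digit = cardNumber.charAt(i) - '0';
                if (doubleIt) {
                    digit *= 2;
                    if (digit > 9) {
                        digit -= 9;
                    }
                }
                sum += digit;
                doubleIt = !doubleIt;
            }
            return sum % 10 == 0;
        }

        public static void main(String[] args) {
            // Well-known test numbers: the first two should pass, the corrupted one should fail.
            String[] numbers = { "4111111111111111", "5555555555554444", "4111111111111112" };
            for (String number : numbers) {
                System.out.println(number + " passes Luhn check: " + passesLuhnCheck(number));
            }
        }
    }

A computer grinds through thousands of checks like this in seconds without tiring, which is exactly the kind of task worth handing to a tool, while the scenario testing around it stayed manual.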

When automating, look at what you gain by automating a test, and what you lose. Remember, until computers become intelligent, we can’t automate testing, only tasks related to testing. Also, as we move further away from the code context, it usually becomes more difficult to automate tests, and the trade-offs have greater implications. It’s important to design automated tests with team goals in mind, and to be aware of the potential for enormous maintenance costs in the long term.

Please don’t become reckless trying to fulfill an ideal of “100% test automation”. Instead, find out what the goals of the company and the team are, and see how all the tools at your disposal, including test automation can be harnessed to help meet those goals. “Test automation” is not a solution, but one of many tools we can use to help meet team goals. In the end, reckless test automation leads to feckless testing.

Update: I talk more about alternative automation options in my Man and Machine article, and in chapter 19 in the book: Experiences of Test Automation.

Procedural Test Scripts

Cem Kaner has sometimes called detailed manual procedural test scripts “an industry worst practice”. I tend to agree. At one time I thought they were a good idea, but they lead to all kinds of problems. One is a lack of diversity in testing; another is that they become a maintenance nightmare and rob time that could be spent actually testing the software with new ideas. Another problem is that we usually write them from requirements, which narrows our focus too much, and we write them early in a project when we know little about the product. But I’m not going to get into that in this post. Instead, I’m going to describe a recent conversation that outlines why we as testers should question why we follow this practice.

I was recently discussing the creation of procedural test scripts prior to testing with some developers. They were skeptical of my view that pre-scripting detailed manual test cases is a scourge on the testing world. “How will testers know what to test then?” I replied: “Good testers will use their judgment and skill. When they don’t have enough information, they will seek it out. They will utilize different testing heuristics to help meet the particular mission of testing that is required at the time. They will use information that is available to them, or they will test in the absence of information, but they will rely on skill to get the job done.” This didn’t resonate, so I came up with an equivalent for developers. Here is the “procedural development script” that follows what is so often done in testing. (To make it more authentic, it should be written as far in advance of the actual development work as possible.)

Development Procedural Script:

Purpose: write widget foo that does this business functionality

Steps:

  1. Open your Eclipse IDE. Start/Programs/Eclipse.
  2. Select your workspace for this project.
  3. In the package explorer, create a new java source code file.
  4. Begin typing in the IDE
  5. Use the such-and-such pattern to implement this functionality, type the following:
         public void <method name>(){
         etc.
         }

The development manager chuckled and said he’d fire a developer who needed this much direction. He needs people with skill that he can trust to be able to program in Java, to use their judgment and implement what needs to be done under his guidance. He would never expect developers to need things spelled out like that.

I countered that the same is true of testing. Why do we expect our testers to work this way? If we scoff at the idea of developers needing that kind of direction, why do we use it in testing? Why do we cling to bad practices that promote incompetence? Testers need to have the skill to figure out what to test without having everything handed to them. If we can’t trust our testers to do skilled work without having to spell everything out first, we need to get better testers.

Developers and business folk: demand skill from your testers.

Testers: demand skill from yourselves.

Are you a tester who wants to improve your skills? Cem Kaner’s free Black Box Software Testing course is worth checking out.

User Profiles and Exploratory Testing

Knowing the User and Their Unique Environment

As I was working on the Repeating the Unrepeatable Bug article for Better Software magazine, I noticed consistent patterns in the cases where I have found a repeatable case for a so-called “unrepeatable” bug. One pattern that surprised me was how often I do user profiling. Often, one tester or end-user sees a so-called unrepeatable bug more frequently than others. A lot of my investigative work in these cases involves trying to get inside an end-user’s head (often a tester’s) to emulate their actions. I have learned to spend time with the person to get a better perspective on not only their actions and environment, but their ideas and motivations. The resulting user profiles fuel ideas for exploratory testing sessions to track down difficult bugs.

Recently I was assigned the task of tracking down a so-called unrepeatable bug. Several people with different skill levels had worked on it with no success. With a little time and work, I was able to get a repeatable case. Afterwards, when I did a personal retrospective on the assignment, I realized that I was creating a profile of the tester who had come across the “unrepeatable” cases that the rest of the dev team did not see. Until that point, I hadn’t realized to what extent I was modeling the tester/user when I was working on repeating “unrepeatable” bugs. My exploratory testing for this task went something like this.

I developed a model of the tester’s behaviour through observation and some pair testing sessions. Then, I started working on the problem and could see the failure very sporadically. One thing I noticed was that this tester did installations differently than others. I also noticed what builds they were using, and that there was more of a time delay between their actions than with other testers (they often left tasks mid-stream to go to meetings or work on other tasks). Knowing this, I used the same builds and the same installation steps as the tester; I figured out that part of the problem had to do with a Greenwich Mean Time (GMT) offset that was set incorrectly in the embedded device we were testing. Upon installation, the system time was set behind our Mountain Time offset, so the system time was back in time. This caused the system to reboot in order to reset the time (known behavior, working properly). But, as the resulting error message told me, there was also a kernel panic in the device. With this knowledge, I could repeat the bug about two times out of five, but it still wasn’t consistent.

I spent time in that tester’s work environment to see if there was something else I was missing. I discovered that their test device had connections that weren’t fully seated, and that they had stacked the embedded device on both a router and a power supply. This caused the device to rock gently back and forth when you typed. So, I went back to my desk, unseated the cables so they barely made a connection, and—while installing a new firmware build—tapped my desk with my knee to simulate the rocking. Presto! Every time I did this with a same build that this tester had been using, the bug appeared.

Next, I collaborated with a developer. He went from, “that can’t happen,” to “uh oh, I didn’t test if the system time is back in time, *and* that the connection to the device is down during installation to trap the error.” The time offset and the flakey connection were causing two related “unrepeatable” bugs. This sounds like a simple correlation from the user’s perspective, but it wasn’t from a code perspective. These areas of code were completely unrelated and weren’t obvious when testing at the code level.

The developer thought I was insane when he saw me rocking my desk with my knee while typing to repeat the bug. But when I repeated the bugs every time, and explained my rationale, he chuckled and said it now made perfect sense. I walked him through my detective work, how I saw the device rocking out of the corner of my eye when I typed at the other tester’s desk. I went through the classic conjecture/refutation model of testing where I observed the behavior, set up an experiment to emulate the conditions, and tried to refute my proposition. When the evidence supported my proposition, I was able to get something tangible for the developer to repeat the bug himself. We moved forward, and were able to get a fix in place.

Sometimes we look to the code for sources of bugs and forget about the user. When one user out of many finds a problem, and that problem isn’t obvious in the source code, we dismiss it as user error. Sometimes my job as an exploratory tester is to track down the idiosyncrasies of a particular user who has uncovered something the rest of us can’t repeat. Often, there is a kind of chaos-theory effect that happens at the user interface, where only a particular user has the right unique recipe to cause a failure. Repeating the failure accurately not only requires having the right version of the source code and having the test system deployed in the right way, it also requires that the tester knows what that particular user was doing at that particular time. In this case, I had all three, but emulating an environment I assumed was the same as mine was still tricky. The small differences in test environments, when coupled with slightly different usage by the tester, made all the difference between repeating the bug and not being able to repeat it. The details were subtle on their own, but the nuances, when put together, amplified each other until the application had something it couldn’t handle. Simply testing the same way we had been in the tester’s environment didn’t help us. Putting all the pieces together yielded the result we needed.

Note: Thanks to this blog post by Pragmatic Dave Thomas, this has become known as the “Knee Testing” story.