Category Archives: XP

Reckless Test Automation

The Agile movement has brought some positive practices to software development processes. I am a huge fan of frequent communication, of delivering working software iteratively, and of strong customer involvement. Of course, before “Agile” became a movement, a self-congratulating community, and a fashionable term, there were companies following “small-a agile” practices. Years ago in the ’90s I worked for a startup with a CEO who was obsessed with iterative development, frequent communication and customer involvement. The Open Source movement was an influence on us at that time, and today we have the Agile movement helping create a shared language and a community of practice. We certainly could have used principles from Scrum and XP back then, but we were effective with what we had.

This software business involves trade-offs though, and for all the good we can get from Agile methods, vocal members of the Agile community have done testing a great disservice by emphasizing some old testing folklore. One of these concepts is “automate all tests”. (Some self-described agilists have the misguided gall to claim that manual testing is harmful to a project. Since when did humans stop using software?) Slavishly trying to reach this ideal often results in reckless test automation. Mandates of “all”, “everything” and other universal qualifiers are ideals, and without careful, skillful implementation they can promote thoughtless behavior that hinders goals and needlessly costs a lot of money.

To be fair, the Agile movement says nothing officially about test automation to my knowledge, and I am a supporter of the points of the Agile Manifesto. However, the “automate all tests” idea has been repeated so often and so loudly in the Agile community that I am starting to hear it equated with so-called “Agile Testing” as I work in industry. In fact, I am now starting to do work to help companies undo problems associated with over-automation. They find they are unhappy with results over time while trying to follow what they interpret as an “Agile Testing” ideal of “100% test automation”. Instead of an automation utopia, they find themselves stuck in a maintenance quagmire of automation code and tools, and the product quality suffers.

The problems, like the positives of the Agile movement, aren’t really new. Before Agile was all the rage, I helped a company that had spent six years developing automated tests. They had bought the lie that vendors and consultants spouted: “automate all tests, and all your quality problems will be solved”. In the end, they had developed three test cases, averaging 18,000 lines of code each, and no one knew what they were supposed to be testing, only that it was very bad if they failed. Trouble was, they failed a lot, and it took testers anywhere from three to five days to hand-trace the code to track down failures. Excessive use of unrecorded random data sometimes made this impossible. (Note: random data generation can be an incredibly useful tool for testing, but like anything else, should be applied with thoughtfulness.) I talked with decision makers and executives, and the whole point of buying a tool and implementing it was to help shorten the feedback loop. In the end, the tool greatly lengthened the testing feedback loop, and worse, the testers spent all of their time babysitting and maintaining a brittle, unreliable tool instead of doing any real, valuable testing.

How did I help them address the slow testing feedback loop? First, I de-emphasized relying completely on test automation and encouraged more manual, systematic exploratory testing that was risk-based and speedy. This tightened up the feedback loop, and now that we had intelligence behind the tests, bug report numbers went through the roof. Next, we reduced the automation stack and implemented new tests that were designed for quick feedback and lower maintenance. We used the tool to complement what the skilled human testers were doing. We were very strict about just what we automated, and we asked of each test: “What do we potentially gain by automating this? And, more importantly, what do we lose?” The results? Feedback on builds was reduced from days to hours, and we had same-day reporting. We also had much better bug reports, and frankly, much better overall testing.
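To make the note about thoughtful random data concrete, here is a minimal sketch (in Ruby, with hypothetical data fields) of the kind of discipline that would have prevented the hand-tracing misery: generate the data from a seed and record the seed, so any failure can be reproduced exactly.

    # Minimal sketch: reproducible random test data.
    # Record the seed with the test results; re-running with the same
    # TEST_SEED recreates the exact data, so failures can be traced
    # without days of hand-tracing.
    seed = (ENV["TEST_SEED"] || Random.new_seed).to_i
    rng  = Random.new(seed)
    puts "TEST_SEED=#{seed}"  # log this alongside the test report

    # Hypothetical test data driven by the recorded seed.
    account_number = rng.rand(100_000..999_999)
    deposit_amount = (rng.rand * 10_000).round(2)
    puts "Testing with account #{account_number}, deposit #{deposit_amount}"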

Fast-forward to the present time. I am still seeing thoughtless test automation, but this time under the “Agile Testing” banner. When I see reckless test automation on Agile teams, the behavior is the same; only the tools and ideals have changed. My suggestions for working towards solutions are the same: de-emphasize thoughtless test automation in favor of intelligent manual testing, and be smart about what we try to automate. Can a computer do this task better than a human? Can a human do it with results we are happier with? How can we harness the power of test automation to complement intelligent humans doing testing? Can we get test automation to help us meet overall goals instead of thoughtlessly trying to fulfill something a pundit says in a book or presentation or on a mailing list? Are our test automation efforts helping us save time and provide the team the feedback they need, or are they hindering us? We need to constantly measure the effectiveness of our automated tests against team and business goals, not “percentage of tests automated”.

In one “Agile Testing” case, a testing team spent almost all of their time working on an automation effort. An Agile Testing consultant had told them that if they automated all their tests, it would free up their manual testers to do more important testing work. They had automated user acceptance tests, and were trying to automate all the manual regression tests to speed up releases. One release went out after the automated tests all passed, but it had a show-stopping, high-profile bug that was an embarrassment to the company. The automated tests passed, but they could not notice something suspicious and explore the behavior of the application. In this case, the bug was so obvious that a half-way decent manual tester would have spotted it almost immediately. Getting a computer to spot the problem through investigation would have required artificial intelligence, or a very complex fuzzy-logic algorithm in the test automation suite, to replicate one quick, simple, inexpensive, adaptive, yet powerful human test. The automation wasn’t freeing up time for testers; it had become a massive maintenance burden over time, so there was little human testing going on, other than superficial reviews by the customer after sprint demos. Automation was king, so human testing was de-emphasized and even looked on as inferior.

In another case, developers were so confident in their TDD-derived automated unit tests that they had literally gone for months without any functional testing, other than occasional acceptance tests by a customer representative. When I started working with them, they first dared me (in a joking way) to find problems, and then were completely flabbergasted when my manual exploratory testing did find them. They would point wide-eyed to the green bar in their IDE signifying that all their unit tests had passed. They were shocked that simple manual test scenarios could bring the application to its knees, and it took quite a while to get them to do some manual functional testing alongside their automated testing. It took them a while to set their automation dogma aside, become more pragmatic, and then figure out how to incorporate important issues like state into their test efforts. When they did, I saw a marked improvement in the code they delivered once stories were completed.

In another “Agile Testing” case, the testing team had put enormous effort into automating regression tests and user acceptance tests. Before they were through, they had more lines of code in the test automation stack than in the product it was supposed to be testing. Guess what happened? The automation stack became buggy, unwieldy, unreliable, and displayed the same problems that any software development project suffers from. In this case, the automation was done by the least skilled programmers, with a much smaller staff than the development team. To counter this, we did more well-thought-out and carefully planned manual exploratory testing, and threw out buggy automation code that was regression-test focussed. We should never have attempted to automate a lot of those tests in that context, because a human is much faster and far superior at many kinds of tests. Furthermore, we found that the entire test environment had been optimized for the automated tests. The inherent system variability that the computers couldn’t handle (but humans could!), not to mention the quick visual tests that computers can’t do well, had been factored out. We did not have a system in place that was anything close to what any of our customers used, but the automation worked (somewhat). Scary.

After some rework on the testing process, we found it cheaper, faster and more effective to have humans do those tests, and we focussed more on leveraging the tool to help achieve the goals of the team. Instead of trying to automate the manual regression tests that were originally written for human testers, we relied on test automation to provide simulation. Running simulators and manual testing at the same time was a powerful investigative tool. Combining simulation with observant manual testing revealed false positives in some of the automated tests, which had been unwittingly released to production in the past. We even extended our automation to include high volume test automation, and we were able to greatly increase our test effectiveness by really taking advantage of the power of tools. Instead of trying to replicate human activities, we automated things that computers are superior at.
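To give a flavor of what high volume test automation can look like, here is a rough sketch, not our actual tool: a small Ruby driver that pumps thousands of generated messages at the system while a human tester explores alongside. The endpoint and payload fields are made up.

    # Rough sketch: high-volume test automation as a simulation driver.
    # A human tester explores the running application while this floods it
    # with generated transactions, surfacing problems neither finds alone.
    require "net/http"
    require "json"
    require "uri"

    ENDPOINT = URI("http://localhost:8080/transactions")  # assumed test endpoint

    10_000.times do |i|
      payload  = { id: i, amount: rand(1..500), currency: "CAD" }.to_json
      response = Net::HTTP.post(ENDPOINT, payload, "Content-Type" => "application/json")
      warn "Unexpected status #{response.code} on message #{i}" unless response.code == "200"
    end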

Don’t get me wrong – I’m a practitioner and supporter of test automation, but I am frustrated by reckless test automation. As Donald Norman reminds us, we can automate some human tasks with technology, but we lose something when we do. In the case of test automation, we lose thoughtful, flexible, adaptable, “agile” testing. In some tasks, the computer is a clear winner over manual testing. (Remember that the original “computers” were humans doing math – specifically calculations. Technology was used to automate computation because it is a task we weren’t doing so well at. We created a machine to overcome our mistakes, but that machine is still not intelligent.)

Here’s an example. On one application I worked on, it took testers close to two weeks to do manual credit card validation. This work was error-prone (we aren’t that great at number crunching, and we tire of repetitive tasks). We wrote a simple automated test suite to do the validation, and it took about a half hour to run. We then complemented the automated test suite with thoughtful manual testing. After an hour and a half of both automated testing (pure number crunching) and manual testing (usually scenario testing), we had a lot of confidence in what we were doing. We found this combination much more powerful than pure manual testing or pure automated testing. And it was faster than the old way as well.
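The suite itself was nothing fancy. Here is a minimal sketch of the idea, with a Luhn checksum standing in for the real validation rules (which were more involved):

    # Minimal sketch of an automated validation suite. The Luhn checksum
    # stands in for the real card validation rules.
    require "minitest/autorun"

    def luhn_valid?(number)
      digits = number.to_s.delete("^0-9").chars.map(&:to_i).reverse
      sum = digits.each_with_index.sum do |d, i|
        i.odd? ? (d * 2 > 9 ? d * 2 - 9 : d * 2) : d  # double every second digit
      end
      (sum % 10).zero?
    end

    class CardValidationTest < Minitest::Test
      def test_known_good_number
        assert luhn_valid?("4539578763621486")
      end

      def test_single_digit_typo_is_caught
        refute luhn_valid?("4539578763621487")
      end
    end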

When automating, look at what you gain and what you lose by automating a test. Remember, until computers become intelligent, we can’t automate testing, only tasks related to testing. Also, as we move further away from the code context, it usually becomes more difficult to automate tests, and the trade-offs have greater implications. It’s important to design automated tests to meet team goals, and to be aware of the potential for enormous maintenance costs in the long term.

Please don’t become reckless trying to fulfill an ideal of “100% test automation”. Instead, find out what the goals of the company and the team are, and see how all the tools at your disposal, including test automation can be harnessed to help meet those goals. “Test automation” is not a solution, but one of many tools we can use to help meet team goals. In the end, reckless test automation leads to feckless testing.

Update: I talk more about alternative automation options in my Man and Machine article, and in chapter 19 of the book Experiences of Test Automation.

Testers and Independence

I’m a big fan of collaboration within software development groups. I like to work closely with developers and other team members (particularly documentation writers and customers, who can be great bug finders), because we get great results that way.

Here are some concerns I hear from people who aren’t used to this:

  • How do testers (and other critical thinkers) express critical ideas?
  • How can testers integrated into development teams still be independent thinkers?
  • How can testers provide critiques of product development?

Here’s how I do it:

1) I try very hard to be congruent.

Read Virginia Satir’s work, or Weinberg’s Quality Software Management series for more on congruence. I work on being congruent by asking myself these questions:

  • “Am I trying to manipulate someone (or the rest of the team) by what I’m saying?”
  • “Am I not communicating what I really think?”
  • “Am I putting the process above people?”

Sounds simple, but it goes a long way.

We can be manipulative on agile teams as well. If I want a certain bug to be fixed that isn’t being addressed, I can subtly alter my status at a daily standup to give it more attention (which will eventually backfire), or I can be congruent, and just say: “I really want us to focus on this bug.”

Vocalizing even a small concern when the rest of the team is going another direction is worthwhile; whenever I don’t, we end up with problems. It helps me retain my independence as an individual working in a team. If everyone does it, we get diverse opinions, and hopefully diverse views on potential risks, instead of getting wrapped up in groupthink. Read Brenner’s Appreciate Differences for more on this.

Sometimes we ignore our intuition and doubts when we are following the process. For example, we may get annoyed when we feel someone else is violating one of the 12 practices of XP. We may harp on them about not following the process instead of finding out what the problem is. I have seen this happen frequently, and I’ve been on teams that were disasters because we had complete faith in the process (even with Scrum and XP) and forgot about the people. How did we put the process over people? Agile methods are not immune to this problem. On one project, we ran around saying “that isn’t XP” when we saw someone doing something that didn’t fit the process. In most cases what they were doing was good work, and our harping turned out to be a manipulative way of dealing with something we saw as a problem. In the end, some of those practices were good ones that should have been retained in that context, on that team. They weren’t “textbook” XP, but the people with the ideas knew what they were doing, not the inanimate “white book”.

2) I make sure I’m accountable for what I’m doing.

Anyone on the team should be able to come up and ask me what I’m doing as a tester, and I should be able to clearly explain it. Skilled testing does a lot to build credibility, and an accountable tester will be given freedom to try new ideas. If I’m accountable for what I’m doing, I can’t hide behind a process or what the developers are doing. I need to step up and apply my skills where they will add value. When you add value, you are respected and given more of a free rein on testing activities that might have been discouraged previously.

Note: By accountability, I do not mean lots of meaningless “metrics”, charts, graphs and other visible measurement attempts that I might use to justify my existence. Skilled testing and coherent feedback will build real credibility, while meaningless numbers will not. Test reports that are meaningful, kept in check by qualitative measures, developed with the audience in mind, and actually useful will do more to build credibility than generating numbers for numbers’ sake.

3) I don’t try to humiliate the programmers, or police the process.

(I now see QA people making a career out of process policing on Agile teams.) If you are working together, technical skills should be rubbing off on each other. In some cases, I’ve seen testing become “cool” on a project; on one project not only the testers but also the developers, the BA and the PM were testing. Each used their unique skills to help generate testing ideas and engage in testing. This in turn gave the testers more credibility when they wanted to try out different techniques that could reveal potential problems. Now that all the team members had a sense for what the testers were going through, more effort was made to enhance testability. Furthermore, the discovery of potential problems was now encouraged rather than feared. The whole team really bought into testing.

4) I collaborate even more, with different team members.

When I find I’m getting stale with testing ideas, or I’m afraid I’m getting sucked into groupthink, I pair with someone else. Lately, a customer representative has been a real catalyst for my testing. Whenever we work together, I get a new perspective on project risks driven by what is going on in the business, and they find problems I’ve missed. This helps me generate new ideas for testing in areas I hadn’t thought of.

Sometimes working with a technical writer, or even a different developer, instead of the developer(s) you usually work with helps you get a new perspective. This ties into the accountability thought as well. I’m accountable for what I’m testing, but so is the rest of the team. Sometimes fun little pretend rivalries will occur: “Bet I can find more bugs than you.” Or “Bet you can’t find a bug in my code in the next five minutes.” (In this case the developer beat me to the punch by finding a bug in his own code through exploratory testing beside me on another computer, and then gave me some good-natured ribbing about him being the first one to find a bug.)

Independent thinkers need not be in a separate, independent department kept at arm’s length. The need for an independent testing department is not something I accept unquestioningly; in fact, I have found more success through collaboration than isolation. Your mileage will vary.

Testing and Writing

I was recently involved on a project and noticed something interesting. I was writing documentation for a particular feature, and had a hard time expressing it for a user guide. We had made a particular design decision that made sense, and it worked intuitively, but it was difficult to write about coherently. As time went on, we discovered through user feedback that this feature was confusing. I remembered how I had struggled to write about it, and shared this experience with my wife, who worked as a technical writer for several years. She replied that this is very common: she and other writers found a lot of bugs simply because a software feature was awkward to write about.

I’ve found that story cards in XP or Scrum backlog items can suffer from incoherence, like any other written form. Incoherently written requirements, stories, use cases, etc. are frequently a sign that there is a problem in the design. There is something about actually writing down thoughts that helps us identify incoherence. While they remain thoughts, or are expressed verbally, we can defend problem spots with clarifications. When they are written down, we see them in a different light. What we think we understand may not be so clear when we capture our thoughts on paper.

Some of the most effective testers I’ve worked with were technical writers. They had a different perspective on the project than the rest of us because they had to understand the software and write about it in a way an end-user would understand. They found problems in the design when they struggled to write about a feature. Hidden assumptions can come to light suddenly through capturing ideas on paper and reading them. When testing, try writing down design ideas as well as testing ideas, and see if what you write is coherent. Incoherent expressions of product features or test ideas can tell you a lot about development and testing.

The Role of a Tester on Agile Projects

I’ve stopped using this phrase: “the role of a tester on agile projects”. There has been endless debate over whether there should be dedicated testers on agile projects, and endless debate doesn’t appeal to me. I’ve been on enough agile teams now to see that testers can add value in pretty much the same way they always have. They provide a service to a team and customers that is information-based. As James Bach says: “testing lights the way.” Programmer testing and tester testing complement each other, and agile projects can provide an ideal environment for an amazing amount of collaboration and testing. It can be hard to break into some agile projects though, when many agilists seem to view conventional testing with indifference or in some cases with disdain. (Bad experiences with the QA Police don’t help, so the testing community bears some responsibility for this poor view of testing.)

What I find interesting is hearing experiences from people who fit in and contributed on Agile teams. I like to hear stories about how a tester worked on an XP team, or how a business analyst fit in and thrived on an Agile team. One of the most fascinating stories I’ve encountered was a technical writer who worked on an XP team. It was amazing to find out how they adapted and filled a “team member role” to help get things done. It’s even more fascinating to find how the roles of different team members change over time, and how different people learn new skills and roll up their sleeves to get things done. Specialists in areas such as user experience work, testing and technical writing have a lot of knowledge that Agile team members can learn from; in turn, those specialists can learn a tremendous amount from Agilists. This collaborative learning helps move teams forward and can really push a team’s knowledge envelope. When people share successes and failures of what they have tried, we all benefit.

I like to hear of people who work on a team in spite of being told “dedicated testers aren’t in the white book”, or “our software doesn’t need documentation”. I’m amazed at how adaptable the smart, talented people are who are blazing trails and adding value on teams with challenging and different constraints. A lot of the “we don’t need your role” discussion sounds like the same old arguments testers, technical writers, user experience folks and others have been hearing for years from non-Agilists. Interestingly enough, those who work on agile projects often report that in spite of initial resistance, they manage to fit in and thrive once they adapt. Those who were their biggest opponents at the beginning of a project often become their biggest supporters. This says to me that there are smart, capable people from a lot of different backgrounds who can offer something to any team they are a part of.

The Agile Manifesto says: “we value: Individuals and interactions over processes and tools.” Why then is there so much debate over the “tester role”? Many Agile pundits dismiss having dedicated testers on Agile projects. I often hear: “There is no dedicated tester role needed on an Agile team”. Isn’t excluding a “role” from a team putting a process over people? If someone joins an Agile team who is not a developer, and they believe in the values of the methodology and want to work with a great team, do we turn them away? Shouldn’t we embrace the people even if the particular process we are following does not spell out their duties?

I would like to see people from non-developer roles encouraged to try working on agile teams and to share successes and failures so we can all benefit. I still believe software development has a long way to go, and we should try to improve every process. A danger of codifying and spreading a process is that it doesn’t have all the answers for every team. When we don’t have all the answers, we need to look at the motivations and values behind a process. For example, what attracted me most to XP were the values. My view on software processes is that we should embrace the values, and use them as a base from which to strive for constant improvement. That means we experiment and push ideas forward, fight apathy and hubris, and, as Deming said, drive out fear.

Occam’s Butter Knife

When I work in test automation, I always seek the simplest solution that will work for the task at hand. Extreme Programming’s STTCPW (the simplest thing that could possibly work) and YAGNI (you aren’t going to need it) are principles I’ve applied to test automation. Occam’s Razor is a principle that can be interpreted in the same spirit as STTCPW. Simple tests are something I value because I don’t:

  • Want complex test code that can be a source for bugs
  • Have a lot of time for maintenance
  • Wish to have tests I can’t throw away easily

Recently, while automating a complex functional test for a tester, I ran into a difficult problem. I was retrofitting a test into a framework and couldn’t figure out how to translate the model into automated test code. I struggled for a while, then paired with the tester (who was a non-programmer) and described my problem. I was thinking about the Decorator pattern, data structures and algorithms, but couldn’t make the model fit. I was trying to get a simple solution together, but I had constraints.

The tester patiently listened as I explained my problem, diagrammed on a white board, explained what I was trying to do, and then, when I was finished, said: “Forgive me for stating the obvious…” and paused. I said, “What may be obvious to you may be something I haven’t considered, please continue.” The tester explained that I had already created that model when I automated test data generation, and asked why I couldn’t re-use it. They were right on the money. The solution didn’t involve any design patterns or clever tricks; I just had to go grab the data from a different area of the test. The solution was sitting right there in front of me, but I was looking at the problem in the wrong way.
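In hindsight, the fix looked something like this sketch. The names are hypothetical and the framework is simplified; the point is re-using the existing data generation model instead of building a new one.

    # Hypothetical sketch: the test data generator already embodied the
    # model I was trying to rebuild with design patterns.
    class TestDataGenerator
      def initialize(seed)
        @rng = Random.new(seed)
      end

      # Already written for test setup: generates the order data used to
      # drive the application.
      def generate_orders(count)
        Array.new(count) { { quantity: @rng.rand(1..10), unit_price: @rng.rand(100..999) } }
      end
    end

    generator = TestDataGenerator.new(42)
    orders = generator.generate_orders(5)

    # The "model" I was struggling to build was already here: the expected
    # results fall straight out of the generated data.
    expected_totals = orders.map { |o| o[:quantity] * o[:unit_price] }
    puts expected_totals.inspect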

Non-programming testers can offer a lot to development efforts. Their minds are not cluttered with design patterns, xUnit frameworks, data structures and algorithms, which allows them to see things in a different way. When I’m up to my eyeballs in code, I sometimes stop thinking like a tester. In spite of my wish for a simple design, I had needlessly overcomplicated my automated test development. So much for my attempt to apply Occam’s Razor – I needed a tester to help me see the simple solution. When I’m a tester working in test automation, I sometimes lose that “tester’s perspective”. Collaboration is key, and it also helps me when I’m not particularly sharp.

Ted Talks About the Customer

Ted O’Grady has an interesting post on the customer and their role on XP teams. I agree with what he has said.

It’s also important to think of the context that the software is being used in. I learned a lesson about getting users to test our software in our office vs. getting them to use the software in their own office. The Hawthorne Effect seemed to really kick in when they were on-site with us. When we observed them using the software in their own business context to solve real-world problems, they used it differently.

If the customer is out of their regular context, that may have an effect on their performance on an agile team, especially if they feel intimidated by a team of techies who outnumber them. When they are in their own office, and their team outnumbers the techies, they might be much more candid with constructive criticism. Just having a customer on-site doing the work with the team doesn’t guarantee they will approve of the final product. Often I think there is a danger they will approve of what the team thinks the final product should be. When we had rotating customer representatives on a team, groupthink was greatly reduced.

Gerard Meszaros on Computer Assisted Testing

At last night’s Extreme Tuesday Club meeting, Gerard Meszaros had a great way of describing what I do when I combine automated Ruby scripts with exploratory testing. (See my blog post about this here: Test Automation as a Testing Tool.)

As I described using a Ruby script to automate some parts of the application I’m testing, and then taking over manually with exploratory testing, Gerard said:

Sounds like you are using your Ruby program to set up the test fixture that you will then use as a starting point for exploratory testing.

I hadn’t thought of what I was doing in this way before, and I like the idea. It is a succinct way to explain how the strengths of a testing tool and a brain-engaged tester complement each other.
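A bare-bones sketch of the pattern might look like this, assuming a Watir-driven web application (the URLs and field names are made up):

    # Rough sketch: automate the boring setup, then hand over to a human.
    # Assumes a Watir-driven web application; URLs and fields are hypothetical.
    require "watir"

    browser = Watir::Browser.new

    # Automated part: drive the application to an interesting state quickly.
    browser.goto("http://localhost:3000/login")
    browser.text_field(id: "username").set("test_user")
    browser.text_field(id: "password").set("secret")
    browser.button(id: "login").click
    browser.goto("http://localhost:3000/orders/new")

    # Manual part: leave the browser open and let the tester explore from
    # this fixture, brain engaged.
    puts "Fixture is set up. Explore away; press Enter here when finished."
    gets
    browser.close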

Check out Gerard’s Patterns of xUnit Test Automation site for more of his thoughts on testing.