Category Archives: extreme programming

Who Do We Serve?

Bob and Doug are programmers on an Extreme Programming team. Their customer representative has written a story card for a new feature the company would like to see developed in the system to help with calculations undertaken by the finance group. Currently, the finance group has spreadsheets and a manual process that is time-consuming and somewhat error-prone. Bob and Doug pair up to work on the story. They feel confident – they are both experienced with Test-Driven Development, and they constantly use design patterns in their code.

They sit at a machine, check out the latest version of the code from source control, and start to program. They begin test-first: writing a test, then shaping the code to make it pass and meet the feature request. Over time, they accumulate a good number of test cases and corresponding code. Running their automated unit test harness gives them feedback on whether their tests pass. Sometimes the tests fail sporadically for no apparent reason, but over time the failures appear less frequently.
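
To make the test-first rhythm concrete, here is a minimal sketch in Ruby (not Bob and Doug’s actual code; `FinanceCalculator` and `monthly_total` are hypothetical names): the test is written first, fails, and then just enough code is written to make it pass.

```ruby
require "minitest/autorun"

# Step 1: write the test first. It fails (red) until the class below exists.
class FinanceCalculatorTest < Minitest::Test
  def test_monthly_total_sums_line_items_in_cents
    calc = FinanceCalculator.new
    assert_equal 15_000, calc.monthly_total([10_000, 3_000, 2_000])
  end
end

# Step 2: write just enough code to make the test pass (green).
class FinanceCalculator
  def monthly_total(line_item_cents)
    line_item_cents.sum
  end
end
```

Each green bar invites the next test, and the suite grows alongside the code.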

As they are programming and have their new suite of automated test cases passing, they begin refactoring. They decide to refactor to a design pattern that seems suitable. It takes some rework, but the code seems to meet the pattern definition and all the automated unit tests are passing. They check the code in, and move on to a new story.

The next day, they demonstrate the feature to the customer representative, but the customer representative isn’t happy with it. It feels awkward to them. Bob and Doug explain which pattern they used; they talk about facades, persistence layers, and inversion of control, and they demonstrate that the automated unit tests all pass by showing the green bar in their IDE. The customer representative eventually, reluctantly, agrees that the feature has been implemented correctly.

Months later, the finance department complains that the feature is wrong almost half of the time, and is causing them to make some embarrassing mistakes. They aren’t pleased. Wasn’t XP supposed to solve these kinds of problems?

Looking into the problem revealed that the implementation was incomplete – several important scenarios had been missed. A qualitative review of the automated tests showed that they were quite simple, worked only at the code level, and that no tests had been run in a production-like environment. Furthermore, no thought had been put into designing for the future, when the system would be under more load from increased data and usage. “That isn’t the simplest thing that could possibly work!” Bob and Doug retorted. However, the team knew that data load would greatly increase over the coming months. If they weren’t going to deal with that inevitability in the design right then, they needed to address it soon afterwards. They never did, and blamed the non-technical customer representative for not writing a story to address it.

Finally, when the customer representative tried to express that something wasn’t quite right with the design, they were outnumbered and shot down by technical explanations. One representative against six technical programmers armed with jargon and technical bullying didn’t stand much of a chance. After the problem was discovered in production, Bob and Doug admitted that things had felt a little awkward even to them when they initially implemented the code for the story, but since they were following the practices by the book, they ignored signs to the contrary. The automated tests and textbook refactoring to patterns had given them a false sense of security. In their eyes, the automated tests trumped the manual tests done by a human, an actual user.

What went wrong? Bob and Doug measured their project success by their adoption of their preferred development practices. They worried more about satisfying their desire to use design patterns and to follow Test-Driven Development and XP practices than they did about the customer. Oh sure, they talked about providing business value, worked with the customer representative on the team, and wrote a lot of automated unit tests. But the test strategy was sorely lacking, and their blind faith in test automation let them down.

Due to this incident, the development team lost credibility with the business and faced a difficult uphill battle to regain their trust. In fact, they had been so effective at marketing these development methods as the answer to all of the company’s technical problems that “Agile” was blamed, and to many in the business, it became a dirty word.

Sadly, this kind of scenario is far too common. On software development teams, we seem especially susceptible to the lure of silver-bullet solutions. Sometimes it seems we try to force every programming problem into a pattern, whether it needs one or not. We suggest tools to take care of testing, or deployments, or any of the tasks that we don’t understand that well, or don’t enjoy as much as programming. We expect that following a process will see us through, particularly a good one like Extreme Programming.

Through all of this we can lose sight of what is really important – satisfying and impressing the customer. In the end, the process doesn’t matter as much as getting that result. The process and tools we use are our servants, not our masters. If we forget that and stop thinking about what we’re trying to solve, and who we are trying to solve it for, the best tools and processes at our disposal won’t help us.

Post-Agilism: Process Skepticism

I’m a fan of Agile practices, and I draw on values and ideas from Scrum and XP quite often. However, I have become a process skeptic. I’m tired of what a colleague complained to me about several years ago: “Agile hype”. I’m tired of hearing “Agile this” and “Agile that”, especially when some of what is now branded as “Agile” was branded as “Total Quality Management”, or <insert buzzword phrase here>, in the past. It seems that some are flogging the same old solutions and merely tacking “Agile” onto them to help with marketing. It looks like Robert Martin was right when he predicted this would happen.

More and more, I am meeting people who, like me, are quiet Agile skeptics. I’m not talking about people who have never been on Agile projects and take shots from the outside. I’m talking about smart, capable, experienced Agilists. Some were early adopters who taught me Agile basics. While they were initially taken with Agile as a concept, their experience has shown them that the hype doesn’t always hold up. Instead of slavishly sticking to Agile process doctrines, they use what works in the context they are currently working in. Many cite the Agile Manifesto and the values of XP as the guide for the work they do, but they find they don’t identify with the hype surrounding Agile methods.

That said, they don’t want to throw out the practices that work well for them. They aren’t interested in turning back the clock, or implementing heavyweight processes in reaction to the hype. While they like Agile practices, some early adopters I’ve talked to don’t really consider themselves “Agile” anymore. They sometimes muse aloud, wondering what the future might hold. “We’ve done that, enjoyed it, had challenges, learned from it, and now what’s next?” Maybe that’s the curse of the early adopter.

They may use XP practices, and wrap their projects in Scrum as a project management tool, but they aren’t preachy about it. They just stick with what works. If an Agile practice isn’t working on a project, they aren’t afraid to throw it out and try something else. They are pragmatic. They are also zealous about satisfying and impressing the customer, not about selling “Agile”. In fact, “Agile” zealotry puts them off.

They also have stories of Agile failures, and can usually describe a watershed moment where they set aside their Agile process zeal and did whatever it took to complete a project and make customers happy. To them, this is what agility is really about: being “agile” in the dictionary sense, instead of “Agile” in the marketing sense.

Personally, I like Agile methods, but I have also seen plenty of failures. I’ve witnessed that merely following a process, no matter how good it is, does not guarantee success. I’ve also learned that no matter what process you follow, if you don’t have skilled people on the team, you are going to find it hard to be successful. A friend of mine told me about a former manager who was in a state of perpetual amazement. The manager felt that the process they had adopted was the key to their success, and enforced adherence to it. However, when anything went wrong (which happened frequently), he didn’t know what to do. Their project was in a constant state of disarray. I’ve seen this same behavior on Agile projects as well. (In fact, due to the blind religiosity of some Agile proponents, some Agile projects may be more prone to it than we realize.) Too many times we look to the process to somehow provide the perfect answer instead of just using our heads and doing what needs to be done. “Oh great oracle, the white book! Please tell us what to do!” may be the reaction to uncertainty, instead of using the values and principles in that book as a general guideline to help solve problems and improve processes.

I have also seen people on Agile teams get marginalized because they aren’t Agile enough. While I appreciate keeping a process pure (such as really doing XP, rather than calling whatever you are doing “Agile” because it’s cool right now), sometimes you have to break the rules and get a solution out the door. I have seen people pushed out of projects because they “didn’t get Agile”. I have seen good solutions turned away because they weren’t in the white book (Extreme Programming Explained). I’ve also seen reckless, unthinking behavior justified because it was “Agile”. In one case, I saw a manager pull out the white book and use a paragraph from it in an attempt to justify not fixing bugs. I was reminded of a religious nut pulling quotes out of context from a sacred book to justify some weird doctrine contrary to what the original author intended.

Here’s a spin on something I wrote earlier this spring:

In an industry that seems to look for silver-bullet solutions, it’s important for us to be skeptics. We should be skeptical of what we’re developing, but also of the methodologies, processes, and tools on which we rely. We should continuously strive for improvement.

I call this brand of process skepticism exhibited by experienced Agilists “Post-Agilism”. I admit that I have been an Agile process zealot in the past, and I don’t want to fall into that trap again. I’ve learned that skill and effective communication are much more powerful, so much so that they transcend methodologies. There are also pre-Agile ideas that are just fine to pick from. Why throw out something that works just because it isn’t part of the process that’s currently the most popular?

Test your process, and strive for improvement. Be skeptical, and don’t be afraid to follow good Post-Agilist thinking. Some good Post-Agilist thinkers are former process zealots who know and enjoy Agile development, but no longer throw out ideas just because they aren’t generally accepted by Agilists. Others are just plain pragmatic sorts who saw the Agile hype early on and decided to do things on their own terms. They took what they found effective from Agile methods, and didn’t worry about selling it to anyone.

Post-Agilist thinkers just don’t see today’s “Agile” as the final word on software development. They want to see the craft move forward and generate new ideas. They feel that anything that stifles new ideas (whether “Agile-approved” or not) should be viewed with suspicion. They may benefit from tools provided by Agile methods, but they are more careful now when it comes to evaluating the results of their process. In the past, many of us measured our degree of process adoption as success, sometimes forgetting to look at the product we were developing.

What does the post-Agile future hold? Will individuals adapt and change Agile method implementations as unique needs arise? Will “Agile” go too far and become a dirty word like Business Process Re-engineering? Or will it indeed be the way software is widely developed? Or will we see a future of people picking what process works for them, while constantly evaluating and improving upon it, without worrying about marketing a particular development methodology? I certainly hope it will be the latter. What do you think?

Edit: April 28, 2007. I’ve added a Post-Agilism Frequently Asked Questions post to help explain more. Also see Jason Gorman’s Post-Agilism – Beyond the Shock of the New.

Reckless Test Automation

The Agile movement has brought some positive practices to software development processes. I am a huge fan of frequent communication, of delivering working software iteratively, and of strong customer involvement. Of course, before “Agile” became a movement, a self-congratulating community, and a fashionable term, there were companies following “small-a agile” practices. Years ago, in the ’90s, I worked for a startup with a CEO who was obsessed with iterative development, frequent communication, and customer involvement. The Open Source movement was an influence on us at that time, and today we have the Agile movement helping create a shared language and a community of practice. We certainly could have used principles from Scrum and XP back then, but we were effective with what we had.

This software business involves trade-offs, though, and for all the good we can get from Agile methods, vocal members of the Agile community have done testing a great disservice by emphasizing some old testing folklore. One of these concepts is “automate all tests”. (Some claimed agilists have the misguided gall to assert that manual testing is harmful to a project. Since when did humans stop using software?) Slavishly trying to reach this ideal often results in what I call Reckless Test Automation. Mandates of “all”, “everything”, and other universal qualifiers are ideals, and without careful, skillful implementation they can promote thoughtless behavior that hinders goals and needlessly costs a lot of money.

To be fair, the Agile movement says nothing officially about test automation to my knowledge, and I am a supporter of the points of the Agile Manifesto. However, the “automate all tests” idea has been repeated so often and so loudly in the Agile community that I am starting to hear it equated with so-called “Agile Testing” as I work in industry. In fact, I am now starting to do work to help companies undo problems associated with over-automation. They find they are unhappy with results over time while trying to follow what they interpret as an “Agile Testing” ideal of “100% test automation”. Instead of an automation utopia, they find themselves stuck in a maintenance quagmire of automation code and tools, and the product quality suffers.

The problems, like the positives of the Agile movement, aren’t really new. Before Agile was all the rage, I helped a company that had spent six years developing automated tests. They had bought the lie that vendors and consultants spouted: “automate all tests, and all your quality problems will be solved”. In the end, they had three test cases developed, averaging 18,000 lines of code each; no one knew what their intended purpose was or what they were supposed to be testing, only that it was very bad if they failed. Trouble was, they failed a lot, and it took testers anywhere from three to five days to hand-trace the code to track down failures. Excessive use of unrecorded random data sometimes made this impossible. (Note: random data generation can be an incredibly useful tool for testing, but like anything else, it should be applied thoughtfully.) I talked with decision makers and executives, and the whole point of buying and implementing a tool had been to shorten the feedback loop. In the end, the tool greatly lengthened the testing feedback loop, and worse, the testers spent all of their time babysitting and maintaining a brittle, unreliable tool instead of doing real, valuable testing.
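
The unrecorded random data problem, at least, has a cheap fix: log the seed (and the generated data) with every run, so any failure can be replayed exactly. Here is a minimal sketch in Ruby, with a placeholder test, assuming nothing about their actual tool:

```ruby
require "minitest/autorun"

# Record the seed so random test data is reproducible. Re-running with
# SEED=<logged value> replays the exact data from a failing run.
SEED = (ENV["SEED"] || Random.new_seed).to_i
warn "Random seed for this run: #{SEED}"
RNG = Random.new(SEED)

class RandomDataTest < Minitest::Test
  def test_with_logged_random_data
    amount = RNG.rand(1..10_000)         # random, but replayable
    warn "Testing with amount=#{amount}" # log the data, not just the seed
    assert amount.between?(1, 10_000)    # placeholder assertion
  end
end
```

Tracking down a failure then starts from the logged values instead of days of hand-tracing.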

How did I help them address the slow testing feedback loop? First, I de-emphasized relying completely on test automation, and encouraged more manual, systematic exploratory testing that was risk-based and speedy. This tightened up the feedback loop, and now that we had intelligence behind the tests, bug report numbers went through the roof. Next, we reduced the automation stack and implemented new tests designed for quick feedback and lower maintenance. We used the tool to complement what the skilled human testers were doing. We were very strict about what we automated, asking: “What do we potentially gain by automating this test? And, more importantly, what do we lose?” The results? Feedback on builds was reduced from days to hours, and we had same-day reporting. We also had much better bug reports and, frankly, much better overall testing.

Fast-forward to the present time. I am still seeing thoughtless test automation, but this time under the “Agile Testing” banner. When I see reckless test automation on Agile teams, the behavior is the same; only the tools and ideals have changed. My suggested path toward solutions is the same: de-emphasize thoughtless test automation in favor of intelligent manual testing, and be smart about what we try to automate. Can a computer do this task better than a human? Can a human do it with results we are happier with? How can we harness the power of test automation to complement intelligent humans doing testing? Can we get test automation to help us meet overall goals instead of thoughtlessly trying to fulfill something a pundit says in a book, a presentation, or on a mailing list? Are our test automation efforts saving us time and providing the team the feedback they need, or are they hindering us? We need to constantly measure the effectiveness of our automated tests against team and business goals, not “percentage of tests automated”.

In one “Agile Testing” case, a testing team spent almost all of their time working on an automation effort. An Agile Testing consultant had told them that if they automated all their tests, it would free up their manual testers to do more important testing work. They had automated the user acceptance tests, and were trying to automate all the manual regression tests to speed up releases. One release went out after the automated tests all passed, but it had a show-stopping, high-profile bug that was an embarrassment to the company. The automated tests passed, but they couldn’t spot something suspicious and explore the behavior of the application. In this case, the bug was so obvious that a half-decent manual tester would have spotted it almost immediately. Getting a computer to spot the problem through investigation would have required artificial intelligence, or a very complex fuzzy-logic algorithm in the test automation suite, to replace one quick, simple, inexpensive, adaptive, yet powerful human test. The automation wasn’t freeing up time for testers; it had become a massive maintenance burden over time, so there was little human testing going on other than superficial reviews by the customer after sprint demos. Automation was king, so human testing was de-emphasized and even looked on as inferior.

In another case, developers were so confident in their TDD-derived automated unit tests that they had literally gone for months without any functional testing, other than occasional acceptance tests by a customer representative. When I started working with them, they first defied me to find problems (in a joking way), and then were completely flabbergasted when my manual exploratory testing did find problems. They would point wide-eyed to the green bar in their IDE signifying that all their unit tests had passed. They were shocked that simple manual test scenarios could bring the application to its knees, and it took quite a while to get them to do some manual functional testing alongside their automated testing. It took them a while to set their automation dogma aside, become more pragmatic, and then figure out how to incorporate important issues like state into their test efforts. When they did, I saw a marked improvement in the code they delivered once stories were completed.

In another “Agile Testing” case, the testing team had put enormous effort into automating regression tests and user acceptance tests. Before they were through, they had more lines of code in the test automation stack than in the product it was supposed to be testing. Guess what happened? The automation stack became buggy, unwieldy, and unreliable, and displayed the same problems any software development project suffers from. In this case, the automation was done by the least skilled programmers, with a much smaller staff than the development team. To counter this, we did more well-thought-out and carefully planned manual exploratory testing, and threw out buggy, regression-focussed automation code. A lot of those tests should never have been automated in that context, because a human is much faster and far superior at many kinds of tests. Furthermore, we found that the entire test environment had been optimized for the automated tests. The inherent system variability that the computers couldn’t handle (but humans could!), not to mention quick visual tests (computers don’t do these well), had been factored out as much as possible. We did not have a system in place that was anything close to what any of our customers used, but the automation worked (somewhat). Scary.

After some rework on the testing process, we found it cheaper, faster, and more effective to have humans do those tests, and we focussed more on leveraging the tool to help achieve the goals of the team. Instead of trying to automate the manual regression tests that were originally written for human testers, we relied on test automation to provide simulation. Running simulators and manual testing at the same time was a powerful investigative tool. Combining simulation with observant manual testing revealed false positives in some of the automated tests, which had been unwittingly released to production in the past. We even extended our automation to include high-volume test automation, and we greatly increased our test effectiveness by really taking advantage of the power of tools. Instead of trying to replicate human activities, we automated things that computers are superior at.
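
As a rough illustration of the simulation idea (a sketch only; the endpoint and payload are hypothetical), a small Ruby driver can generate steady, reproducible background traffic while a human explores the application at the same time:

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical simulation driver: steady, reproducible background
# transactions while a human tester explores the application alongside.
seed = (ENV["SEED"] || Random.new_seed).to_i
rng  = Random.new(seed)
uri  = URI("http://localhost:8080/transactions") # assumed test endpoint
puts "Simulation seed: #{seed}"

1_000.times do |i|
  payload = { id: i,
              amount: rng.rand(1..5_000),
              type: %w[debit credit].sample(random: rng) }
  response = Net::HTTP.post(uri, payload.to_json,
                            "Content-Type" => "application/json")
  puts "#{i}: #{payload.inspect} -> #{response.code}" # correlate with manual findings
  sleep 0.1 # steady background load, not a denial of service
end
```

The log lets the manual tester correlate anything suspicious they observe with the exact simulated traffic at that moment.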

Don’t get me wrong – I’m a practitioner and supporter of test automation, but I am frustrated by reckless test automation. As Donald Norman reminds us, we can automate some human tasks with technology, but we lose something when we do. In the case of test automation, we lose thoughtful, flexible, adaptable, “agile” testing. In some tasks, the computer is a clear winner over manual testing. (Remember that the original “computers” were humans doing math, specifically calculations. Technology was used to automate computation because it was a task we weren’t doing well. We created a machine to overcome our mistakes, but that machine is still not intelligent.)

Here’s an example. On one application I worked on, it took testers close to two weeks to do manual credit card validation. This work was error-prone (we aren’t that great at number crunching, and we tire of repetitive tasks). We wrote a simple automated test suite to do the validation, and it took about half an hour to run. We then complemented the automated suite with thoughtful manual testing. After an hour and a half of both automated testing (pure number crunching) and manual testing (usually scenario testing), we had a lot of confidence in what we were doing. We found this combination much more powerful than pure manual testing or pure automated testing. And it was faster than the old way as well.
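
To give a flavour of that suite (a sketch in the same spirit, not the actual code), here is the kind of pure number crunching a computer does far better than a tired human: the Luhn checksum, the standard check-digit algorithm for card numbers.

```ruby
require "minitest/autorun"

# Luhn checksum: double every second digit from the right, subtract 9
# from any doubled result over 9, and the total must be divisible by 10.
def luhn_valid?(number)
  digits = number.to_s.chars.map(&:to_i).reverse
  sum = digits.each_with_index.sum do |d, i|
    i.odd? ? (d * 2 > 9 ? d * 2 - 9 : d * 2) : d
  end
  (sum % 10).zero?
end

class LuhnTest < Minitest::Test
  def test_known_valid_test_number
    assert luhn_valid?("4111111111111111") # a well-known test card number
  end

  def test_single_digit_typo_is_caught
    refute luhn_valid?("4111111111111112")
  end
end
```

Run against thousands of numbers, this takes seconds; done by hand, it takes weeks and invites mistakes.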

When automating, look at what you gain by automating a test, and what you lose. Remember, until computers become intelligent, we can’t automate testing, only tasks related to testing. Also, as we move further away from the code context, it usually becomes more difficult to automate tests, and the trade-offs have greater implications. It’s important to design automated tests to meet team goals, and to be aware of the potential for enormous maintenance costs in the long term.

Please don’t become reckless trying to fulfill an ideal of “100% test automation”. Instead, find out what the goals of the company and the team are, and see how all the tools at your disposal, including test automation can be harnessed to help meet those goals. “Test automation” is not a solution, but one of many tools we can use to help meet team goals. In the end, reckless test automation leads to feckless testing.

Update: I talk more about alternative automation options in my Man and Machine article, and in chapter 19 in the book: Experiences of Test Automation.

Testers and Independence

I’m a big fan of collaboration within software development groups. I like to work closely with developers and other team members (particularly documentation writers and customers who can be great bug finders), because we get great results by working closely together.

Here are some concerns I hear from people who aren’t used to this:

  • How do testers (and other critical thinkers) express critical ideas?
  • How can testers integrated into development teams still be independent thinkers?
  • How can testers provide critiques of product development?

Here’s how I do it:

1) I try very hard to be congruent.

Read Virginia Satir’s work, or Weinberg’s Quality Software Management series for more on congruence. I work on being congruent by asking myself these questions:

  • “Am I trying to manipulate someone (or the rest of the team) by what I’m saying?”
  • “Am I not communicating what I really think?”
  • “Am I putting the process above people?”

Sounds simple, but it goes a long way.

We can be manipulative on agile teams as well. If I want a certain bug to be fixed that isn’t being addressed, I can subtly alter my status at a daily standup to give it more attention (which will eventually backfire), or I can be congruent, and just say: “I really want us to focus on this bug.”

Whenever I speak up about even a small concern while the rest of the team is heading in another direction, it proves worthwhile. Whenever I don’t, we end up with problems. It helps me retain my independence as an individual working on a team. If everyone does it, we get diverse opinions and, hopefully, diverse views on potential risks instead of getting wrapped up in groupthink. Read Brenner’s Appreciate Differences for more on this.

Sometimes, we ignore our intuition and doubts when we are following the process. For example, we may get annoyed when we feel someone else is violating one of the 12 practices of XP. We may harp on them about not following the process instead of finding out what the problem is. I have seen this happen frequently, and I’ve been on teams that were disasters because we had complete faith in the process (even with Scrum and XP) and forgot about the people. How did we put the process over people? Agile methods are not immune to this problem. On one project, we ran around saying “that isn’t XP” whenever we saw someone doing something that didn’t fit the process. In most cases it was good work, and calling it out turned out to be a manipulative way of dealing with something we saw as a problem. In the end, some of those were good practices that should have been retained in that context, on that team. They weren’t “textbook” XP, but the people with the ideas knew what they were doing; the inanimate “white book” did not.

2) I make sure I’m accountable for what I’m doing.

Anyone on the team should be able to come up and ask me what I’m doing as a tester, and I should be able to clearly explain it. Skilled testing does a lot to build credibility, and an accountable tester will be given freedom to try new ideas. If I’m accountable for what I’m doing, I can’t hide behind a process or behind what the developers are doing. I need to step up and apply my skills where they will add value. When you add value, you are respected and given more of a free rein on testing activities that might have been discouraged previously.

Note: By accountability, I do not mean lots of meaningless “metrics”, charts, graphs, and other visible measurement attempts that I might use to justify my existence. Skilled testing and coherent feedback will build real credibility; meaningless numbers will not. Test reports that are meaningful, kept in check by qualitative measures, developed with the audience in mind, and actually useful will do more to build credibility than generating numbers for numbers’ sake.

3) I don’t try to humiliate the programmers, or police the process.

(I now see QA people making a career out of process policing on Agile teams.) If you are working together, technical skills should rub off on each other. In some cases, I’ve seen testing become “cool” on a project; on one project, not only the testers but also the developers, the BA, and the PM were testing. Each used their unique skills to help generate testing ideas and engage in testing. This in turn gave the testers more credibility when they wanted to try out different techniques that could reveal potential problems. Now that all the team members had a sense of what the testers were going through, more effort was made to enhance testability. Furthermore, the discovery of potential problems was encouraged at this point; it was no longer feared. The whole team really bought into testing.

4) I collaborate even more, with different team members.

When I find I’m getting stale with testing ideas, or I’m afraid I’m getting sucked into groupthink, I pair with someone else. Lately, a customer representative has been a real catalyst for my testing. Whenever we work together, I get a new perspective on project risks arising from what is going on in the business, and they find problems I’ve missed. This helps me generate new ideas for testing in areas I hadn’t thought of.

Sometimes working with a technical writer, or even a different developer instead of the developer(s) you usually work with, helps you get a new perspective. This ties into the accountability thought as well: I’m accountable for what I’m testing, but so is the rest of the team. Sometimes fun little pretend rivalries occur: “Bet I can find more bugs than you.” Or “Bet you can’t find a bug in my code in the next five minutes.” (In this case the developer beat me to the punch by finding a bug in his own code through exploratory testing beside me on another computer, and then gave me some good-natured ribbing about being the first one to find a bug.)

Independent thinkers need not be in a separate, independent department kept at arm’s length. The need for an independent testing department is not something I unquestioningly support. In fact, I have found more success through collaboration than isolation. Your mileage will vary.

The Role of a Tester on Agile Projects

I’ve stopped using the phrase “the role of a tester on agile projects”. There has been endless debate over whether there should be dedicated testers on agile projects, and endless debate doesn’t appeal to me. I’ve been on enough agile teams now to see that testers can add value in much the same way they always have. They provide an information-based service to the team and the customers. As James Bach says: “testing lights the way.” Programmer testing and tester testing complement each other, and agile projects can provide an ideal environment for an amazing amount of collaboration and testing. It can be hard to break into some agile projects, though, when many agilists seem to view conventional testing with indifference or, in some cases, disdain. (Bad experiences with the QA Police don’t help, so the testing community bears some responsibility for this poor view of testing.)

What I find interesting is hearing from people who fit in and contributed on Agile teams. I like to hear stories about how a tester worked on an XP team, or how a business analyst fit in and thrived on an Agile team. One of the most fascinating stories I’ve encountered was a technical writer who worked on an XP team; it was amazing to find out how they adapted and filled a “team member role” to help get things done. It’s even more fascinating to see how the roles of different team members change over time, and how different people learn new skills and roll up their sleeves to get things done. There is a lot of knowledge in areas such as user experience work, testing, and technical writing that Agile team members can learn from. In turn, those specialists can learn a tremendous amount from Agilists. This collaborative learning helps move teams forward and can really push a team’s knowledge envelope. When people share successes and failures of what they have tried, we all benefit.

I like to hear of people who contribute to a team in spite of being told “dedicated testers aren’t in the white book” or “our software doesn’t need documentation”. I’m amazed at how adaptable smart, talented people are, blazing trails and adding value on teams with challenging and unusual constraints. A lot of the “we don’t need your role” discussion sounds like the same old arguments testers, technical writers, user experience folks, and others have been hearing for years from non-Agilists. Interestingly enough, those who work on agile projects often report that in spite of initial resistance, they manage to fit in and thrive once they adapt. Those who were their biggest opponents at the beginning of a project often become their biggest supporters. This tells me that there are smart, capable people from many different backgrounds who can offer something to any team they are part of.

The Agile Manifesto says: “we value: Individuals and interactions over processes and tools.” Why, then, is there so much debate over the “tester role”? Many Agile pundits dismiss having dedicated testers on Agile projects. I often hear: “There is no dedicated tester role needed on an Agile team.” Isn’t excluding a “role” from a team putting a process over people? If someone who is not a developer joins an Agile team, believes in the values of the methodology, and wants to work with a great team, do we turn them away? Shouldn’t we embrace the people even if the particular process we are following does not spell out their duties?

I would like to see people in non-developer roles encouraged to try working on agile teams and to share their successes and failures so we can all benefit. I still believe software development has a long way to go, and we should try to improve every process. A danger of codifying and spreading a process is that it doesn’t have all the answers for every team. When we don’t have all the answers, we need to look at the motivations and values behind a process. For example, what attracted me most to XP were the values. My view on software processes is that we should embrace the values and use them as a base from which to strive for constant improvement. That means we experiment and push ideas forward, fight apathy and hubris, and, as Deming said, drive out fear.

Occam’s Butter Knife

When I work in test automation, I always seek the simplest solution that will work for the task at hand. Extreme Programming’s STTCPW (the simplest thing that could possibly work) and YAGNI (you aren’t going to need it) are principles I’ve applied to test automation. Occam’s Razor can be interpreted as expressing the same idea as STTCPW. Simple tests are something I value (see the sketch after this list) because I don’t:

  • Want complex test code that can be a source for bugs
  • Have a lot of time for maintenance
  • Wish to have tests I can’t throw away easily
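
Here is what I mean by simple, as a minimal sketch (the `tax_cents` function is hypothetical): flat, obvious, and cheap enough to throw away without regret.

```ruby
require "minitest/autorun"

# Hypothetical function under test: 5% tax on an amount in cents.
# Integer math keeps the example free of floating-point surprises.
def tax_cents(amount_cents)
  amount_cents * 5 / 100
end

# The simplest test that could possibly work: no custom framework,
# no clever abstractions, nothing to maintain but one assertion.
class TaxTest < Minitest::Test
  def test_five_percent_tax_on_one_hundred_dollars
    assert_equal 500, tax_cents(10_000) # $100.00 -> $5.00 tax
  end
end
```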

Recently, while automating a complex functional test for a tester, I ran into a difficult problem. I was retrofitting a test into a framework, and I couldn’t figure out how to translate the model into automated test code. I struggled for a while, then paired with the tester (a non-programmer) and described my problem. I was thinking about a Decorator pattern, data structures, and algorithms, but couldn’t make the model fit. I was trying to put together a simple solution, but I had constraints.

The tester patiently listened as I explained my problem, diagrammed on a whiteboard, explained what I was trying to do, and then, when I was finished, said: “Forgive me for stating the obvious…” and paused. I said, “What may be obvious to you may be something I haven’t considered. Please continue.” The tester explained that I had already created that model when I automated test data generation, and asked why I couldn’t re-use it. They were right on the money. The solution didn’t involve any design patterns or clever tricks; I just had to grab the data from a different area of the test. The solution was sitting right in front of me, but I was looking at the problem the wrong way.
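
In hindsight, the fix amounted to something like this sketch (all names are hypothetical; the point is the reuse, not the API): instead of bolting a Decorator onto the framework, the test reads from the data generator that already encoded the model.

```ruby
# Hypothetical generator: it already embodied the model I was
# struggling to rebuild with design patterns.
class TestDataGenerator
  def accounts
    (1..5).map { |n| { id: n, balance: n * 100 } }
  end
end

generator = TestDataGenerator.new

# Before: rebuild the model inside the test with a Decorator (needless).
# After: pull the data from the part of the test that already has it.
generator.accounts.each do |account|
  puts "exercising account #{account[:id]} with balance #{account[:balance]}"
end
```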

Non-programming testers can offer a lot to development efforts. Their minds are not cluttered with design patterns, xUnit frameworks, data structures, and algorithms, which allows them to see things in a different way. When I’m up to my eyeballs in code, I sometimes stop thinking like a tester. In spite of my wish for a simple design, I had needlessly overcomplicated my automated test development. So much for my attempt to apply Occam’s Razor; I needed a tester to help me see the simple solution. When I’m a tester working in test automation, I sometimes lose that “tester’s perspective”. Collaboration is key, and it also helps me when I’m not particularly sharp.

Ted Talks About the Customer

Ted O’Grady has an interesting post on the customer and their role on XP teams. I agree with what he has said.

It’s also important to think about the context the software is used in. I learned a lesson about getting users to test our software in our office versus using it in their own offices. The Hawthorne Effect seemed to really kick in when they were on-site with us. When we observed them using the software in their own business context to solve real-world problems, they used it differently.

If the customer is out of their regular context, that may affect their performance on an agile team, especially if they feel intimidated by a team of techies who outnumber them. When they are in their own office, and their team outnumbers the techies, they may be much more candid with constructive criticism. Just having a customer on-site doing the work with the team doesn’t guarantee they will approve of the final product. Often, I think, there is a danger they will approve of what the team thinks the final product should be. When we had rotating customer representatives on a team, groupthink was greatly reduced.

Gerard Meszaros on Computer Assisted Testing

At last night’s Extreme Tuesday Club meeting, Gerard Meszaros had a great way of describing what I do using automated scripts with Ruby combined with exploratory testing. (See my blog post about this here: Test Automation as a Testing Tool.)

As I was describing using a Ruby script to automate some parts of the application I’m testing, and then taking over manually with exploratory testing, Gerard said:

Sounds like you are using your Ruby program to set up the test fixture that you will then use as a starting point for exploratory testing.

I hadn’t thought of what I was doing in this way before, and I like the idea. It succinctly explains how to combine the strengths of a testing tool with those of a brain-engaged tester.
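
In practice, the script looks something like this sketch (the URL and fields are hypothetical): automation drives the application into a known, interesting state, then hands off to the human.

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical fixture-setup script: automation does the tedious setup,
# then a human takes over with exploratory testing from that state.
base = "http://localhost:8080" # assumed application under test

# Create an account with a batch of transactions already loaded,
# an interesting starting point for exploration.
account = { name: "explore-me", opening_balance: 10_000 }
Net::HTTP.post(URI("#{base}/accounts"), account.to_json,
               "Content-Type" => "application/json")

120.times do |i|
  txn = { account: "explore-me", amount: (i % 10) * 25, memo: "fixture txn #{i}" }
  Net::HTTP.post(URI("#{base}/transactions"), txn.to_json,
                 "Content-Type" => "application/json")
end

puts "Fixture ready: take over manually and explore from here."
```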

Check out Gerard’s xUnit Test Patterns site for more of his thoughts on testing.