Category Archives: agile testing

Software Development Process Fusion Part 2

What is it? The Short Version

Software development process fusion involves taking different kinds of processes and tools and using a combination of them on your project to help you reach your goals. You aren't just using one particular methodology, school of thought, or toolset; you are using the combination of tools that fits the unique needs of your project and helps create value.

What is it? The Very Long Version

In Part 1 of this series, I talked about fusion in music, from the early days of the genre, when it was somewhat controversial and aimed more at enthusiasts, to the present, when most music we hear on popular radio stations is a fusion of styles. On country stations, we hear rockabilly, pop, rock and roll, blues and traditional country fused together in many songs. Popular music now has influences from all kinds of cultures, and we are seeing hip hop music fused with traditional Indian music and pop. In my collection, Canadian artist Cat Jahnke includes folk, pop, rock, gospel and film music in her songwriting and performing. A more obvious fusion might be found in fellow Canadian artist Rebekah Higgs' music, categorized in the "folktronica" genre, a combination of electronica and folk. Another Canadian group with a wide variety of styles fused together is the Duhks, who "…play a blend of Canadian soul, gospel, North American folk, Brazilian samba, old time country string band, zydeco, and Irish dance music…" according to Wikipedia.

These kinds of fusions of ideas are all around us. The fusions of styles from different traditions, cultures and ideas are due in part to our increasing interconnectedness and to mass media and communication. In the effort to create something new in the market, we often borrow something old or unfamiliar in our culture and mix it with the current and familiar. We have fusion cuisine, for example (a Chinese restaurant near our home became an Italian restaurant and serves delicious Asian-Italian fusion cuisine). We see it in exercise, with holistic training and regimens that combine Eastern, Western, kinesiological and spiritual elements. Often a combination of ideas helps us reach our ultimate goal, which isn't to create a fusion of styles, or to adhere to just one style, but to achieve a desired effect or outcome.

The goal of each of these examples is quite clear. With music, the musician's goal is to create something that resonates with them, an expression of their art and their personality. Their other goal is to produce something that is satisfying and enjoyable for their audience. With restaurants, the goal is to provide fresh, delicious food. With exercise programs, the goal is better health and fitness. Another important underlying theme is financial success. We all need to make money somehow to live, and even though we may produce something that is wonderful, it may not be recognized by the market. Sometimes our goals in software development aren't quite as clear, particularly for those of us down in the details of coding, testing, writing, etc. It can be hard to see the big picture and measure ourselves against it. It can also be hard to deal with the imprecise environment our software is released into, which tempts us to cling to something that feels predictable and stable, like a well-defined process.

Software development process fusion involves taking different kinds of processes and tools and using a combination of them on your project to help you reach your goals. You aren't just using one particular methodology, school of thought, or toolset; you are using the combination of tools that fits the unique needs of your project and helps create value.

A recent example is the paper Process fusion: An industrial case study on agile software product line engineering, which describes the fusion of two bodies of practice. I'd like to find more that identify several different schools of thought: iterative and incremental, Agile, phased or "waterfall", spiral, user experience, and so on.

Process Mashups

A couple of years ago, I was interviewed about post-Agilism by a company that does industry analysis of the software field. One of the interviewers used an interesting term when she described the message she got from my work. She said something like this: “We really see that teams in the future will be less dogmatic about what particular process ideology they need to follow, and will be more focused on using different ideas to get the results they need. We’ll see all sorts of interesting process mashups as people combine different process ideas on their own projects to reach their particular goals for that particular project.” Wow. She got that from my writing? “Process mashup” wasn’t a term I had used, but it’s another way of explaining what I am trying to get across.
Mashup seems to be a relatively new term that describes combining different sources into one form. Wikipedia has some different examples of mashups.

Here in Canada, a fusion of ideas is built into our culture, since our society is modeled as a “cultural mosaic” which means people retain and continue to practice their original culture when they move here to live. On CBC (Canadian Broadcasting Corporation) radio, there is an interesting show called Mashup, hosted by Geeta Nadkarni, that I enjoy listening to. The website describes the show:

Over the summer Mashup will explore what really happens when cultures intersect in love, at work and at play. You’ll hear from immigrants, second-generation Canadians, mixed-race Canadians, people who’ve been in Canada for decades – each with a personal story about how cultures collide in their daily lives. Canada is a country of mashups. People from different cultures find themselves living and working together here – bumping into different values, assumptions and different ways of doing things.

When I listen to the stories and challenges of how people overcome the collisions of culture, I see many parallels on software development teams. In fact, we tend to have our own little mishmash of cultures on our teams due to our ability to collaborate with technology, and there is often a shortage of skilled people in one particular area, so people from different countries are often on the same teams. I see this culture mashup as a more accurate description of what most teams experience and how they implement processes, so why not embrace it?

We’re Doing That Anyway

Most teams I work with tend to use a blend of process ideas in practice. Often, we like to talk about our Scrum or XP process in the pure sense on mailing lists or at conferences or user group meetings, but what we are really doing is a blend of Scrum or XP with corporate culture and practices (what we've learned through experience that seems to work for our project, but doesn't necessarily fit the process literature). Often teams apologize to me if they are doing something that isn't by-the-book process-wise. I ask: "Is it working for you?" and if they say "yes", I tell them not to worry about it.

It's important to realize that pure "Agile" and pure "waterfall" don't really exist on projects. They are ideals, or strawmen, depending on what your particular software religion is. (That includes my writing here; it is a model of software development that I and others find ideal. We strive towards the goal of using our process to serve us, rather than working to serve the process.) There is nothing wrong with ideals, but they can be carried too far. Many feel that Royce's original "waterfall" paper described an ideal, and that process wonks got to that diagram in Figure 2 on about the second page, stopped reading, adopted it on that alone, and the waterfall practice was born. Somewhere along the way, most teams that were focused on results figured out how to adapt their phased or "waterfall" approach to get the job done. Others got too caught up in the ideal of the process and created bureaucratic nightmares that produced more paper and procedures than working software.

We see the same thing with Agile extremism – the process is held up at the expense of the people on the project, who are blamed if anything goes wrong. Roles like testing and technical writing are marginalized ("there is no tester role on Agile projects"): tests are automated, and after the fact, manual testing skills are dismissed as "being negative." Testers are twisted into any role but testing, such as development or business analysis. Tests become "documentation" or "requirements" that drive development. While there's nothing wrong with experimenting with ideas, it should not be at the cost of dehumanizing skilled people who are trying to deliver the best working software they can.

What exists on most successful teams I've worked with, teams that realize they need to reach goals for the organization, for their users, for their teams, and for each individual, is usually a combination of changing process ideas and practices at work at any given time. Some are recognizable and named, and others are just what the team does in that environment.

In the Agile world, most process adoptions tend to be a blend of Scrum and XP. Some teams I've worked with couldn't do a pure implementation of either because of their unique circumstances. One team couldn't completely adopt XP because the physical layout of the building prevented them from arranging themselves all together. Shrinkwrap software companies often have trouble getting a real customer representative on their team, and often have a product manager, business analyst or someone else with a customer-facing role stand in. Sometimes teams are successful at delivering working software in spite of process adoption limitations, and sometimes they probably aren't. (Usually, failures I've witnessed are due to a lack of skills rather than a process failure.) There are a lot of perfectly good reasons why Agile process adoptions aren't implemented in the purest sense and yet still succeed. (Hint: skill is usually a big factor.)

People have also adapted so-called "waterfall" or phased lifecycle approaches. Furthermore, there are different ways of viewing software development processes. Steve McConnell explains this in a comment on his blog post:

I think it’s important to remember that Waterfall and Agile aren’t the only two options. “Agile” is a very large umbrella that includes many, many practices. “Waterfall” is one specific way of approaching projects that’s in the broader family of “sequential” development practices. Staged delivery, spiral, and design to cost are three other members of the sequential family. I agree that waterfall will only rarely do better at providing predictability than agile practices will. But there are other non-Waterfall practices within the sequential family that eliminate 90%+ of the weaknesses of waterfall and that are more applicable than full-blown agile practices in many contexts. (By full blown, I mean like the project in the cautionary tale–fully iterative requirements, etc.)
…There is no One True Way. When people think about the fact that there’s software in toasters, airplanes, video games, movies, medical devices, and thousands of other places, it seems kind of obvious that the best approaches are going to arise when people pay close attention to the needs of their specific circumstances and then choose appropriate practices.

That's contextualist thinking expressed eloquently, and it is easier to hang your hat on than the "doing what works for you" post-Agilism maxim.

Software Development Process Fusion – Know Your Goals

To get this fusion concept to work properly, it is incredibly important to know what your goals are for providing value to your customers while building value on your teams. Otherwise, you may end up with a mishmash of watered-down practices and have no way to measure whether they are helping you or not. Without an understanding of what success looks like, your team may end up with a "we're doing what works for us" combination of process ideas that gets you no further than what you were doing before. I have seen this on countless teams adopting Agile processes. They thought adding daily standups, using iterative development, doing TDD, and getting rid of up-front planning and documentation was enough for success, and they ended up worse off value-wise than with a heavyweight process implementation. My response to the "We're only adopting what works for us" concept is: a) Have you tried it? and b) If you have, can you evaluate whether it is helping your project or not? If you can answer both of those, then that phrase is completely appropriate. If not, we need to be sure we really do know what works for us, and we need a standard of measure to tell whether what we are adopting is helping us now, and whether what helped us in the past is still helping us. Believe it or not, sometimes the best processes can become stale and ineffective over time. Can you tell what is working on your project?

One team that I was on in the pre-Agile era was ruthless with tools and processes. Our development team lead would always say: "Does the tool suck, or do we need more practice or training with it?" whenever a tool or process wasn't working for us as advertised. Notice the people focus – he empowered us, and made sure that we had a way to measure our tool and process adoption against our project goals. If things weren't working, the finger was first pointed at the tool or process, not at the people doing the work. We ended up with a combination of practices that evolved over time. We had clear goals on what we needed to do and what success looked like. We used:

  • an iterative and incremental delivery lifecycle
  • experimental programming/development
  • prototyping
  • strong customer involvement in planning and development
  • a strong emphasis on individuals developing their skills
  • frequent communication (standups, quality circles, pairing, collaborating, regular meetings with stakeholders and executives on goals and vision)
  • varied methods of developing requirements
  • varied methods of frequent testing, from the planning and idea phase at the beginning of the project, to product critiquing with serious exploratory testing on anything delivered, through to the end of the project
  • varied automation in testing, build processes, and anything else that helped us be more productive

Anything went, within reason: as long as it wasn't unethical, didn't hurt anyone, and didn't threaten our deadlines, we were encouraged to experiment with different processes and tools in the ongoing effort to build the best software we possibly could, software that not only satisfied but impressed our user community. This was true "continuous improvement" in action. Sadly, most process ideals that I see completely miss out on most of this. They may have an iterative lifecycle, but don't realize that the point is to help you deliver something your customer needs in stages, get their feedback, and adjust your plan as it hits the reality of the project. They do testing, but they artificially constrain it by trying to automate everything, or severely constrain requirements by forcing them into "tests". They talk to each other, but hold daily standups and iteration meetings whether they are really communicating anything useful or not.

The teams that seem to miss out on creating value over a sustained period of time are not open to ideas outside their favorite process, and belittle and marginalize people who have ideas on how to solve real problems. They look to the process to solve those tough problems, and cling to it instead of looking at the bigger picture. Successful teams I've worked with, on the other hand, adapt and change their process, and understand that the process is yet another tool in the software toolbox to help them reach their goals. Process isn't king – skilled people are. (Lacking in skill? Invest in skill development before worrying about your process too much.)

On the team I described above, we didn't care what someone's role on the team was as long as they provided a service that helped us create value in our product. We needed people to translate requirements and product vision from something vague into something concrete that programmers could work on. We used a variety of lightweight ways to express this, and didn't have rules about it. If it worked, it worked, and we used it until it stopped working for us. The same went for testers. Those who were skilled at finding problems in designs and in the product, and who provided an information service, were valued and encouraged, no matter what tools or processes they used. The quality of their information was what was important. No one walked around saying "That's not Agile!" or "That's not [the process we were using]!" to discourage you if you were doing something different. If it worked, the creativity was celebrated, not feared and driven out because it wasn't recorded in some book somewhere.

When the Agile Manifesto came out, and processes like Scrum and XP were gaining traction, we tried the ideas and adapted them to our process fusion. Processes and tools that worked were retained, and surprisingly, some practices like TDD were jettisoned over time, with the focus moving towards developing programming skills, and some sort of lightweight code inspection process taking TDD's place. We heard success stories of other teams who were doing wonders with things that had stopped working for us, and we wondered a bit why we were different, but at the end of the day, we were reaching our goals. We had stable, working software, a process that worked, satisfied customers, and a highly skilled team that valued each other and the diversity that individuals brought to it.

The Rule is There Are No Rules

I've seen too many process zealots or snake oil salesmen display bigotry towards others with different ideas that don't fit their particular model. It's easy to pick on the Agile movement because it's a big fad right now, so there are a lot of readily available examples of people going around saying "That's not Agile!" and creating an elitist club. Over my career, I've experienced people in the Object Oriented movement do this, and some RAD folks looked down their noses at one team I was on because we didn't use the "approved" prototyping tools they used. Teams with a high CMM maturity level were also elitist snobs, as were some RUP practitioners, consultants and tool floggers. There are a lot of people out there who are more than happy to set an ideal standard of measure for us to live up to, make us feel guilty for our software "sins" and then profit from telling us we're doing it wrong. A wise theologian once said something like this: "without sins the priest would be out of work." Next time you feel you are doing something wrong, or someone else makes you feel that way, evaluate how they are profiting from making you feel that way. If you are creating value even though you're "doing it wrong," ignore them.

I've seen novel ideas for real-life project problems turned aside because they didn't follow somebody's idea of process rules. If a pure process adoption is your goal, then you may have to do that sort of thing, but if a successful product that delivers value is your goal, following arbitrary process rules can be a real hindrance. If the software is well developed, who cares that you did some up-front planning? Who cares if you didn't use story cards? If the team has great communication, who cares if you don't do daily standups? If testing is done well, who cares if it isn't completely automated? If you are good at eliciting and expressing requirements, who cares if you didn't use ATDD or some other Agile automated test ideal? If your code is stable and maintainable, who cares that you didn't use TDD? If you deliver value, who cares that you needed some up-front design? If your software is usable, who cares that you didn't use BDD, but used traditional user experience techniques instead? (I'm not discouraging you from trying any of those Agile practices; indeed, try what you like as you strive to improve your process, but do it on your own terms – don't feel pressured to try them just because it seems everyone else is doing it.)

As I mentioned earlier, we can put artificial bounds around what we do in software development, and invent rules that can impede our goals. Furthermore, rules that worked really well on some high profile project may not be appropriate for our project. Also, rigid rules can be a barrier to creativity and creating novel solutions, which are both the lifeblood of technological innovation.

My stance on all of this: if the particular process or process fusion you are using is working for you, do that. I really don't care what it is, whether it is an Agile process, Cleanroom, RUP, Evo, or some phased "waterfall" variant. If you have a bang-up XP implementation that is working for you, your team and your customers, that's great. Keep doing it. If you have a process fusion, don't feel bad because someone says: "That isn't Agile." All I am encouraging is that you understand your goals, have a way to measure whether your tools and processes are helping you or not, and be open to other ideas when you need to adapt and change. Look at the history of software development and other ideas that have come before, and try to learn from as many different sources as possible. Enlarge your software development process toolbox, and try combinations of ideas. Others have done this before, so it isn't really that radical. Google the term for more ideas.

Agilism all too often ends up with people being much more concerned with following "the rules" than with providing value and reaching goals. Merely following a good process in the hope that all those tough problems will be solved by strict adherence to that process may not work for you. There is a difference between understanding what you need to do and adapting as you go, and merely following a ritual without understanding the meaning behind it.

What Process Combinations Have Your Teams Created?

I see this sort of thing as having a future in software development processes, partly because successful teams I've worked on have always changed and adapted not only their plans, designs and code, but their tools and processes as well. We've also seen a fusion of ideas become popular in other areas, and it seems like a natural evolution. First we work through various extremes, and then we find some sort of balance. I'd like to hear about the combinations and adaptations of processes on your team. One day I hope to hear of a team that says: "We created a process mashup like this: we learned how to measure performance requirements of our development efforts and software inspection from Evo, iteration planning and management from Scrum, continuous integration from XP, persona creation from the user experience world, user testing from Cleanroom, and a large variety of testing ideas from various schools of thought in testing, combined with this other stuff we do on our teams that isn't written down in a book or talked about by experts." Most importantly, what are you doing to create value for your customers and your team? Are you using a purist implementation of a process, or are you combining different process aspects to reach your goals?

Software Testing 2.0?

For so many years the Quality Assurance ideal has dominated software testing. “QA”-flavored software testing often feels like equal parts of Factory School and Quality School thrown together. When I was starting out as a tester, I quickly learned through hard experience that a lot of the popular software testing thought was built around folklore. I wanted results, and didn’t like process police, so I often found myself at odds with the Quality Assurance community. I read writings by Cem Kaner, James Bach and Brian Marick, and worked on my testing skill.

When the Agile movement kicked into gear, I loved the Agile Manifesto and the values agile leaders were espousing, but the Agile Testing community rehashed a lot of the same old Factory School folklore. Instead of outsourcing testing to lesser-skilled, cheaper human testers, testing was often outsourced to automated testing tools. While there were some really cool ideas, “Agile Testing” ideals still frequently felt like testing didn’t require skills, other than programming. I was frequently surprised at how “Agile Testing” thought was attracted to a lot of the old Factory School thoughts, like they were oppositely charged magnets. As a big proponent of skilled testing, I found I was often at odds with “Agile Testers”, even though I agreed with the values and ideals behind the movement. Testing in that community did not always feel “agile” to me.

Then Test-Driven Development really got my attention. I worked with some talented developers who taught me a lot, and wanted to work with me. They told me they wanted me to work with them because I thought like they did. I was still a systems thinker, but I came at the project from a different angle. Instead of confirming that their code worked, I creatively thought of ideas to see if it might fail. They loved those ideas because it helped them build more robust solutions, and in turn, taught me a lot about testing through TDD. I learned that TDD doesn’t have a lot to do with testing in the way I’m familiar with, but is still a testing school of thought. It is focused on the code-context, and I tend to do more testing from user contexts. Since I’m not a developer, and TDD is predominantly a design tool, I wasn’t a good candidate for membership in the TDD community.

The Context-Driven Testing School is a small, influential community. The founders all had an enormous influence on my career as a tester. One thing this community has done is build up and teach skilled testing, and has influenced other communities. Everywhere I go, I meet smart, talented, thoughtful testers. In fact, I am meeting so many, that I believe a new community is springing up in testing. A community born of experience, pragmatism and skill. Testers with different skillsets and ideas are converging and sharing ideas. I find this exciting.

I’m meeting testers in all sorts of roles, and often the thoughtful, skilled ones aren’t necessarily “QA” folks. For example, some of my thought-leader testing friends are developers who are influenced by TDD. Some skilled testers I meet are test automation experts, some are technical writers, some are skilled with applying exploratory testing concepts. All are smart, talented and have cool ideas. I am meeting more and more testers from around the world with different backgrounds and expertise who share a common bond of skill. I’m beginning to believe that a new wave of software testing is coming, and a new skills-focused software testing community is being formed through like-minded practitioners all over the world. This new community growing in the software development world is driven by skilled testers.

This is happening because skilled testers are sharing ideas. They are sharing their results by writing, speaking, and practicing skilled testing. Results mean something. Results build confidence in testers, and in the people who work with them. Skill prevails over process worship, methodology worship and tool worship. I’ve said before that skilled software testing seems to transcend the various methodologies and processes and add value on any software development project. I’m finding that other testers are finding this out as well. This new wave of skilled tester could be a powerful force.

Are you frustrated with the status quo of software testing? Are you tired of hearing the same hollow maxims like “automate all tests”, “process improvement” and “best practices”? Do you feel like something is missing in the Quality Assurance and Agile communities when it comes to testing? Do you feel like you don’t fit in a community because of your views on testing? You aren’t alone. There are many others who are working on doing a better job than we have been doing for the past few years. Let’s work together to push skilled software testing as far as it will go. Together, we are creating our own community of practice. The “second version” of software testing has begun to arrive.

Reckless Test Automation

The Agile movement has brought some positive practices to software development processes. I am a huge fan of frequent communication, of delivering working software iteratively, and strong customer involvement. Of course, before “Agile” became a movement, a self-congratulating community, and a fashionable term, there were companies following “small-a agile” practices. Years ago in the ’90s I worked for a startup with a CEO who was obsessed with iterative development, frequent communication and customer involvement. The Open Source movement was an influence on us at that time, and today we have the Agile movement helping create a shared language and a community of practice. We certainly could have used principles from Scrum and XP back then, but we were effective with what we had.

This software business involves trade-offs though, and for all the good we can get from Agile methods, vocal members of the Agile community have done testing a great disservice by emphasizing some old testing folklore. One of these concepts is "automate all tests". (Some self-proclaimed agilists have the misguided gall to claim that manual testing is harmful to a project. Since when did humans stop using software?) Slavishly trying to reach this ideal often results in Reckless Test Automation. Mandates of "all", "everything" and other universal qualifiers are ideals, and without careful, skillful implementation, they can promote thoughtless behavior that hinders goals and needlessly costs a lot of money.

To be fair, the Agile movement says nothing officially about test automation to my knowledge, and I am a supporter of the points of the Agile Manifesto. However, the "automate all tests" idea has been repeated so often and so loudly in the Agile community that I am starting to hear it equated with so-called "Agile Testing" as I work in industry. In fact, I am now starting to do work to help companies undo problems associated with over-automation. They find they are unhappy with results over time while trying to follow what they interpret as an "Agile Testing" ideal of "100% test automation". Instead of an automation utopia, they find themselves stuck in a maintenance quagmire of automation code and tools, and the product quality suffers.

The problems, like the positives of the Agile movement, aren't really new. Before Agile was all the rage, I helped a company that had spent six years developing automated tests. They had bought the lie that vendors and consultants spouted: "automate all tests, and all your quality problems will be solved". In the end, they had three test cases developed, with an average of 18,000 lines of code each. No one knew what their intended purpose was or what they were supposed to be testing, but it was very bad if they failed. Trouble was, they failed a lot, and it took testers anywhere from three to five days to hand-trace the code to track down failures. Excessive use of unrecorded random data sometimes made this impossible. (Note: random data generation can be an incredibly useful tool for testing, but like anything else, it should be applied with thoughtfulness.) I talked with decision makers and executives, and the whole point of buying a tool and implementing it had been to shorten the feedback loop. In the end, the tool greatly lengthened the testing feedback loop, and worse, the testers spent all of their time babysitting and maintaining a brittle, unreliable tool, and not doing any real, valuable testing.
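
The "unrecorded random data" part of that story is avoidable. Here is a minimal sketch, assuming a Python unittest-style suite and a hypothetical order-total invariant (neither is from the original project): generate random data freely, but record the seed so that any failure can be reproduced exactly instead of being hand-traced for days.

```python
import random
import unittest


class RandomOrderDataTest(unittest.TestCase):
    """Sketch: random test data is fine, as long as the seed is recorded
    so a failing run can be regenerated exactly."""

    def setUp(self):
        # Pick and remember a seed; re-running with the same seed
        # produces identical "random" data.
        self.seed = random.randrange(2**32)
        self.rng = random.Random(self.seed)

    def test_order_total_is_never_negative(self):
        # Hypothetical invariant check over generated line items.
        quantities = [self.rng.randint(1, 10) for _ in range(100)]
        prices = [round(self.rng.uniform(0.01, 99.99), 2) for _ in range(100)]
        total = sum(q * p for q, p in zip(quantities, prices))
        self.assertGreaterEqual(total, 0, msg=f"failed with seed={self.seed}")


if __name__ == "__main__":
    unittest.main()
```

Reporting the seed in the failure message turns a multi-day hand trace into a one-line reproduction recipe.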

How did I help them address the slow testing feedback loop problem? Number one, I de-emphasized relying completely on test automation, and encouraged more manual, systematic exploratory testing that was risk-based, and speedy. This helped tighten up the feedback loop, and now that we had intelligence behind the tests, bug report numbers went through the roof. Next, we reduced the automation stack, and implemented new tests that were designed for quick feedback and lower maintenance. We used the tool to complement what the skilled human testers were doing. We were very strict about just what we automated. We asked a question: “What do we potentially gain by automating this test? And, more importantly, what do we lose?” The results? Feedback on builds was reduced from days to hours, and we had same-day reporting. We also had much better bug reports, and frankly, much better overall testing.

Fast-forward to the present time. I am still seeing thoughtless test automation, but this time under the "Agile Testing" banner. When I see reckless test automation on Agile teams, the behavior is the same, only the tools and ideals have changed. My suggestions for working towards solutions are the same: de-emphasize thoughtless test automation in favor of intelligent manual testing, and be smart about what we try to automate. Can a computer do this task better than a human? Can a human do it with results we are happier with? How can we harness the power of test automation to complement intelligent humans doing testing? Can we get test automation to help us meet overall goals instead of thoughtlessly trying to fulfill something a pundit says in a book, a presentation or a mailing list? Are our test automation efforts helping us save time and provide the team the feedback they need, or are they hindering us? We need to constantly measure the effectiveness of our automated tests against team and business goals, not "percentage of tests automated".

In one "Agile Testing" case, a testing team spent almost all of their time working on an automation effort. An Agile Testing consultant had told them that if they automated all their tests, it would free up their manual testers to do more important testing work. They had automated user acceptance tests, and were trying to automate all the manual regression tests to speed up releases. One release went out after the automated tests all passed, but it had a show-stopping, high-profile bug that was an embarrassment to the company. The automated tests passed, but they couldn't notice something suspicious and explore the behavior of the application the way a human can. In this case, the bug was so obvious that a half-way decent manual tester would have spotted it almost immediately. Getting a computer to spot the problem through investigation would have required artificial intelligence, or a very complex fuzzy-logic algorithm in the test automation suite, to replace one quick, simple, inexpensive, adaptive, yet powerful human test. The automation wasn't freeing up time for testers; it had become a massive maintenance burden over time, so there was little human testing going on other than superficial reviews by the customer after sprint demos. Automation was king, so human testing was de-emphasized and even looked on as inferior.

In another case, developers were so confident in their TDD-derived automated unit tests that they had literally gone for months without any functional testing, other than occasional acceptance tests by a customer representative. When I started working with them, they first defied me to find problems (in a joking way), and then were completely flabbergasted when my manual exploratory testing did find problems. They would point wide-eyed to the green bar in their IDE signifying that all their unit tests had passed. They were shocked that simple manual test scenarios could bring the application to its knees, and it took quite a while to get them to do some manual functional testing as well as their automated testing. It took them a while to set their automation dogma aside, become more pragmatic, and figure out how to incorporate important issues like state into their test efforts. When they did, I saw a marked improvement in the code they delivered once stories were completed.
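
State-dependent problems like these tend to hide between isolated unit tests. As a minimal sketch, here is the kind of scenario-level check that can catch them, using a deliberately simple, hypothetical Cart class (not from the project described above): each method might pass its own unit tests, while the interesting behavior only appears when one step's state feeds the next.

```python
import unittest


class Cart:
    """Hypothetical shopping cart, used only to illustrate state."""

    def __init__(self):
        self.items = []
        self.checked_out = False

    def add(self, item):
        if self.checked_out:
            raise ValueError("cannot add to a checked-out cart")
        self.items.append(item)

    def checkout(self):
        if not self.items:
            raise ValueError("cannot check out an empty cart")
        self.checked_out = True


class CartScenarioTest(unittest.TestCase):
    # Unit tests usually exercise add() and checkout() in isolation;
    # a scenario test walks through a realistic sequence where the
    # state left by one step changes what the next step may do.
    def test_add_then_checkout_then_add_again(self):
        cart = Cart()
        cart.add("book")
        cart.checkout()
        # This rule only shows up after the sequence of steps above.
        with self.assertRaises(ValueError):
            cart.add("another book")


if __name__ == "__main__":
    unittest.main()
```

The individual assertions are trivial; the value is in walking through a realistic sequence rather than testing each method against a freshly constructed object.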

In another "Agile Testing" case, the testing team had put enormous effort into automating regression tests and user acceptance tests. Before they were through, they had more lines of code in the test automation stack than in the product it was supposed to be testing. Guess what happened? The automation stack became buggy, unwieldy and unreliable, and displayed the same problems that any software development project suffers from. In this case, the automation was done by the least skilled programmers, with a much smaller staff than the development team. To counter this, we did more well-thought-out and carefully planned manual exploratory testing, and threw out buggy, regression-focused automation code. A lot of those tests should never have been automated in that context, because a human is much faster and far superior at many kinds of tests. Furthermore, we found that the entire test environment had been optimized for the automated tests. The inherent system variability that the computers couldn't handle (but humans could!), not to mention quick visual checks (which computers can't do well), had been factored out. We did not have a system in place that was anything close to what any of our customers used, but the automation worked (somewhat). Scary.

After some rework on the testing process, we found it cheaper, faster and more effective to have humans do those tests, and we focused more on leveraging the tool to help achieve the goals of the team. Instead of trying to automate the manual regression tests that were originally written for human testers, we relied on test automation to provide simulation. Running simulators and manual testing at the same time was a powerful investigative tool. Combining simulation with observant manual testing revealed false positives in some of the automated tests, which had been unwittingly released to production in the past. We even extended our automation to include high volume test automation, and we were able to greatly increase our test effectiveness by really taking advantage of the power of tools. Instead of trying to replicate human activities, we automated things that computers are superior at.
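
To make "high volume test automation" a little more concrete, here is a minimal sketch, assuming a hypothetical record format and a simple encode/decode round-trip invariant (both are stand-ins, not details from the project above). The point is to lean on the machine's stamina and check an invariant far more times than a human ever could, rather than to mimic a human tester's scripted steps.

```python
import json
import random


def high_volume_roundtrip_check(iterations=100_000, seed=1234):
    """Hammer a round-trip invariant (decode(encode(x)) == x) with many
    generated records; the fixed seed keeps any failure reproducible."""
    rng = random.Random(seed)
    for i in range(iterations):
        record = {
            "id": rng.randint(0, 10**9),
            "amount": round(rng.uniform(-1e6, 1e6), 2),
            "note": "".join(rng.choice("abcde ") for _ in range(rng.randint(0, 20))),
        }
        if json.loads(json.dumps(record)) != record:
            raise AssertionError(f"round-trip failed at iteration {i}: {record!r}")


if __name__ == "__main__":
    high_volume_roundtrip_check()
    print("invariant held for all generated records")
```

A hundred thousand generated records run in seconds; asking a human to do the same check even once per release would be a poor use of their attention.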

Don’t get me wrong – I’m a practitioner and supporter of test automation, but I am frustrated by reckless test automation. As Donald Norman reminds us, we can automate some human tasks with technology, but we lose something when we do. In the case of test automation, we lose thoughtful, flexible, adaptable, “agile” testing. In some tasks, the computer is a clear winner over manual testing. (Remember that the original “computers” were humans doing math – specifically calculations. Technology was used to automate computation because it is a task we weren’t doing so well at. We created a machine to overcome our mistakes, but that machine is still not intelligent.)

Here's an example. On one application I worked on, it took close to two weeks for testers to do manual credit card validation. This work was error-prone (we aren't that great at number crunching, and we tire of repetitive tasks). We wrote a simple automated test suite to do the validation, and it took about a half hour to run. We then complemented the automated test suite with thoughtful manual testing. After an hour and a half of both automated testing (pure number crunching) and manual testing (usually scenario testing), we had a lot of confidence in what we were doing. We found this combination much more powerful than pure manual testing or pure automated testing. And it was faster than the old way as well.
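
The post doesn't say which validation rules that suite checked, but as a hedged sketch of how little code the number-crunching side can take, here is a small suite built around the well-known Luhn checksum as a stand-in:

```python
import unittest


def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, subtract 9
    from any double over 9, and the total must be divisible by 10."""
    digits = [int(d) for d in number if d.isdigit()]
    if not digits or len(digits) != len(number):
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


class CardNumberValidationTest(unittest.TestCase):
    def test_known_valid_and_invalid_numbers(self):
        # Widely published test card numbers; the last fails the checksum.
        self.assertTrue(luhn_valid("4111111111111111"))
        self.assertTrue(luhn_valid("5555555555554444"))
        self.assertFalse(luhn_valid("4111111111111112"))


if __name__ == "__main__":
    unittest.main()
```

Run against a full set of test card numbers, the machine finishes in minutes what took testers days, leaving the humans free for the scenario testing described above.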

When automating, look at what you gain by automating a test, and what you lose. Remember, until computers become intelligent, we can't automate testing, only tasks related to testing. Also, as we move further away from the code context, it usually becomes more difficult to automate tests, and the trade-offs have greater implications. It's important to design automated tests with team goals in mind, and to be aware of the potential for enormous maintenance costs in the long term.

Please don’t become reckless trying to fulfill an ideal of “100% test automation”. Instead, find out what the goals of the company and the team are, and see how all the tools at your disposal, including test automation can be harnessed to help meet those goals. “Test automation” is not a solution, but one of many tools we can use to help meet team goals. In the end, reckless test automation leads to feckless testing.

Update: I talk more about alternative automation options in my Man and Machine article, and in chapter 19 in the book: Experiences of Test Automation.

Testing Debt

When I'm working on an agile project (or any project using an iterative lifecycle), an interesting phenomenon occurs. I've been struggling to come up with a name for it, and conversations with Colin Kershaw have helped me settle on "testing debt". (Note: Johanna Rothman has touched on this before; she considers it to be part of technical debt.) Here's how it works:

  • in iteration one, we test all the stories as they are developed, and are in synch with development
  • in iteration two, we remain in synch testing stories, but when we integrate what has been developed in iteration one with the new code, we now have more to test than just the stories developed in that iteration
  • in iteration three, we have the stories to test in that iteration, plus the integration of the features developed in iterations that came before

As you can see, integration testing piles up. Eventually, we have so much integration testing to do as well as story testing that we have to sacrifice one or the other because we are running out of time. To end the iteration (often two to four weeks in length), some sort of testing needs to be cut in this iteration and looked at later. I prefer keeping in synch with development, so I consciously incur "integration testing debt", and we schedule time at the end of development to test a completed system.

Colin and I talked about this, and we explored other kinds of testing we could be doing. Once we had a sufficiently large list of kinds of testing (unit testing, "ility" testing, etc.), it became clear that "testing debt" was more appropriate than "integration testing debt".

Why do we want to test that much? As I've noted before, we can do testing in three broad contexts: the code context (addressed through TDD), the system context and the social context. The social context is usually the domain of conventional software testers, and tends to rely on testing through a user interface. At this level, the application becomes much more complex, greater than the sum of its parts. As a result, there is a lot of room for different testing techniques to provide coverage. We can get pretty good coverage at the code level, but we end up with many more test possibilities as we move towards the user interface.

I’m not talking about what is frequently called “iteration slop” or “trailer-hitched QA” here. Those occur when development is done, and testing starts at the end of an iteration. The separate QA department or testing group then takes the product and deems it worthy of passing the iteration after they have done their testing in isolation. This is really still doing development and testing in silos, but within an iterative lifecycle.

I’m talking about doing the following within an iteration, alongside development:

  • work as a sounding board with development on emerging designs
  • help generate test ideas prior to story development (generative TDD)
  • help generate test ideas during story development (elaborative TDD)
  • provide initial feedback on a story under development
  • test a story that has completed development
  • integration test the product developed to date

Of note, when we are testing alongside development, we can actually engage in more testing activities than when working in phases (or in a "testing" phase near the end). We are able to complete more testing, but that can require more testers to still meet our timelines. As we incur more testing debt throughout a project, we have some options for dealing with it. One is to leave off story testing in favour of integration testing. I don't really like this option; I prefer keeping the feedback loop as tight as we can on what is being developed now. Another is to schedule a testing phase at the end of the development cycle to do all the integration, "ility" and system testing. Again, I find this can cause a huge lag in the feedback loop.

I prefer a trade-off. We keep as tight a feedback loop as we can on testing the stories being developed, so we stay in synch with the developers. We do as much integration, system and "ility" testing as we can in each iteration, but when we are running out of time, we incur some testing debt in these areas. As the product is developed further (and there is now much more potential for testing), we bring in more testers to help address the testing debt, and bring on the maximum number we can near the end. We schedule a testing iteration at the end to catch up on the testing debt that we determine will best help us mitigate project risk.

There are several kinds of testing debt we can incur:

  • integration testing
  • system testing
  • security testing
  • usability testing
  • performance testing
  • some unit testing

And the list goes on.

This idea is very much a work-in-progress. Colin and I have both noticed that on the development side, we are also incurring testing debt. Testing is an area with enormous potential, as Cem Kaner has pointed out in “The Impossibility of Complete Testing” (Presentation) (Article).

Much like technical debt, we can incur it unknowingly. Unlike refactoring, I don't know of a way to repay this other than to strategically add more testers, and to schedule time to pay it back when we are dealing with contexts other than program code. Even in the code context, we may still incur testing debt that refactoring doesn't completely pay down.

How have you dealt with testing debt? Did you realize you were incurring this debt, and if so, how did you deal with it? Please drop me a line and share your ideas.

The Role of a Tester on Agile Projects

I've stopped using this phrase: "The role of a tester on agile projects". There has been endless debate over whether there should be dedicated testers on agile projects. In the end, endless debate doesn't appeal to me. I've been on enough agile teams now to see that testers can add value in pretty much the same way they always have. They provide a service to a team and customers that is information-based. As James Bach says: "testing lights the way." Programmer testing and tester testing complement each other, and agile projects can provide an ideal environment for an amazing amount of collaboration and testing. It can be hard to break into some agile projects, though, when many agilists seem to view conventional testing with indifference or, in some cases, with disdain. (Bad experiences with the QA Police don't help, so the testing community bears some responsibility for this poor view of testing.)

What I find interesting is hearing experiences from people on Agile projects who fit in and contributed on Agile teams. I like to hear stories about how a Tester worked on an XP team, or how a Business Analyst fit in and thrived on an Agile team. One of the most fascinating stories I’ve encountered was a technical writer who worked on an XP team. It was amazing to find out how they adapted and filled a “team member role” to help get things done. It’s even more fascinating to find how the roles of different team members change over time, and how different people learn new skills and roll up their sleeves to get things done. There is a lot of knowledge in areas such as user experience work, testing, technical writing and others that Agile team members can learn from. In turn, they can learn a tremendous amount from Agilists. This collaborative learning helps move teams forward and can really push a team’s knowledge envelope. When people share successes and failures of what they have tried, we all benefit.

I like to hear of people who work on a team in spite of being told "dedicated testers aren't in the white book", or "our software doesn't need documentation". I'm amazed at how adaptable smart, talented people are who are blazing trails and adding value on teams that have challenging and different constraints. A lot of the "we don't need your role" discussion sounds like the same old arguments that testers, technical writers, user experience folks and others have already been hearing for years from non-Agilists. Interestingly enough, those who work on agile projects often report that in spite of initial resistance, they manage to fit in and thrive once they adapt. Those who were their biggest opponents at the beginning of a project often become their biggest supporters. This says to me that there are smart, capable people from a lot of different backgrounds who can offer something to any team they are a part of.

The Agile Manifesto says: "we value: Individuals and interactions over processes and tools." Why then is there so much debate over the "tester role"? Many Agile pundits dismiss having dedicated testers on Agile projects. I often hear: "There is no dedicated tester role needed on an Agile team". Isn't excluding a "role" from a team putting a process over people? If someone joins an Agile team who is not a developer, and they believe in the values of the methodology and want to work with a great team, do we turn them away? Shouldn't we embrace the people even if the particular process we are following does not spell out their duties?

I would like to see people from non-developer roles be encouraged to try working on agile teams and to share successes and failures so we can all benefit. I still believe software development has a long way to go, and we should try to improve every process. A danger of codifying and spreading a process is that it doesn't have all the answers for every team. When we don't have all the answers, we need to look at the motivations and values behind a process. For example, what attracted me most to XP were the values. My view on software processes is that we should embrace the values, and use them as a base to strive towards constant improvement. That means we experiment and push ideas forward, fight apathy and hubris, and, as Deming said, drive out fear.

Software Testing and Scrum

Edit: update. I wrote an article for InformIT on this topic which was published Sept. 30, 2005.

I’ve been getting asked lately about how a software testing or QA department fits when a development team adopts Scrum. I’ll post my experiences of working as a conventional tester on a variety of Scrum projects. Stay tuned for more posts on this subject.

Testing Values

I was thinking about the agile manifesto, and this blatant ripoff came to mind. As a software tester, I’ve come to value the following:

  • bug advocacy over bug counts
  • testable software over exhaustive requirements docs
  • measuring product success over measuring process success
  • team collaboration over departmental independence

Point 1: Project or individual bug counts are meaningless unless the important bugs are getting fixed. There are useful bug-count-related measurements, provided they are used in the right context. However, bug counts themselves don't have a direct impact on the customer. Frequently, testers are motivated much more by how many bugs they log than by how many important bugs they found, reported, advocated for, and helped get fixed before the product went out the door.

Point 2: We usually don't sell requirements documents to our customers (we tend to sell software products), and these docs often provide a false sense of all that is testable. Given a choice, I'd rather test the software, finding requirements by interacting with customers and collaborating with the team, than follow requirements documents. At least then we can start providing feedback on the software. At best, requirements docs are an attempt to put tacit knowledge on paper. At worst, they are out of date and out of touch with what the customer wants. Planning tests only from requirements documents leaves us open to faults of omission.

Point 3: I find the obsession with processes in software development a bit puzzling, if not absurd. "But to have good software, we need to have a good process!" you say. Sure, but I fear we measure the wrong things when we look too much at the process. I've seen wonderful processes produce terrible products too many times. As a customer, I haven't bought any software processes yet, but I do buy software products. I don't think about processes at all as a consumer. I'll take product excellence over "process excellence" any day. The product either works or doesn't work as expected. If it doesn't, I quietly move on and don't do business with that company any more.

I have seen what I would call process zealotry, where teams were pressured not to talk about project failures because they "would cast a bad light" on the process that was used. I have seen this in "traditional" waterfall-inspired projects, and interestingly enough, in the agile world as well. If we have some problems with the product, learn from the mistakes and strive to do better. Don't cover up failures because you fear that your favorite process might get some bad press. Fix the product, and make the customer happy. If you don't, they will quietly move on and you will eventually be out of business.

Point 4: The "QA" line of thinking that advocates an independent testing team doesn't always work well in my experience. Too often, the QA folks end up as the process police, at odds with everyone else, and not enough testing gets done. Software testing is a challenging intellectual exercise, and software programs are very complex. The more testing we can do, and the more we can collaborate to do more effective testing, the better. The entire team should be the Quality Assurance department. We succeed or fail as a team, and product quality, as well as adherence to development processes, is everyone's responsibility.

Conventional Testers on Agile Projects – Getting Started Continued

Some of what you find out about agile methods may sound familiar. In fact, many development projects have adopted solutions that some agile methods employ. You may have already adjusted to some agile practices as a conventional tester without realizing it. For example, before the term “agile” was formally adopted, I was on more traditional projects that had some “agile” elements:

  • when I started as a tester, I spent a lot of my first year pair testing with developers
  • during the dot com bubble, we adopted an iterative life cycle with rapid releases at least every two weeks
  • one project required quick builds, so the team developed something very similar to a continuous integration build system with heavy test automation
  • developers I worked with had been doing refactoring since the early '80s; they didn't call it by that name, and they used checkpoints in their code instead of the xUnit tests that would be used now
  • in a formal waterfall project, we had a customer representative on the team, and did quick iterations in between the formal signoffs from phase to phase
  • one project adapted Open Source-inspired practices and rapid prototyping

These actions were done by pragmatic, product-focused companies who needed to get something done to please the customer. Many of these projects would not consider themselves to be "agile" – they were just getting the job done. The difference between them and an agile development team is that agile methods are a complete methodology driven towards a certain goal, rather than a set of practices a team has adjusted to improve what it is doing.

Other conventional testers tell me about projects they were on that were not agile, but did agile-like things. This shouldn't be surprising. The iterative lifecycle has been around for many years (at least back to the 1940s). People have used a lot of methodologies within the iterative lifecycle, but they haven't necessarily codified them into a formal method as some agile champions have. A lot of what agile methods talk about isn't new. Jerry Weinberg has said that the methods employed on the Mercury project team he was on in the early '60s look to be indistinguishable from what is now known as Extreme Programming.1

Another familiar aspect of agile methods is the way projects are managed. Much of the agile management theory draws very heavily from the quality movement, lean manufacturing, and what some might call Theory Y management. Like the quality pundits of the past, many agile management writers are once again educating workers about the problems of Taylorism, or Theory X management.

What is new with agile methods are the comprehensive methodology descriptions driven from experience. From these practices, disciplined design and development methodologies such as Test-Driven Development have improved rapidly. Most importantly, a shared language has emerged for practices like "unit testing", "refactoring", "continuous integration" and others – many of which might have been widely practiced but called different things. This shared language helps a community of practice share and improve ideas much more efficiently. Common goals are much more easily identified when everyone involved is using the same terminology. As a result, the needs of the community have been quickly addressed by tool makers, authors, consultants and practitioners.

This has several implications for conventional testers that require some adjustments:

  • a new vocabulary of practices, rituals, tools and roles
  • getting involved in testing from day one
  • testing in iterations which are often 2-4 weeks long
  • an absence of detailed, formalized requirements documents developed up front
  • requirements done by iteration in backlogs or on 3×5 story cards
  • often, a lack of a formal bug-tracking system
  • working knowledge of tools such as refactoring and TDD-based IDEs, xUnit automation and continuous integration build tools (a small xUnit-style sketch follows this list)
  • a team focus over individual performance
  • developers who are obsessed with testing
  • working closely with the entire team in the same work area
  • not focusing on individual bug counts or lines of code
  • less emphasis on detailed test plans and scripted test cases
  • heavy emphasis on test automation using Open Source tools
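
For conventional testers who haven't met xUnit automation yet, here is a minimal, hedged sketch using Python's unittest (one member of the xUnit family). The function under test is hypothetical; in TDD, a test like this would be written first and watched to fail before the code exists.

```python
import unittest


def shipping_cost(weight_kg):
    """Hypothetical production function under test."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.00 if weight_kg <= 1 else 5.00 + (weight_kg - 1) * 2.50


class ShippingCostTest(unittest.TestCase):
    # xUnit style: small, fast, automated checks that run on every build,
    # typically from the IDE or the continuous integration server.
    def test_minimum_charge_for_light_parcels(self):
        self.assertEqual(shipping_cost(0.5), 5.00)

    def test_heavier_parcels_charge_per_extra_kilogram(self):
        self.assertEqual(shipping_cost(3), 10.00)

    def test_rejects_non_positive_weight(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)


if __name__ == "__main__":
    unittest.main()
```

A passing suite like this is the "green bar" developers point to, and it is what a continuous integration build runs on every check-in.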

Some of these changes sound shocking to a conventional tester. Without a detailed requirements document, how can we test? Why would a team not have a bug tracking database? What about my comprehensive test plans and detailed manual regression test suites? Where are the expensive capture/replay GUI automation tools? How can we keep up with testing when the project is moving so quickly?

A good place to address some of these questions is: Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord.

We’ll address some of these challenges in this series, as well as examples of testing activities that conventional testers can engage in on agile projects.

1. Larman and Basili, "Iterative and Incremental Development: A Brief History", 2003, p. 48