Category Archives: agile

The Role of a Tester on Agile Projects

I’ve stopped using this phrase: “The role of a tester on agile projects”. There has been endless debate over whether there should be dedicated testers on agile projects. In the end, endless debate doesn’t appeal to me. I’ve been on enough agile teams now to see that testers can add value in pretty much the same way they always have. They provide an information-based service to a team and its customers. As James Bach says: “testing lights the way.” Programmer testing and tester testing complement each other, and agile projects can provide an ideal environment for an amazing amount of collaboration and testing. It can be hard to break into some agile projects, though, when many agilists seem to view conventional testing with indifference or, in some cases, with disdain. (Bad experiences with the QA Police don’t help, so the testing community bears some responsibility for this poor view of testing.)

What I find interesting is hearing from people who fit in and contributed on agile teams. I like to hear stories about how a tester worked on an XP team, or how a business analyst fit in and thrived on an agile team. One of the most fascinating stories I’ve encountered was a technical writer who worked on an XP team. It was amazing to find out how they adapted and filled a “team member role” to help get things done. It’s even more fascinating to find out how the roles of different team members change over time, and how different people learn new skills and roll up their sleeves to get things done. There is a lot of knowledge in areas such as user experience work, testing and technical writing that agile team members can learn from. In turn, those specialists can learn a tremendous amount from agilists. This collaborative learning helps move teams forward and can really push a team’s knowledge envelope. When people share successes and failures of what they have tried, we all benefit.

I like to hear of people who work on a team in spite of being told “dedicated testers aren’t in the white book” or “our software doesn’t need documentation”. I’m amazed at how adaptable smart, talented people are who are blazing trails and adding value on teams with challenging and unfamiliar constraints. A lot of the “we don’t need your role” discussion sounds like the same old arguments testers, technical writers, user experience folks and others have been hearing for years from non-agilists. Interestingly enough, those who work on agile projects often report that in spite of initial resistance, they manage to fit in and thrive once they adapt. Those who were their biggest opponents at the beginning of a project often become their biggest supporters. This says to me that there are smart, capable people from a lot of different backgrounds who can offer something to any team they are a part of.

The Agile Manifesto says: “we value: Individuals and interactions over processes and tools.” Why, then, is there so much debate over the “tester role”? Many agile pundits dismiss having dedicated testers on agile projects. I often hear: “There is no dedicated tester role needed on an agile team.” Isn’t excluding someone from a team because of their “role” putting a process over people? If someone who is not a developer joins an agile team, believes in the values of the methodology and wants to work with a great team, do we turn them away? Shouldn’t we embrace the people even if the particular process we are following does not spell out their duties?

I would like to see people from non-developer roles be encouraged to try working on agile teams and to share their successes and failures so we can all benefit. I still believe software development has a long way to go, and we should try to improve every process. A danger of codifying and spreading a process is that it doesn’t have all the answers for every team. When we don’t have all the answers, we need to look at the motivations and values behind a process. For example, what attracted me most to XP were the values. My view on software processes is that we should embrace the values and use them as a base from which to strive for constant improvement. That means we experiment and push ideas forward, fight apathy and hubris and, as Deming said, drive out fear.

Software Testing and Scrum

Update: I wrote an article for InformIT on this topic, which was published Sept. 30, 2005.

I’ve been getting asked lately about how a software testing or QA department fits when a development team adopts Scrum. I’ll post my experiences of working as a conventional tester on a variety of Scrum projects. Stay tuned for more posts on this subject.

Testing Values

I was thinking about the agile manifesto, and this blatant ripoff came to mind. As a software tester, I’ve come to value the following:

  • bug advocacy over bug counts
  • testable software over exhaustive requirements docs
  • measuring product success over measuring process success
  • team collaboration over departmental independence

Point 1: Project or individual bug counts are meaningless unless the important bugs are getting fixed. There are useful bug count-related measurements, provided they are used in the right context. However, bug counts themselves don’t have a direct impact on the customer. Too often, testers are motivated more by how many bugs they log than by how many important bugs they find, report, advocate for and pitch in to help get fixed before the product goes out the door.

Point 2: We usually don’t sell requirements documents to our customers (we tend to sell software products), and these docs often provide a false sense of all that is testable. Given a choice, I’d rather test the software, discovering requirements by interacting with customers and collaborating with the team, than test only from requirements documents. At least then we can start providing feedback on the software. At best, requirements docs are an attempt to put tacit knowledge on paper. At worst, they are out of date and out of touch with what the customer wants. Planning tests only from requirements documents leaves us open to faults of omission.

Point 3: I find the obsession with processes in software development a bit puzzling, if not absurd. “But to have good software, we need to have a good process!” you say. Sure, but I fear we measure the wrong things when we look too much at the process. I’ve seen wonderful processes produce terrible products too many times. As a customer, I haven’t bought any software processes yet, but I do buy software products. I don’t think about processes at all as a consumer. I’ll take product excellence over “process excellence” any day. The product either works or doesn’t work as expected. If it doesn’t, I quietly move on and don’t do business with that company any more.

I have seen what I would call process zealotry, where teams were pressured not to talk about project failures because they “would cast a bad light” on the process that was used. I have seen this in “traditional” waterfall-inspired projects and, interestingly enough, in the agile world as well. If we have problems with the product, we should learn from the mistakes and strive to do better. Don’t cover up failures because you fear that your favorite process might get some bad press. Fix the product, and make the customer happy. If you don’t, they will quietly move on and you will eventually be out of business.

Point 4: The “QA” line of thinking that advocates an independent testing team doesn’t always work well in my experience. Too often, the QA folks end up as the process police, at odds with everyone else, and not enough testing gets done. Software testing is a challenging intellectual exercise, and software programs are very complex. The more testing we can do, and the more we can collaborate to make that testing effective, the better. The entire team should be the Quality Assurance department. We succeed or fail as a team, and product quality, as well as adherence to development processes, is everyone’s responsibility.

Conventional Testers on Agile Projects – Getting Started Continued

Some of what you find out about agile methods may sound familiar. In fact, many development projects have adopted solutions that some agile methods employ. You may have already adjusted to some agile practices as a conventional tester without realizing it. For example, before the term “agile” was formally adopted, I was on more traditional projects that had some “agile” elements:

  • when I started as a tester, I spent a lot of my first year pair testing with developers
  • during the dot com bubble, we adopted an iterative life cycle with rapid releases at least every two weeks
  • one project required quick builds, so the team developed something very similar to a continuous integration build system with heavy test automation
  • developers I worked with had been doing refactoring since the early ’80s; they didn’t call it by that name, and used checkpoints in their code instead of the xUnit tests that would be used now (the sketch after this list contrasts the two)
  • in a formal waterfall project, we had a customer representative on the team, and did quick iterations in between the formal signoffs from phase to phase
  • one project adapted Open Source-inspired practices and rapid prototyping
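
As referenced in the refactoring item above, here is a minimal, hypothetical sketch of the contrast. The class, values and names are invented for illustration; the point is only the difference between an inline checkpoint that runs inside the production code and an xUnit-style test (JUnit here) that lives outside it and can be run on every build.

```java
import junit.framework.TestCase;

// Checkpoint style: the expectation is embedded in the production code itself.
class InvoiceCalculator {
    double totalWithTax(double subtotal, double taxRate) {
        double total = subtotal * (1 + taxRate);
        // checkpoint: fail fast if the result is obviously wrong
        // (only checked when assertions are enabled with -ea)
        assert total >= subtotal : "total should never be less than subtotal";
        return total;
    }
}

// xUnit style: the same expectation expressed as a separate, repeatable test
// that a continuous integration build can run on every check-in.
public class InvoiceCalculatorTest extends TestCase {
    public void testTotalIncludesTax() {
        InvoiceCalculator calc = new InvoiceCalculator();
        assertEquals(107.0, calc.totalWithTax(100.0, 0.07), 0.001);
    }
}
```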

These practices were adopted by pragmatic, product-focused companies who needed to get something done to please the customer. Many of these projects would not have considered themselves “agile” – they were just getting the job done. The difference is that agile methods are complete methodologies driven towards a goal, rather than a collection of practices a team has adjusted to improve what it is doing.

Other conventional testers tell me about projects they were on that were not agile, but did agile-like things. This shouldn’t be surprising. The iterative lifecycle has been around for many years (at least back to the 1940s). People have used a lot of methodologies within the iterative lifecycle without necessarily codifying them into a formal method, as some agile champions have. A lot of what agile methods talk about isn’t new. Jerry Weinberg has said that the methods employed on the Mercury project team he was on in the early ’60s look to be indistinguishable from what is now known as Extreme Programming.1

Another familiar aspect of agile methods is the way projects are managed. Much of agile management theory draws heavily from the quality movement, lean manufacturing, and what some might call Theory Y management. Like the quality pundits of the past, many agile management writers are once again educating workers about the problems of Taylorism, or Theory X management.

What is new with agile methods is comprehensive methodology descriptions that are driven by experience. From these practices, disciplined design and development methodologies such as Test-Driven Development have improved rapidly. Most importantly, a shared language has emerged for practices like “unit testing”, “refactoring”, “continuous integration” and others – many of which might have been widely practiced but called different things. This shared language helps a community of practice share and improve ideas much more efficiently. Common goals are much more easily identified when everyone involved is using the same terminology. As a result, the needs of the community have been quickly addressed by tool makers, authors, consultants and practitioners.

This has several implications for conventional testers that require some adjustments:

  • a new vocabulary of practices, rituals, tools and roles
  • getting involved in testing from day one
  • testing in iterations which are often 2-4 weeks long
  • an absence of detailed, formalized requirements documents developed up front
  • requirements done by iteration in backlogs or on 3×5 story cards
  • often, a lack of a formal bug-tracking system
  • working knowledge of tools such as IDEs with refactoring and TDD support, xUnit automation and continuous integration build tools
  • a team focus over individual performance
  • developers who are obsessed with testing
  • working closely with the entire team in the same work area
  • not focusing on individual bug counts or lines of code
  • less emphasis on detailed test plans and scripted test cases
  • heavy emphasis on test automation using Open Source tools (a hypothetical sketch of such a test follows this list)
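
As a purely hypothetical illustration of the last point above, here is what a small, code-based automated check built on open source tools might look like. The original posts don’t name specific tools, so Selenium WebDriver with its HtmlUnit driver, the URL, and the page elements below are all assumptions; the sketch simply stands in for the kind of programmer-friendly automation that replaces expensive capture/replay GUI suites.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

// Hypothetical story-level check: a registered user can log in and reach their dashboard.
public class LoginStoryTest {

    @Test
    public void registeredUserCanLogIn() {
        WebDriver driver = new HtmlUnitDriver(); // headless browser; any WebDriver implementation works
        try {
            driver.get("http://localhost:8080/login"); // invented URL for the application under test
            driver.findElement(By.name("username")).sendKeys("pat");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // The expected page title is an assumption about the application.
            assertTrue(driver.getTitle().contains("Dashboard"));
        } finally {
            driver.quit();
        }
    }
}
```

Because a test like this is plain code in version control, it can run in the continuous integration build alongside the unit tests rather than living in a separate, licensed tool.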

Some of these changes sound shocking to a conventional tester. Without a detailed requirements document, how can we test? Why would a team not have a bug tracking database? What about my comprehensive test plans and detailed manual regression test suites? Where are the expensive capture/replay GUI automation tools? How can we keep up with testing when the project is moving so quickly?

A good place to address some of these questions is: Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord.

We’ll address some of these challenges in this series, as well as examples of testing activities that conventional testers can engage in on agile projects.

1. Larman and Basili, “Iterative and Incremental Development: A Brief History”, 2003, p. 48.

Conventional Testers on Agile Projects – Getting Started

At this point, the conventional tester says that they can really identify with the values, understand some of the motivations behind agile methods and are ready to jump in. “How do I get started? What do I do?”

Testers Provide Feedback

I’ve talked about this before in the Testers Provide Feedback blog post.

A conventional tester starting out on an agile team should engage in testing activities that provide relevant feedback. It’s as simple as that.

I’m hard pressed to think of any activity that doesn’t tie into the tester-as-service role, ultimately helping the tester provide feedback. Which activity that is depends on what the project needs right now.

Testing is what I do to provide good feedback on any development project. What is relevant depends on the goals and on what the team needs. That can mean risk assessments, bug reports, a thumbs-up on a new story – all sorts of things.

To have confidence in that feedback, we can engage in many activities to gather information. Exploratory testing is one effective way to do this; another is to use automated tests. There are lots of ways to gather information: inquiring, observing, and reporting what we find. What is key to me is to figure out what information the team needs at a particular time. What are some things that have worked well for you? Please let me know.

For me, a testing activity is useful to the extent that it helps me get the information I need to provide useful feedback to the rest of the team. Sometimes it involves working with a customer and helping identify risks. Other times it’s a status report on automated tests that I give to the team. It may involve manual testing on a bug hunt, or another testing mission that calls for activities beyond automated tests. Sometimes it’s real-time feedback when pair testing with a developer, or working with a customer to help them develop tests. The kind of feedback needed on a project guides what kind of testing activities I need to do.

Providing information is central. As James Bach says: “testing lights the way”. If I am not able to provide more feedback than the automated tests and customer are already providing, then I need to evaluate whether I should be on that agile team or not. If a particular area is not being addressed well and the team needs more information, then I should focus activities on that area, not focus slavishly on what role I think I should be filling.

Doing what needs to be done to help the team and the customer have confidence in the product is central. That means stepping out of comfort zones, learning new things and pitching in to help. This can be intimidating at first, but it helps me gather more information and learn what kinds of feedback the team needs. It’s a challenge, and those who enjoy challenges might identify with this way of thinking. Doing what needs to be done helps testers gather different kinds of information that can be used to provide the right kind of feedback.

More Information for Testers

In The Ongoing Revolution in Software Testing, Cem Kaner describes the kind of thinking that I am trying to get across. Testers who identify and agree with what Cem Kaner has said should have few problems adjusting to agile teams. This article is worth reading for anyone who is thinking about software testing.

Continue reading the series >>

Conventional Testers on Agile Projects – Values

Values Are Key

Good conventional software testers can potentially offer a lot to a team provided their working attitude is aligned with that of the team. One of the most important aspects of agile development is the values that many agile methods encourage. A team focus rather than an adversarial relationship is important, and testers on agile teams tend to agree with the principles guiding the development process. A conventional tester should at least understand them and follow them when they are on an agile project. Understanding the values goes a long way to understanding the other activities, and why agile teams are motivated to do the things they do. What is important to an agile team? For one, working software. It isn’t the process that is important, it’s the product we deliver in the end. Agile methods are often pragmatic approaches to that end.

A good place to start to learn about values on agile projects is by looking at the values for Extreme Programming. These are the values that I personally identify the most with.

Independence

Many times I hear that testing teams should remain separate from development teams so that they can retain their independence. Even agilists have different opinions on this. This might be due to a misunderstanding of what “independence” can mean on a project. Testers must be independent thinkers, and sometimes need to stick to their guns to get important bugs fixed. To be an independent thinker who advocates for the customer does not necessitate being in a physically independent, separate testing department. Agile projects tend to favor integrated teams, and there are a lot of reasons why having separate teams can cause problems. It can slow down development processes, discourage collaboration, encourage disparate team goals, and impede team communication.

Testers who are integrated with a development team need not sacrifice their independent thinking just because they are sitting with and working closely with developers. The pros of integration can far outweigh the cons. If your project needs an independent audit, hire an auditing team to do just that. Then you should be guaranteed an independent, outside opinion. In other industries, an audit is generally done by an outside team. Any team that does a formal audit of itself wouldn’t be taken seriously. That doesn’t mean the team can’t do a good job of auditing itself, or that doing work to prepare for a formal audit isn’t worthwhile. What it means is that a formal audit by an outsider overcomes a conflict of interest. If your team needs independent auditing, prepare for it by testing yourselves, and hire an outsider to do the audit.

I personally would rather be influenced by the development team and collaborate with them to do more testing activities. I get far more testing work done the more I collaborate. If I become biased towards the product in the process, I will trade that for the knowledge and better testing I am able to do by collaborating.

Do What Needs to be Done

A talented professional who cares about the quality of the product they work on and believes in the values of agile methods should be able to add to any team they are a part of. This isn’t limited to those who do software development or software testing; it also includes technical writers, business analysts and project managers. If the team values are aligned, roles will emerge and come and go as needs arise and change. “That’s not my job!” should not be in an agile team member’s vocabulary.

On an agile project it is important to not stick slavishly to a job title, but to pitch in and do whatever it takes to get the job done. This is something that agilists value. If you can work with them, they can work with you, provided your values are aligned.

Understand the Motivations

The motivation behind the values of agile methods is important. Read Kent Beck’s Extreme Programming Explained for more on values. Read Ken Schwaber and Mike Beedle’s Agile Software Development with Scrum to get insight into how an agile methodology came about. The first couple of chapters of the Scrum book really paint a picture of why agile methods can work.

My favourite line in the Schwaber/Beedle Scrum book is:

They inspected the systems development processes that I brought them. I have rarely provided a group with so much laughter. They were amazed and appalled that my industry, systems development, was trying to do its work using a completely inappropriate process control model.

p. 24, Agile Software Development with Scrum, 2002, Prentice Hall.

This book provides a lot of insight into what motivated people to try something new in software development, and the rationale behind an agile methodology.

The values behind agile methods flow from the early motivations and discoveries of pragmatic practitioners, and these books are well worth reading. When you understand where the knowledge of delivering working systems was drawn from, the values and activities really start to make sense.

Continue reading the series >>

Conventional Testers on Agile Projects – Agile Methods

With the rise in popularity of agile development, professionals from the Quality Assurance or software testing world are finding themselves on agile teams. Customers are becoming more conscious of how they spend money on the software they rely on, and as such are starting to demand that dedicated software testers or Quality Assurance teams be involved with development.

Professionals who work in software testing bring a special set of skills to a project. These are people who are thinking about testing all the time, and they work on projects on behalf of the development team as well as on behalf of the customer. They can help the team have more confidence in the product they have developed, and help the customer have more confidence in the product that has been delivered. Some customers realize the importance of testing, and often demand that testing professionals be on agile projects. Some developers also value these skills, and want to work with people to learn more about testing. How does a conventional tester use these skills to add value in a new and unique project environment?

What is Agile anyway?

I hear this quite often: “Our development shop is pretty chaotic, and doesn’t do much documentation. I guess we’re agile, right?” Wrong. “Agile” does not mean “undisciplined” or “no documentation”. Agile development refers to a group of very disciplined methodologies that share similar characteristics. Agile processes tend to follow an iterative lifecycle, rely on rapid feedback, and are built on the belief that software development cannot be predictive, but must be adaptive. Many practitioners and methodology founders came out of chaotic or waterfall methodologies. They figured out what had worked for them and others, and published their findings.

Agilists tend to believe that it’s pretty much impossible to have all the information up front, so have developed systems that cope with uncertainty, and rely on evolutionary designs. The Agile Manifesto has some good descriptions of overall agile values. The Agile Alliance has some great information to look into to find out more.

But if the developers are testing now, won’t I be out of a job?

Nope. Not necessarily. In my experience, I’ve yet to see a project where I and other conventional testers didn’t find important bugs. This includes agile projects. The difference is that on an agile project, we find the important bugs faster. We are more involved with testing throughout development. Now that the developers are doing rigorous work themselves with solid automated unit tests, the products I test are much more robust. This gives me more time to focus on important testing tasks instead of dealing with lots of broken builds and unreliable software at the beginning of a testing phase.

It’s also important to realize that developer testing such as Test-Driven Development can be very different from the kind of testing that conventional testers are used to doing. Testers don’t tend to unit test the production code. Some testing experts (notably Cem Kaner and Brian Marick) describe TDD as “example driven development”. It can be viewed as design work with examples written to evolve a program until it meets requirements. The automated unit tests then provide a safety net for refactoring. If the tests fail when new code is integrated, they are fixed immediately. Chronic broken builds, hours of debugging and a lot of time-consuming troubleshooting are eliminated or greatly reduced this way.
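
As a minimal, hedged sketch of that rhythm (the class, behaviour and names below are invented, not taken from the original posts): the example is written first as a test, just enough production code is written to make it pass, and the test then stays behind as part of the refactoring safety net.

```java
import junit.framework.TestCase;

// Step 1: write the example first. This test fails until the production code below exists.
public class DiscountPolicyTest extends TestCase {
    public void testOrdersOfOneHundredOrMoreGetTenPercentOff() {
        DiscountPolicy policy = new DiscountPolicy();
        assertEquals(90.0, policy.priceAfterDiscount(100.0), 0.001);
        assertEquals(50.0, policy.priceAfterDiscount(50.0), 0.001);
    }
}

// Step 2: write just enough production code to make the example pass.
class DiscountPolicy {
    double priceAfterDiscount(double orderTotal) {
        if (orderTotal >= 100.0) {
            return orderTotal * 0.9;
        }
        return orderTotal;
    }
}

// Step 3: refactor freely. If a later change breaks this behaviour, the test fails
// when the code is integrated and is fixed immediately, before it becomes a broken build.
```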

Conventional testing and agile testing are complementary tasks. One does not necessarily replace the other. I was recently the testing lead on a project that was using Scrum with some elements of XP. I had a couple of conventional testers working with me, doing manual testing (especially exploratory testing), automating tests at several layers of the application with Open Source testing tools, and working with the customer. The customer was doing acceptance tests, the developers were doing unit tests, and the developers and testers were working together on load testing. One day, the Project Manager asked me if he could get involved with testing during some of his spare cycles. A few days later, the Business Analyst asked the same thing. For several weeks, at any given time 95% of the project team was testing. It didn’t put anyone out of a job – each person had something unique to add. In fact, the conventional testers were able to do much more testing in this environment than in others I have seen.

“OK” says the conventional tester, “I’m convinced. Where do I start?”

As a software tester, it is important to not get too caught up in “the right development methodology”. After all, it isn’t the process that is important to the end user, it’s the product. We need to be open-minded to different methods of delivering the right product on time with a reasonable level of quality.

In agile methods, the values behind the development methodology are important to understand. We’ll look at values in the next post.

Continue reading the series >>

Conventional Testers on Agile Projects – Intro

For the past year or so, I’ve been posting questions on my blog and to the development community about the role of testers on agile projects. This summer, I decided that the ship has sailed on the question: “should there be testers on agile projects?” No matter how much pundits on either side of the “no tester role” vs. “tester role” debate argue it, the market will ultimately decide. What is more interesting to me is the fact that there are conventional testers on agile projects, and they face unique challenges.

How do conventional testers end up on agile projects?

I have experienced situations like these myself, or been approached by people seeking help with them:

  • the customer tells an agile team they have to have “QA” people on their project
  • an agile pilot project has such good results that a development lead from that team is put in charge of a QA group
  • a software company with an existing QA department tries some agile methods
  • developers who want to learn more about testing request conventional testers to be on an agile project

Moving Forward

I will put a stake in the ground and answer some of the questions I’ve raised from what I have learned so far. This is the first post in a blog series on what conventional testers are doing on agile projects. Now that conventional testers are here, what do we do?

Over the past five years I’ve been involved, as a conventional tester, in agile projects or in projects trying out some agile methods. I feel woefully inadequate to really answer these questions, but demand is growing from people who read my ramblings. They are asking me to provide my own thoughts, so here we go. I’ll continue to post my thoughts in this series, which are very much a work in progress.

What I seek most is feedback from you, the reader. Are you a conventional tester on an agile project? If so, what did you do? What worked, and what didn’t? Are you a developer doing agile testing who has worked with conventional testers? What worked well? What didn’t work so well?

As a development community, I challenge each of us to help each other and work together by sharing knowledge and experience. We can leave the debating to the pundits, or for our own pursuit of knowledge. I hope we can get the ball rolling and move towards building better software products, gaining knowledge and skills and serving the customer. After all, the process isn’t what counts to the consumer, it’s the product.

Who Should Read This Blog Series

If you work in Quality Assurance, or are a professional Software Tester, I consider you to be (like me) a “conventional tester”. You tend to spend more time testing than contributing production code. If you are a conventional tester or a Business Analyst who is joining an agile project as a tester, this blog series is for you. If you are a Testing Manager or Project Manager on an agile team, you too may find this series helpful. If you are a developer who isn’t sure what to do when a customer asks you to work with people who are full-time testers or Quality Assurance folks, you may also find this series useful.

Continue reading the series >>

Ted Talks About the Customer

Ted O’Grady has an interesting post on the customer and their role on XP teams. I agree with what he has said.

It’s also important to think of the context that the software is being used in. I learned a lesson about getting users to test our software in our office vs. getting them to use the software in their own office. The Hawthorne Effect seemed to really kick in when they were on-site with us. When we observed them using the software in their own business context to solve real-world problems, they used it differently.

If the customer is out of their regular context, that may have an effect on their performance on an agile team, especially if they feel intimidated by a team of techies who outnumber them. When they are in their own office, and their team outnumbers the techies, they might be much more candid with constructive criticism. Just having a customer on-site doing the work with the team doesn’t guarantee they will approve of the final product. Often, I think, there is a danger they will approve of what the team thinks the final product should be. When we had rotating customer representatives on a team, groupthink was greatly reduced.

Testers Provide Feedback

A question that comes up very often is: what activities can conventional testers engage in on agile projects? To me, the key to testing on any project is to provide feedback. On agile projects, the code should always be available to test, and iterations are often quite short. Therefore, a tester on an agile project should perform activities that provide rapid, relevant feedback to the developers and the business stakeholders.

A tester needs to provide feedback to developers to help them gain more confidence in their code, and (as James Bach has pointed out to me), feedback to the customer to help them gain more confidence in the product that is being delivered. So an agile tester should identify areas to enhance and complement the testing that is already being done on the project to that end.

There are potentially some activities that are more helpful than others, but providing feedback is central to what testers do. Some feedback may be in the form of bug reports; some may be a confirmation that the code satisfies a story, or that the application works in a business context.