Category Archives: agile testing

Conventional Testers on Agile Projects – Getting Started

At this point, the conventional tester says that they can really identify with the values, understand some of the motivations behind agile methods and are ready to jump in. “How do I get started? What do I do?”

Testers Provide Feedback

I’ve talked about this before in the Testers Provide Feedback blog post.

A conventional tester starting out on an agile team should engage in testing activities that provide relevant feedback. It’s as simple as that.

I’m hard pressed to think of any testing activity that doesn’t tie into the tester-as-service role, ultimately helping the tester provide feedback. Which activity that is depends on what the project needs right now.

Testing is what I do to provide good feedback on any development project. What is relevant depends on what your goals are, and what the team needs. This can be risk assessments, bug reports, a thumbs up on a new story, all sorts of things.

To have confidence in that feedback, we can engage in many activities to gather information. Exploratory testing is one effective way to do this; automated tests are another. There are lots of ways to gather information by inquiring, observing, and reporting. What is key to me is to figure out what information the team needs at a particular time. What are some things that have worked well for you? Please let me know.

To me, a testing activity is useful to the extent that it helps me get the information I need to provide useful feedback to the rest of the team. Sometimes that involves working with a customer, helping identify risks or helping them develop tests. Other times it’s a status report on automated tests that I give to the team, or manual testing on a bug hunt or some other testing mission that goes beyond what the automated tests cover. Sometimes it’s real-time feedback while pair testing with a developer. The kind of feedback a project needs guides the kind of testing activities I do.

Providing information is central. As James Bach says: “testing lights the way”. If I am not able to provide more feedback than the automated tests and customer are already providing, then I need to evaluate whether I should be on that agile team or not. If a particular area is not being addressed well and the team needs more information, then I should focus activities on that area, not focus slavishly on what role I think I should be filling.

Doing what needs to be done to help the team and the customer have confidence in the product is central. That means stepping out of comfort zones, learning new things and pitching in to help. This can be intimidating at first, but it helps the tester gather more information and learn what kinds of feedback the team needs. It’s a challenge, and those who enjoy challenges might identify with this way of thinking. Doing what needs to be done helps testers gather different kinds of information that can be used to provide the right kind of feedback.

More Information for Testers

In The Ongoing Revolution in Software Testing, Cem Kaner describes the kind of thinking that I am trying to get across. Testers who identify with what Cem Kaner has said should have few problems adjusting to agile teams. This article is worth reading for anyone who is thinking about software testing.

Continue reading the series >>

Conventional Testers on Agile Projects – Values

Values Are Key

Good conventional software testers can potentially offer a lot to a team, provided their working attitude is aligned with that of the team. One of the most important aspects of agile development is the set of values that many agile methods encourage. A team focus rather than an adversarial relationship is important, and testers on agile teams tend to agree with the principles guiding the development process. A conventional tester should at least understand those principles and follow them when they are on an agile project. Understanding the values goes a long way to understanding the other activities, and why agile teams are motivated to do the things they do. What is important to an agile team? For one, working software. It isn’t the process that is important; it’s the product we deliver in the end. Agile methods are often pragmatic approaches to that end.

A good place to start to learn about values on agile projects is by looking at the values for Extreme Programming. These are the values that I personally identify the most with.

Independence

Many times I hear that testing teams should remain separate from development teams so that they can retain their independence. Even agilists have different opinions on this. This might be due to a misunderstanding of what “independence” can mean on a project. Testers must be independent thinkers, and sometimes need to stick to their guns to get important bugs fixed. To be an independent thinker who advocates for the customer does not necessitate being in a physically independent, separate testing department. Agile projects tend to favor integrated teams, and there are a lot of reasons why having separate teams can cause problems. It can slow down development processes, discourage collaboration, encourage disparate team goals, and impede team communication.

Testers who are integrated with a development team need not sacrifice their independent thinking just because they are sitting with and working closely with developers. The pros of integration can far outweigh the cons. If your project needs an independent audit, hire an auditing team to do just that. Then you are guaranteed an independent, outside opinion. In other industries, an audit is generally done by an outside team; any team that does a formal audit of itself wouldn’t be taken seriously. That doesn’t mean the team can’t do a good job of auditing itself, or that doing work to prepare for a formal audit isn’t worthwhile. What it means is that a formal audit from an outsider overcomes a conflict of interest. If your team needs independent auditing, prepare for it by testing yourselves, and hire an outsider to do the audit.

I personally would rather be influenced by the development team and collaborate with them to do more testing activities. I get far more testing work done the more I collaborate. If I become somewhat biased towards the product in the process, that is a trade I will make for the knowledge and the better testing that collaboration makes possible.

Do What Needs to be Done

A talented professional who cares about the quality of the product they work on and believes in the values of agile methods should be able to add value to any team they are a part of. This isn’t limited to those who do software development or software testing; it applies equally to technical writers, business analysts and project managers. If the team values are aligned, the roles will emerge and come and go as needs arise and change. “That’s not my job!” should not be in an agile team member’s vocabulary.

On an agile project it is important to not stick slavishly to a job title, but to pitch in and do whatever it takes to get the job done. This is something that agilists value. If you can work with them, they can work with you, provided your values are aligned.

Understand the Motivations

The motivations behind the values of agile methods are important. Read Kent Beck’s Extreme Programming Explained for more on values. Read Ken Schwaber and Mike Beedle’s Agile Software Development with Scrum to get insight into how an agile methodology came about. The first couple of chapters of the Scrum book paint a clear picture of why agile methods can work.

My favourite line in the Schwaber/Beedle Scrum book is:

They inspected the systems development processes that I brought them. I have rarely provided a group with so much laughter. They were amazed and appalled that my industry, systems development, was trying to do its work using a completely inappropriate process control model.

p. 24, Agile Software Development with Scrum, 2002, Prentice Hall.

This book provides a lot of insight into what motivated people to try something new in software development, and the rationale behind an agile methodology.

The values behind agile methods really flow from these early motivations and discoveries of pragmatic practitioners, and the books are well worth reading. When you understand where the knowledge of delivering working systems was drawn from, the values and activities really start to make sense.

Continue reading the series >>

Conventional Testers on Agile Projects – Agile Methods

With the rise in popularity of agile development, professionals from the Quality Assurance or software testing world are finding themselves on agile teams. Customers are becoming more conscious of how they spend money on the software they rely on, and as such are starting to demand that dedicated software testers or Quality Assurance teams be involved with development.

Professionals who work in software testing bring a special set of skills to a project. These are people who are thinking about testing all the time, and they work on projects on behalf of the development team as well as on behalf of the customer. They can help the team have more confidence in the product they have developed, and help the customer have more confidence in the product that has been delivered. Some customers realize the importance of testing, and often demand that testing professionals be on agile projects. Some developers also value these skills, and want to work with people to learn more about testing. How does a conventional tester use these skills to add value in a new and unique project environment?

What is Agile anyway?

I hear this quite often: “Our development shop is pretty chaotic, and doesn’t do much documentation. I guess we’re agile, right?” Wrong. “Agile” does not mean “undisciplined” or “no documentation”. Agile development refers to a group of very disciplined methodologies that share similar characteristics. Agile processes tend to follow an iterative lifecycle, rely on rapid feedback, and are built on the belief that software development cannot be predictive, but must be adaptive. Many practitioners and methodology founders came out of chaotic or waterfall methodologies. They figured out what had worked for them and others, and published their findings.

Agilists tend to believe that it’s pretty much impossible to have all the information up front, so they have developed systems that cope with uncertainty and rely on evolutionary designs. The Agile Manifesto has some good descriptions of overall agile values, and the Agile Alliance is a great place to find out more.

But if the developers are testing now, won’t I be out of a job?

Nope. Not necessarily. In my experience, I’ve yet to see a project where I and other conventional testers didn’t find important bugs. This includes agile projects. The difference is that on an agile project, we find the important bugs faster. We are more involved with testing throughout development. Now that the developers are doing rigorous work themselves with solid automated unit tests, the products I test are much more robust. This gives me more time to focus on important testing tasks instead of dealing with lots of broken builds and unreliable software at the beginning of a testing phase.

It’s also important to realize that developer testing such as Test-Driven Development can be very different from the kind of testing that conventional testers are used to doing. Testers don’t tend to unit test the production code. Some testing experts (notably Cem Kaner and Brian Marick) describe TDD as “example-driven development”. It can be viewed as design work, with examples written to evolve a program until it meets requirements. The automated unit tests then provide a safety net for refactoring. If the tests fail when new code is integrated, the problem is fixed immediately. Chronic broken builds, hours of debugging and a lot of time-consuming troubleshooting are eliminated or greatly reduced this way.
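As a rough illustration of that style, here is a minimal sketch in Ruby using the Test::Unit library. The Invoice class and its discount rule are hypothetical, invented only for the example; the point is that the tests are written first as examples of the behaviour we want, and afterwards run with every build as change detectors.

require 'test/unit'

# Written first, these examples describe the behaviour we want to build.
class InvoiceTest < Test::Unit::TestCase
  def test_orders_over_100_get_a_ten_percent_discount
    assert_equal(108, Invoice.new(120).total)
  end

  def test_small_orders_get_no_discount
    assert_equal(50, Invoice.new(50).total)
  end
end

# Just enough production code to make the examples pass. Once the bar is
# green, the tests stay in the build as a safety net for refactoring.
class Invoice
  def initialize(amount)
    @amount = amount
  end

  def total
    @amount > 100 ? @amount - (@amount / 10) : @amount
  end
end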

Conventional testing and agile testing are complementary tasks. One does not necessarily replace the other. I was recently the testing lead on a project that was using Scrum with some elements of XP. I had a couple of conventional testers working with me, doing manual testing (especially exploratory testing), automating tests at several layers in the application with Open Source testing tools, and working with the customer. The customer was doing acceptance tests, the developers were doing unit tests, and the developers and testers were working together on load testing. One day, the Project Manager asked me if he could get involved with testing during some of his spare cycles. A few days later, the Business Analyst asked the same thing. For several weeks, at any given time 95% of the project team were testing. It didn’t put anyone out of a job – each person had something unique to add. In fact, the conventional testers were able to do much more testing in this environment than in others I have seen.

“OK” says the conventional tester, “I’m convinced. Where do I start?”

As a software tester, it is important to not get too caught up in “the right development methodology”. After all, it isn’t the process that is important to the end user, it’s the product. We need to be open-minded to different methods of delivering the right product on time with a reasonable level of quality.

In agile methods, the values behind the development methodology are important to understand. We’ll look at values in the next post.

Continue reading the series >>

Conventional Testers on Agile Projects – Intro

For the past year or so, I’ve been posting questions on my blog and to the development community about the role of testers on agile projects. This summer, I decided that the ship has sailed on the question “Should there be testers on agile projects?” No matter how much pundits on either side of the “no tester role” vs. “tester role” debate argue it, the market will ultimately decide. What is more interesting to me is the fact that there are conventional testers on agile projects, and they face unique challenges.

How do Conventional testers end up on Agile projects?

I have experienced situations like these myself, or been approached by people seeking help with them:

  • the customer tells an agile team they have to have “QA” people on their project
  • an agile pilot project has such good results that a development lead from that team is put in charge of a QA group
  • a software company with an existing QA department tries some agile methods
  • developers who want to learn more about testing request conventional testers to be on an agile project

Moving Forward

I will put a stake in the ground and answer some of the questions I’ve raised, based on what I have learned so far. This is the first in a blog series on “What are conventional testers doing on agile projects?” Now that conventional testers are here, what do we do?

Over the past five years, I’ve been involved as a conventional tester in agile projects, or in projects trying out some agile methods. I feel woefully inadequate to really answer these questions, but demand is growing from people who read my ramblings. They are asking me to provide my own thoughts, so here we go. I’ll continue to post my thoughts in this series, which are very much a work-in-progress.

What I seek the most is feedback from you, the reader. Are you a conventional tester on an agile project? If so, what did you do? What worked, and what didn’t? Are you a developer doing agile testing who has worked with conventional testers? What worked well? What didn’t work so well?

I challenge each of us in the development community to help each other and work together by sharing knowledge and experience. We can leave the debating to the pundits, or for our own pursuit of knowledge. I hope we can get the ball rolling and move towards building better software products, gaining knowledge and skills and serving the customer. After all, the process isn’t what counts to the consumer; it’s the product.

Who Should Read This Blog Series

If you work in Quality Assurance, or are a professional Software Tester, I consider you to be (like me) a “conventional tester”. You tend to do more testing than contributing production code. If you are a conventional tester or a Business Analyst who is joining an agile project as a tester, this blog series is for you. If you are a Testing Manager, or Project Manager on an agile team, you too may find this series helpful. If you are a developer who isn’t sure what to do when a customer asks you to work with people who are full-time testers or Quality Assurance folks, you may also find this series useful.

Continue reading the series >>

Ted Talks About the Customer

Ted O’Grady has an interesting post on the customer and their role on XP teams. I agree with what he has said.

It’s also important to think of the context that the software is being used in. I learned a lesson about getting users to test our software in our office vs. getting them to use the software in their own office. The Hawthorne Effect seemed to really kick in when they were on-site with us. When we observed them using the software in their own business context to solve real-world problems, they used it differently.

If the customer is out of their regular context, that may have an effect on their performance on an agile team, especially if they feel intimidated by a team of techies who outnumber them. When they are in their own office, and their team outnumbers the techies, they might be much more candid with constructive criticism. Just having a customer on-site doing the work with the team doesn’t guarantee they will approve of the final product. Often I think there is a danger they will approve of what the team thinks the final product should be. When we had rotating customer representatives on a team, groupthink was greatly reduced.

Testers Provide Feedback

A question that comes up very often is what activities can conventional testers engage in on agile projects? To me, the key to testing on any project is to provide feedback. On agile projects, the code should always be available to test, and iterations are often quite short. Therefore, a tester on an agile project should perform activities that provide rapid, relevant feedback to the developers and the business stakeholders.

A tester needs to provide feedback to developers to help them gain more confidence in their code, and (as James Bach has pointed out to me), feedback to the customer to help them gain more confidence in the product that is being delivered. So an agile tester should identify areas to enhance and complement the testing that is already being done on the project to that end.

There are potentially some activities that are more helpful than others, but providing feedback is central to what testers do. Some feedback may be in the form of bug reports, or it may be a confirmation that the code satisfies a story, or the application works in a business context.

“Test” and “generalist” are vague words

Brian Marick has posted his Methodology Work Is Ontology Work paper. As I was reading it, I came across this part on page 3 which puts into words what I have been thinking about for a while:

… there are two factions. One is the “conventional” testers, to whom testing is essentially about finding bugs. The other is the Agile programmers, to whom testing is essentially not about bugs. Rather, it’s about providing examples that guide the programming process.

Brian uses this example as an ontology conflict, but it has provided me with a springboard to talk about vague words. Not only conflicts but also vague terms can cause breakdowns in communication, which can be frustrating to the parties involved.

Tests

When I was studying philosophy at university, we talked a lot about vague words, which are words that can have more than one meaning. We were taught to state assumptions before crafting an argument to drive out any vague interpretations. Some work has been done in this area with the word “testing” on agile projects. Brian Marick has talked about using the term “checked examples”, and others have grappled with this as well.

“Test” is a term that seems to work in the agile development context, so it has stuck. Attempts at changing the terminology haven’t worked. Those of us who have testing backgrounds and experience on agile teams automatically determine the meaning of the term from the context. If I’m talking to developers on an agile team and they use the word “test”, it often means the example they wrote during TDD to develop software. These and other tests are automated and run constantly as a safety net, or “change detectors”, while developing software. If I’m talking to testers, a test is something that is developed to find defects in the software, or, in the case of some regression tests, to show that defects aren’t there. There are many techniques and many different kinds of tests in this context.
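A small, hypothetical Ruby sketch may make the distinction clearer. The parse_quantity method below is invented for the example; what differs between the two tests is their intent, not the framework they run in.

require 'test/unit'

# A hypothetical scrap of production code.
def parse_quantity(text)
  raise ArgumentError, 'quantity required' if text.nil? || text.strip.empty?
  Integer(text)
end

class QuantityTest < Test::Unit::TestCase
  # A developer's "test": an example written during TDD to drive out the
  # design, kept afterwards as a change detector.
  def test_parses_a_simple_quantity
    assert_equal(3, parse_quantity('3'))
  end

  # A tester's "test": a deliberate probe at boundaries and bad input,
  # written to find defects rather than to guide a design.
  def test_rejects_blank_and_non_numeric_input
    assert_raise(ArgumentError) { parse_quantity(nil) }
    assert_raise(ArgumentError) { parse_quantity('   ') }
    assert_raise(ArgumentError) { parse_quantity('three') }
  end
end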

Generalists

Another vague term that can cause communication breakdowns is “generalist”. An expression that comes up a lot in the agile community is that agile projects prefer generalists to specialists. What does that mean in that context? Often, when I talk with developers on agile teams, I get the impression that they would prefer working with a tester who is writing automated tests and possibly even contributing production code. As James Bach has pointed out to me, what this describes to a tester is “…an automated testing specialist, not a project generalist.” Sometimes independent testers express confusion to me when they get the impression that on some agile teams testing specialists need not apply; the team needs to be made up of generalists. A tester may look at the term differently than a developer. To a tester, a generalist may do some programming, testing, documentation, and work with the customer – a little bit of everything. A tester may feel that, by the very dilettante nature of their work on projects, their role is by definition already a generalist one. Again, we’re using the same word, but it can mean different things to different people.

For those who have software testing backgrounds, working at the intersection of “conventional” testing and agile development is challenging. This is a challenge I enjoy, but I find that sometimes testers and developers are using the same words and talking about two different things. Testers may be initially drawn to the language of agile developers, only to be confused when they feel the developers are expecting them to provide a different service than they are used to. Agile developers may initially welcome testers because they value the expertise they hope to gain by collaborating, but may find that the independent tester knows little about xUnit, green bars and FIT. They may be saying the same words, but talking about completely different things. It can be especially frustrating on a project when people don’t notice this is happening.

Other vague words?

I’m trying to make sense of this intersection and share ideas. If you have other vague words you’ve come across in this intersection of conventional testers and agile developers, please drop me a line.

But it’s not in the story…

Sometimes when I’m working as a tester on an agile project that is using story cards and I come across a bug, a response from the developers is something like: “That’s not in the story”, or “That case wasn’t stated in the story, so we didn’t write code to cover that.”

I’ve heard this line of reasoning before on non-agile projects, and I used to think it sounded like an excuse. The response then to a bug report was: “That wasn’t in the specification.” As a tester, my thought is that the specification might be wrong. Maybe we missed a specification. Maybe it’s a fault of omission.

However, when I talk to developers, or wear the developer hat myself, I can see their point. It’s frustrating to put a lot of effort into developing something and have the specification change, particularly when you feel like you’re almost finished. Maybe to the developer, as a tester I’m coming across as the annoying manager type who constantly comes up with half-baked ideas that fuel scope creep.

But as a software tester, I get nervous when people point to documents as the final say. As Brian Marick points out, expressing documentation in a written form is often a poor representation of the tacit knowledge of an expert. Part of what I do as a tester is to look for things that might be missing. Automated tests cannot catch something that is missing in the first place.

I found a newsletter entry talking about “implied specifications” on Michael Bolton’s site that underscores this. On an agile project, replace “specification” with “story” and this point is very appropriate:

…So when I’m testing, even if I have a written specification, I’m also dealing with what James Bach has called “implied specifications” and what other people sometimes call “reasonable expectations”. Those expectations inform the work of any tester. As a real tester in the real world, sometimes the things I know and the program are all I have to work with.

(This was excerpted from the “How to Break Software” book review section of the January 2004 DevelopSense Newsletter, Volume 1, Number 1)

So where do we draw the line? When are testers causing “scope creep”, and when are they providing valuable feedback to the developers? One way to deal with the issue is to realize that feedback for developers is effective when given quickly. If feedback can be given earlier on, the testers and developers can collaborate to deal with these issues. The later the feedback occurs, the more potential there is for the developer to feel that someone is springing the dreaded scope creep on them. But if testers bring these things up, and sometimes it’s annoying, why work with them at all?

Testers often work in areas of uncertainty. In many cases that’s where the showstopper bugs live. Testers think about testing on projects all the time, and have a unique perspective, a toolkit of testing techniques, and a catalog of “reasonable expectations”. Good testers have developed, to a high level, the skill of recognizing what the implied specs or reasonable expectations are. They can often help articulate something that is missing on a project that developers or customers may not be able to commit to paper.

When testers find bugs that may not seem important to developers (and sometimes to the customer), we need to be careful not to dismiss them as not worth fixing. As a developer, make sure your testers have demonstrated, through serious inquiry into the application, that a bug isn’t worth fixing. It might simply be a symptom of a larger issue in the project. The bug on its own might seem trivial to the customer, and the developer may not feel it’s worth spending the time fixing. Maybe they are right, but a tester, knowing that bugs tend to cluster, can investigate whether this bug is a corner case or a symptom of a larger issue that needs to be discovered and addressed. Faults of omission at some level are often to blame for particularly nasty or “unrepeatable” bugs. If it does turn out to be a corner case that isn’t worth fixing, the work of the tester can help build confidence in the developer’s and customer’s decision not to fix it.

*With thanks to John Kordyback for his review and comments.

Automating Tests at the Browser Layer

With the rise in popularity of agile development, there is much work being done with various kinds of testing in the software development process. Developers as well as testers are looking at creative solutions for test automation. With the popularity of Open Source xUnit test framework tools such as JUnit, NUnit, HTTPUnit, JWebUnit and others, testing when developing software can be fun. Getting a “green bar” (all automated unit tests in the harness passed) has become a game, and folks are getting more creative about bringing these test tools to new areas of applications.

One area of web applications that is difficult to test is the browser layer. We can easily use JUnit or NUnit at the code level, we can create a testable interface and some fixtures for FIT or FITnesse to drive the code with tabular data, and we can run tests at the HTTP layer using HTTPUnit or JWebUnit. Where do we go from there, particularly in a web application that relies on JavaScript and CSS?
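For contrast, here is a minimal sketch of an HTTP-layer check in Ruby, using the standard Net::HTTP library rather than the Java tools named above; the localhost URL and the form assertion are invented for the example. A test like this exercises the application below the browser, which is exactly why it can’t tell us much about JavaScript or CSS behaviour.

require 'test/unit'
require 'net/http'
require 'uri'

class HttpLayerTest < Test::Unit::TestCase
  # Checks the raw HTTP response only; no JavaScript runs and no CSS is applied.
  def test_home_page_responds_with_a_form
    response = Net::HTTP.get_response(URI.parse('http://localhost:8080/'))
    assert_equal('200', response.code)
    assert_match(/<form/i, response.body)
  end
end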

Historically, the well-known “capture/replay” testing tools have owned this market. These are an option, but they do have some drawbacks. Following the home-brew test automation vein that many agile development projects use, there is another option: a scripting language and Internet Explorer.

IE can be controlled using its COM interface (also referred to as OLE or ActiveX), which allows a user to access the IE DOM. This means that all users of IE have an API that is tested, published, and quite stable. In short, the vendor supplies us with a testable interface that ships with the product and is maintained by the vendor. This is more stable than building an interface at run-time against the objects in the GUI, and we can use any language we want to drive the API. I’m part of a group that prefers the scripting language Ruby.

A Simple Example

How does it work? We can find the methods for the IE COM object on the MSDN web site, and use these to create tests. I’ll provide a simple example using Ruby. If you have Ruby installed on your machine, open up its command interpreter, the Interactive Ruby Shell (irb). At the prompt, enter the following (after Brian Marick’s example in Bypassing the GUI): lines beginning with the irb> prompt are what we type, while lines beginning with => are responses from the interpreter.

irb> require 'win32ole'
=>true
irb> ie = WIN32OLE.new('InternetExplorer.Application')
=>#<WIN32OLE:0x2b9caa8>
irb> ie.visible = true
=>true
#You should see a new Internet Explorer application appear. Now let's direct our browser to Google:
irb> ie.navigate("http://www.google.com")
=>nil
#now that we are on the Google main page, let's try a search:
irb> ie.document.all["q"].value = "pickaxe"
=>"pickaxe"
irb> ie.document.all["btnG"].click
=>nil

#You should now see a search returned, with “Programming Ruby” high up on the results page. If you click that link, you will be taken to the site with the excellent “Programming Ruby” book known as the “pickaxe” book.

Where do we go from here?

Driving tests this way through the Interactive Ruby Shell may look a little cryptic, and the tests aren’t in a test framework. However, it shows that we can develop tests using these methods, and irb is a useful tool for computer-assisted exploratory testing or for trying out new script ideas.
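As a next step, the same session can be wrapped in a Test::Unit test case. The sketch below is only one way to do it, and it makes a few assumptions: it reuses the IE COM methods shown above, waits for pages naively by polling readyState, and asserts against Google’s field names and live search results as they were at the time of writing.

require 'test/unit'
require 'win32ole'

class GoogleSearchTest < Test::Unit::TestCase
  def setup
    @ie = WIN32OLE.new('InternetExplorer.Application')
    @ie.visible = true
  end

  def test_searching_for_pickaxe_finds_programming_ruby
    @ie.navigate('http://www.google.com')
    wait_for_ie
    @ie.document.all['q'].value = 'pickaxe'
    @ie.document.all['btnG'].click
    wait_for_ie
    # Depends on live search results, as in the manual check above.
    assert_match(/Programming Ruby/, @ie.document.body.innerText)
  end

  def teardown
    @ie.quit
  end

  private

  # A naive wait: poll until the browser reports the page is complete.
  def wait_for_ie
    sleep 0.5 until @ie.readyState == 4
  end
end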

This approach for testing web applications was pioneered by Chris Morris, and taught by Bret Pettichord. Building from both of those sources, Paul Rogers has developed a sophisticated library of Ruby methods for web application testing. An Open Source development group has grown up around this method of testing, first known as “WTR” (Web Testing with Ruby). Bret Pettichord and Paul Rogers are spearheading the latest effort, known as WATIR. Check out the WATIR details here, and the RubyForge project here. If this project interests you, join the mailing list and ask about contributing.

*This post was made with thanks to Paul Rogers for his review and corrections.

The Role of a Tester on Agile Projects

I have been dealing with this question for some time now: “What is the role of a tester on Agile projects?” and I’m beginning to wonder if I’m thinking about this in the right way. I’ve been exploring by doing, by thinking, and by talking to practitioners about whether dedicated testers have a place on Agile teams. However, most of the questions I get asked by practitioners and developers on Agile teams are about dealing with testing on an Agile project right now, not about whether a tester should be put on the team. The testers are here, or the need for testers on the team has been prescribed, so they are looking for answers on how to deal with the issues they are facing right now.

Has the ship already sailed on the question “Is there room for dedicated testers on Agile projects?” Is it time to rephrase the question as “In what roles have dedicated testers added value on Agile teams?”, followed by “What are some good techniques to deal with the unique team conditions on Agile projects?”

I’m willing to accept that some methodologies may not be compatible with this notion. The question remains, what are testers doing on real-world Agile projects, and what methodologies don’t seem to be amenable to dedicated testers? Of those, are dedicated testers pressured out due to team development philosophy, or are dedicated testers simply not needed? Real-world experience is what we as a community need to keep sharing.

Have the “specialized testers” arrived already, and has the question of whether they should be brought on teams become academic, or are we answering the question by doing? The question will answer itself anyway as time goes on, and experience tends to trump theory alone.

I have been a bit reluctant to put a stake in the ground about the role of testers on Agile projects without more experience myself, but judging by the questions I am getting and the constructive criticism I have received, I should probably share more of my own experiences. I think it’s time for testers on Agile projects to start talking about the techniques they have used and the roles they have filled on Agile teams. From that we can gather a set of values that describe the roles, techniques and mindsets of those who test on Agile projects. Answering the question by exploration and doing is much more exciting to me than an academic debate.