Category Archives: agile

Dehumanizing the User

As Mike Cohn points out in Advantages of User Stories for Requirements, writing requirements documents full of the phrase “the system shall” causes us to lose focus on the user. We aren’t writing software to satisfy the system; we are writing it to satisfy the needs of the user, the customer.

Maybe this is another encapsulation endeavor. Is using this kind of language an implicit (or unconscious) attempt to hide the complexity of working with people behind a simplified interface called “the system”? As we encapsulate the needs and wants of the user within this black box, is the system an accurate representation of the customer’s original knowledge? Does it meet their needs? How can we tell?

What do we lose when we shift focus from being customer-obsessed to talking more in terms of the system? It’s one thing to satisfy a system, and quite another to satisfy the customer. Language is very important, and if the “user” or “customer” is talked about less on a project, there is a danger that we will think about the customer and their needs less. Furthermore, if we refer less to the user and more to the system we are developing when looking at requirements and needs, we dehumanize the user. Satisfying a requirement at the system level and satisfying it at the customer level can be two very different things. Customers worry about how usable the software is, sometimes more than whether the requirement is satisfied somewhere in the system.

If we dehumanize the user, why shouldn’t the user dehumanize us? Instead of a development relationship based on collaboration focused on meeting the needs of the customer, we run the risk of becoming commodities to each other. What’s to stop the customer from treating us as a commodity if we treat them as one? If communication breaks down and we dehumanize each other, how can we meet the needs of the customer? How can they be sure their needs are met? Can we trust each other?

Even when we use methodologies that are based on collaborating with the customer, we should be careful not to force them into a position they aren’t comfortable with. A key for overcoming technical bullying is effective communication. We need to treat the customer as a human being who has needs and feelings like we do, and communicate with them. With proper two-way communication, we can make sure they understand what we are doing, and enable them to ensure we are meeting their needs. Words are important, and listening to the customer is even more important. If we really listen, we can find out what the customer needs and develop that. The customer needs to be satisfied, not necessarily the system. Our language should reflect this.

“Test” and “generalist” are vague words

Brian Marick has posted his Methodology Work Is Ontology Work paper. As I was reading it, I came across this part on page 3 which puts into words what I have been thinking about for a while:

… there are two factions. One is the “conventional” testers, to whom testing is essentially about finding bugs. The other is the Agile programmers, to whom testing is essentially not about bugs. Rather, it’s about providing examples that guide the programming process.

Brian uses this example as an ontology conflict, but it has provided me with a springboard to talk about vague words. Not only ontology conflicts but also vague terms can cause communication breakdowns, which can be frustrating to the parties involved.

Tests

When I was studying philosophy at university, we talked a lot about vague words, which are words that can have more than one meaning. We were taught to state assumptions before crafting an argument to drive out any vague interpretations. Some work has been done in this area with the word “testing” on agile projects. Brian Marick has talked about using the term “checked examples”, and others have grappled with this as well.

“Test” is a term that seems to work in the agile development context, so it has stuck. Attempts at changing the terminology haven’t worked. For those of us who have testing backgrounds and experience on agile teams, we automatically determine the meaning of the term based on the context. If I’m talking to developers on an agile team and they use the word “test”, it often means the example they wrote during TDD to develop software. This and other tests are automated and run constantly as a safety net or “change detectors” when developing software. If I’m talking to testers, a test is something that is developed to find defects in the software, or in the case of some regression tests, to show that defects aren’t there. There are many techniques and many different kinds of tests in this context.
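To make the two meanings concrete, here is a minimal sketch in Ruby. The ShoppingCart class is invented purely for illustration; the point is the difference in intent between the developer’s example-style test and the tester’s probing test.

```ruby
# A TDD-style "test" is an example that guided the code into existence.
# ShoppingCart is a hypothetical class, invented for this illustration.
class ShoppingCart
  def initialize
    @items = []
  end

  def add(price)
    @items << price
  end

  def total
    @items.sum
  end
end

# The developer's "test": an example written first, which the code above
# was then written to satisfy. It runs constantly as a change detector.
cart = ShoppingCart.new
cart.add(10)
cart.add(15)
raise "expected 25, got #{cart.total}" unless cart.total == 25

# A tester's "test" asks a different kind of question, probing for
# defects: what happens with an empty cart?
raise 'expected 0 for an empty cart' unless ShoppingCart.new.total == 0
puts 'examples passed'
```

Both checks use the same mechanics, but the first exists to drive the design and detect change, while the second exists to hunt for a defect that nobody specified.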

Generalists

Another vague term that can cause communication breakdowns is “generalist”. An expression that comes up a lot in the agile community is that agile projects prefer generalists to specialists. What does that mean in this context? Often, when I talk with developers on agile teams, I get the impression that they would prefer working with a tester who is writing automated tests and possibly even contributing production code. As James Bach has pointed out to me, this concept of a project generalist would be, to a tester, “…an automated testing specialist, not a project generalist.” Sometimes independent testers express confusion to me when they get the impression that on some agile teams, testing specialists need not apply; the team needs to be made up of generalists. A tester may look at the term differently than a developer. To a tester, a generalist may do some programming, testing, documentation, and work with the customer – a little bit of everything. Testers may feel that, given the wide-ranging, dilettante nature of their work on projects, their role is by definition a generalist one. Again, we’re using the same word, but it can mean different things to different people.

For those who have software testing backgrounds, working at the intersection of “conventional” testing and agile development is challenging. This is a challenge I enjoy, but I find that sometimes testers and developers are using the same words and talking about two different things. Testers may be initially drawn to the language of agile developers, only to be confused when they feel the developers are expecting them to provide a different service than they are used to. Agile developers may initially welcome testers because they value the expertise they hope to gain by collaborating, but may find that the independent tester knows little about xUnit, green bars and FIT. They may be saying the same words, but talking about completely different things. It can be especially frustrating on a project when people don’t notice this is happening.

Other vague words?

I’m trying to make sense of this intersection and share ideas. If you have other vague words you’ve come across in this intersection of conventional testers and agile developers, please drop me a line.

But it’s not in the story…

Sometimes when I’m working as a tester on an agile project that is using story cards and I come across a bug, a response from the developers is something like: “That’s not in the story,” or: “That case wasn’t stated in the story, so we didn’t write code to cover that.”

I’ve heard this line of reasoning before on non-agile projects, and I used to think it sounded like an excuse. The response then to a bug report was: “That wasn’t in the specification.” As a tester, my thought is that the specification might be wrong. Maybe we missed a specification. Maybe it’s a fault of omission.

However, when I talk to developers, or wear the developer hat myself, I can see their point. It’s frustrating to put a lot of effort into developing something and have the specification change, particularly when you feel like you’re almost finished. Maybe to the developer, as a tester I’m coming across as the annoying manager type who constantly comes up with half-baked ideas that fuel scope creep.

But as a software tester, I get nervous when people point to documents as the final say. As Brian Marick points out, written documentation is often a poor representation of an expert’s tacit knowledge. Part of what I do as a tester is to look for things that might be missing. Automated tests cannot catch something that was missing in the first place.

I found a newsletter entry talking about “implied specifications” on Michael Bolton’s site that underscores this. On an agile project, replace “specification” with “story” and this point is very appropriate:

…So when I’m testing, even if I have a written specification, I’m also dealing with what James Bach has called “implied specifications” and what other people sometimes call “reasonable expectations”. Those expectations inform the work of any tester. As a real tester in the real world, sometimes the things I know and the program are all I have to work with.

(This was excerpted from the “How to Break Software” book review section of the January 2004 DevelopSense Newsletter, Volume 1, Number 1)

So where do we draw the line? When are testers causing “scope creep”, and when are they providing valuable feedback to the developers? One way to deal with the issue is to recognize that feedback for developers is most effective when given quickly. If feedback is given early, the testers and developers can collaborate to deal with these issues. The later the feedback occurs, the more potential there is for the developer to feel that someone is springing the dreaded scope creep on them. But if raising these issues is sometimes annoying, why work with testers at all?

Testers often work in areas of uncertainty. In many cases that’s where the showstopper bugs live. Testers think about testing on projects all the time, and have a unique perspective, a toolkit of testing techniques, and a catalog of “reasonable expectations”. Good testers have developed, to a high level, the skill of recognizing implied specs and reasonable expectations. They can often help articulate something missing on a project that developers or customers may not be able to commit to paper.

When testers find bugs that may not seem important to developers (and sometimes to the customer), we need to be careful not to dismiss them out of hand. As a developer, make sure your testers have demonstrated, through serious inquiry into the application, that a bug isn’t worth fixing. What looks trivial on its own might be a symptom of a larger issue in the project. Maybe the customer and developer are right that it isn’t worth the time, but a tester knows that bugs tend to cluster, and can investigate whether this bug is a corner case or a symptom of something larger that needs to be discovered and addressed. Faults of omission at some level are often to blame for particularly nasty or “unrepeatable” bugs. If it does turn out to be a corner case that isn’t worth fixing, the tester’s work helps build confidence in the developer’s and customer’s decision not to fix it.

*With thanks to John Kordyback for his review and comments.

Automating Tests at the Browser Layer

With the rise in popularity of agile development, there is much work being done with various kinds of testing in the software development process. Developers as well as testers are looking at creative solutions for test automation. With the popularity of Open Source xUnit test framework tools such as JUnit, NUnit, HTTPUnit, JWebUnit and others, testing when developing software can be fun. Getting a “green bar” (all automated unit tests in the harness passed) has become a game, and folks are getting more creative about bringing these test tools to new areas of applications.

One area of web applications that is difficult to test is the browser layer. We can easily use JUnit or NUnit at the code level, we can create a testable interface and some fixtures for FIT or FITnesse to drive the code with tabular data, and we can run tests at the HTTP layer using HTTPUnit or JWebUnit. Where do we go from there, particularly in a web application that relies on JavaScript and CSS?
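As a rough illustration of what a test at the HTTP layer does, here is a sketch in Ruby rather than Java. The tiny single-response server is a throwaway stand-in for the real web application, invented so the example is self-contained; the interesting part is the assertions on the raw HTTP response, which is the kind of check HTTPUnit or JWebUnit performs.

```ruby
require 'socket'
require 'net/http'
require 'uri'

# Serve one canned HTTP response on a local port. This stands in for the
# real web application so the sketch can run anywhere.
def serve_once(body)
  server = TCPServer.new('127.0.0.1', 0)
  port = server.addr[1]
  thread = Thread.new do
    client = server.accept
    # consume the request headers up to the blank line
    while (line = client.gets) && line != "\r\n"; end
    client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n#{body}")
    client.close
    server.close
  end
  [port, thread]
end

# The HTTP-layer "test": request a page and make assertions on the
# response, without any browser or GUI in the loop.
port, thread = serve_once('<html><title>Login</title></html>')
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/"))
thread.join
raise "expected 200, got #{response.code}" unless response.code == '200'
raise 'expected the Login page' unless response.body.include?('Login')
puts 'HTTP-layer checks passed'
```

Tests like this exercise the application below the browser, which is exactly why they cannot tell us anything about JavaScript or CSS behavior – the motivation for driving the browser itself, below.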

Historically, the well-known “capture/replay” testing tools have owned this market. They are an option, but they have drawbacks. In the home-brew test automation vein that many agile projects follow, there is another option: a scripting language driving Internet Explorer.

IE can be controlled through its COM interface (also referred to as OLE or ActiveX), which gives access to the IE DOM. This means that every user of IE has an API that is tested, published, and quite stable. In short, the vendor supplies a testable interface, published with the product and maintained by the vendor. This is more stable than building an interface at run-time against the objects in the GUI, and we can use any language we like to drive the API. I’m part of a group that prefers the scripting language Ruby.

A Simple Example

How does it work? The methods of the IE COM interface are documented on the MSDN web site, and we can use them to create tests. I’ll provide a simple example using Ruby. If you have Ruby installed on your machine, open its command interpreter, the Interactive Ruby Shell. At the prompt, enter the following (after Brian Marick’s example in Bypassing the GUI, what we type is in bold, while the response from the interpreter is in regular font).

irb> require 'win32ole'
=> true
irb> ie = WIN32OLE.new('InternetExplorer.Application')
=> #<WIN32OLE:0x2b9caa8>
irb> ie.visible = true
# You should see a new Internet Explorer window appear. Now let's direct the browser to Google:
irb> ie.navigate("http://www.google.com")
=> nil
# Now that we are on the Google main page, let's try a search:
irb> ie.document.all["q"].value = "pickaxe"
=> "pickaxe"
irb> ie.document.all["btnG"].click
=> nil

You should now see search results, with “Programming Ruby” high up on the results page. If you click that link, you will be taken to the site for the excellent “Programming Ruby” book, known as the “pickaxe” book.

Where do we go from here?

Driving tests this way through the Interactive Ruby Shell may look a little cryptic, and the tests aren’t in a test framework. However, it shows that we can develop tests using these methods, and irb is a useful tool for computer-assisted exploratory testing, or for trying out new script ideas.

This approach for testing web applications was pioneered by Chris Morris, and taught by Bret Pettichord. Building from both of those sources, Paul Rogers has developed a sophisticated library of Ruby methods for web application testing. An Open Source development group has grown up around this method of testing first known as “WTR” (Web Testing with Ruby). Bret Pettichord and Paul Rogers are spearheading the latest effort known as WATIR. Check out the WATIR details here, and the RubyForge project here. If this project interests you, join the mailing list and ask about contributing.

*This post was made with thanks to Paul Rogers for his review and corrections.

The Role of a Tester on Agile Projects

I have been dealing with this question for some time now: “What is the role of a tester on Agile projects?” and I’m beginning to wonder if I’m thinking about it in the right way. I’ve been exploring by doing, and by thinking and talking with practitioners about whether dedicated testers have a place on Agile teams. However, most of the questions practitioners and developers on Agile teams ask me are about dealing with testing on an Agile project right now, not about whether a tester should be put on the team. The testers are already here, or the need for testers on the team has been prescribed, so they are looking for answers to the issues they are facing right now.

Has the ship already sailed on the question “Is there room for dedicated testers on Agile projects?” Is it time to rephrase it as: “In what roles have dedicated testers added value on Agile teams?” followed by “What are some good techniques for dealing with the unique team conditions on Agile projects?”

I’m willing to accept that some methodologies may not be compatible with this notion. The question remains, what are testers doing on real-world Agile projects, and what methodologies don’t seem to be amenable to dedicated testers? Of those, are dedicated testers pressured out due to team development philosophy, or are dedicated testers simply not needed? Real-world experience is what we as a community need to keep sharing.

Have the “specialized testers” arrived already, and has the question of whether they should be brought on teams become academic, or are we answering the question by doing? The question will answer itself anyway as time goes on, and experience tends to trump theory alone.

I have been a bit reluctant to put a stake in the ground about the role of testers on Agile projects without more experience myself, but judging by the questions I am getting and the constructive criticism I have received, I should probably share more of my own experiences. I think it’s time for testers on Agile projects to start talking about techniques and the roles they have filled on Agile teams. From that we can gather a set of values that describe the roles, techniques and mindsets of those who test on Agile projects. Answering the question by exploration and doing is much more exciting to me than an academic debate.

Presenting Testing Activities to Business Stakeholders

Brian Marick’s series on Agile Testing Directions begins with a test matrix that describes testing activities as “Business Facing”, “Technology Facing”, “Support Programming” and “Critique Product”. This resonated with me, but it wasn’t until he pointed out that in my pair work with developers I did both Business Facing and Technology Facing activities that this seemed to click. I think this matrix he has developed provides testers with a common language to identify and communicate these activities.

I recently did presentations to business stakeholders on testing activities in Agile projects. I’ve generally found it difficult to explain the testing activities I engage in to fellow testers, let alone business stakeholders. In one meeting, I thought of the test matrix and brought up the “Business Facing” testing and “Technology Facing” areas of testing while I was explaining how I test on Agile projects. People seemed to understand this, so I started working on it more.

I started thinking of the matrix rotated on its side with Technology Facing on the left and Business Facing on the right. Instead of “support programming”, I went with “support” to capture both areas. The “business support” would involve activities like setting up meetings with developers and business stakeholders after each development iteration to ensure that the working code is what the business expects, and to get people communicating. I also thought that business support would involve helping the business people with acceptance tests and things like that.

I initially thought of naming each quadrant of the matrix, but when explaining it to my wife Elizabeth, she said: “Why don’t you just put that in a tree diagram?” I did just that, and presented Agile Testing activities like this:

I felt that “technical testing” was a simple way to describe “technology-facing product critiques”, and “business testing” would describe “business-facing product critiques”. Keeping it simple seems to work well when communicating testing concepts to non-technical people.

I described some of the testing techniques in each area. For example, a technical testing activity I use involves collaboration with the developers to write tests that can be run in the absence of a user interface. This can involve adding tests to drive a layer of the application at the controller level. Once the developers make this area testable, we co-design and develop a test case. I can then own the test case and run it with different kinds of test data.
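A sketch of what such a co-designed, tester-owned test case might look like in Ruby, assuming the developers have exposed a testable controller-level interface. The TransferController class and its rules are invented for illustration; the pattern to notice is that the tester owns the table of test data and can extend it without touching the test logic.

```ruby
# Hypothetical controller-level interface, invented for this sketch.
# In practice the developers would make a real layer of the application
# testable like this, below the user interface.
class TransferController
  def transfer(balance, amount)
    raise ArgumentError, 'amount must be positive' unless amount > 0
    raise ArgumentError, 'insufficient funds' if amount > balance
    balance - amount
  end
end

# The tester owns this table and can grow it with new kinds of data.
CASES = [
  { balance: 100, amount: 25,  expect: 75 },
  { balance: 100, amount: 100, expect: 0 },
]

controller = TransferController.new
CASES.each do |c|
  result = controller.transfer(c[:balance], c[:amount])
  raise "expected #{c[:expect]}, got #{result}" unless result == c[:expect]
end

# An error case the tester might add during exploration.
begin
  controller.transfer(10, 50)
  raise 'expected an ArgumentError for insufficient funds'
rescue ArgumentError
end
puts 'controller-level checks passed'
```

Because the check is driven by data rather than by the GUI, running it with new test data is cheap, which is what makes owning the test case practical for the tester.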

Under the programmer support activity, we can pair together to generate testing ideas. In a test-driven development environment, we can pair program to come up with tests that drive the code, or the tester can use a scripting language to write the tests for the developers.

Business people and technical people seemed to understand this tree diagram and the explanations I gave. I heard later that business stakeholders were starting to use this language in other contexts when they were talking about testing.

Testers on Agile Projects

When I began reading articles and books on Agile Development and attended lectures by well-known experts in the subject, I was impressed. This style of development resonated with me, combining what I had learned from the Open Source world with aspects of successful projects I had been on. As a believer in W. Edwards Deming’s 14 Points, I felt that Agile Methods seemed to be addressing many of the same issues.

I welcomed Agile Development, and have championed it now that I’ve experienced it. I didn’t feel a threat to my job as a tester, but I knew things were going to change. The only thing that bothered me was a new attitude towards testers that seemed to be emerging. It sounded like: “We’re doing testing now, thank you very much, so we don’t know where you will fit on Agile projects.” This attitude is changing, but the role of testers on Agile teams is still emerging.

Since I am a tester working on Agile projects, I want to share my experiences. When I first thought about Agile Development not needing dedicated testers, my initial reaction was to think of a writer/editor analogy. As my experience with Agile projects grows, I am less confident of the need for a dedicated tester. I still think there is a need, but I have to be willing to admit that there may be no role for dedicated testers on Agile projects, even if I want there to be. The role may be a diminished one, or, as Brian Marick points out in his blog series on Agile Testing Directions, testers may be specialists called in for certain tasks like security or performance testing. However, the agile testing role might evolve into something completely different from what we know of as testing today. My goal is to see if and where I fit in on Agile projects. I’m relying on my development colleagues to provide me with honest feedback, which I will try to share here.

One aspect of the tester role I’m exploring is pair testing with developers. I can support the programmers and help them generate test ideas, especially if they are using test-driven development. We can also pair at one machine to test the product, generating testing ideas as we go. A senior developer noted that pair programming with a tester gives a developer someone to help them generate tests. After all, testers see the world in terms of tests, while developers see the world in terms of patterns or objects or algorithms. One creates code while the other creates testing ideas. It sounds like a good match. It will be interesting to see how this type of testing pans out.