Category Archives: process

Who Moved My Cheese Sandwich?

My Dad was a school teacher for 25 years. For several of those years, he had the same thing for lunch: a plain cheese sandwich. My Mom enjoyed getting all of us ready for school in the morning and would make us our lunches, put them in brown paper bags, and label each of them. My sister and I were finicky eaters at best, constantly changing our minds, or feeling jealous of classmates who had “cooler” lunches (usually junk food.) My Dad stuck to what he wanted, and it made my Mom’s job easy. She’d make him a plain cheese sandwich, and he happily took it to work.

Then one fateful day he snapped.

He told my Mom he never wanted to see a cheese sandwich ever again because he was sick of them. She was a bit shocked at first, but we all had a big laugh about it, since it had only taken him years of having the same thing for lunch every working day to get tired of it. Most of us have far less patience, are far more discriminating in our tastes, and have a greater need for variety.

I’ve worked for years as a consultant helping software development teams, and I see their versions of the cheese sandwich constantly.

A software development team’s version of the cheese sandwich tends to be process-related. Teams do the same things over and over because that’s what they are used to, because they are dogmatic in their alignment with a process ideal, or because it seems that is what everyone else is doing.

A few years ago, I was helping David Hussman with a book project that eventually became a Pragmatic Press production: Cutting an Agile Groove: The Live Sessions. As I looked over a collection of David’s writings, three key phases of team productivity came to mind:

  1. Getting Started
  2. Getting Productive
  3. Staying Productive

Most teams I have observed are pretty good at getting started and getting productive, but a huge majority have a lot of trouble staying productive. Many teams plateau and even drop off over the life of a project, and especially when they work on projects together year after year. A big part of why they plateau is because they don’t change up their processes and practices, and they get bored.

If the team isn’t willing to change, they won’t. My Mom tried for years to suggest things to mix it up. My Dad stuck with the cheese sandwich. However, once he made up his mind to change, he was ready for it, and he went for it. You can’t force change on people, you can only really support it. It can be frustrating to work with a team that has obvious problems, yet they refuse to look at simple, proven solutions and won’t try anything new.

After slavishly sticking to one thing for so long, Dad got rid of it altogether. Sometimes this is worthwhile, but on software teams this can be overkill. I see this with software development teams all the time – the old way is wrong, throw it all away! This can be unfortunate. There might be good things that are getting thrown aside. What you were doing before wasn’t necessarily all wrong, you are just tired of it and enamoured with something new. Or, it is still a good practice, it just doesn’t work for your team anymore.

Recently, a team I was working with decided to get rid of using a wiki. They had process dysfunction, and saw a fantastic presentation by Ralph Boden, “Changing the Laws of Engineering with GitHub Pull Requests”, that challenged the usefulness of wikis. The team Ralph was working on got rid of wikis, and had some compelling reasons for doing so. Instead of using this presentation as an example of what worked for one team, the takeaway was that wikis were bad and now have no place on a software development team. It turned out that for this team, the wiki was a useful tool and throwing it away altogether was premature. We see this a lot in software development communities, often in the form of “___X___ is dead!” pronouncements. No, X isn’t dead, it just doesn’t work for you anymore, so don’t discourage other teams from doing X. Just because your team doesn’t like cheese sandwiches anymore doesn’t give you the right to disparage cheese sandwiches and anyone who enjoys them.

Software development is complex and difficult. It is a huge challenge to get a team together, get productive, stay productive, and sustain this over several product releases. The lesson in the cheese sandwich is to not stick with one process or practice for too long. Just like with food, software processes require variety, serve different needs for different people, have different requirements and restrictions, and will get stale over time.

See Things Change and So Should Processes for more on this theme.

New Article – Things Change (And So Should Processes)

I wrote an article for Better Software magazine for the July/August 2013 issue about innovation in processes. The PDF of the article is available here: Things Change (And So Should Processes) and you can download the entire magazine here: Better Software July/Aug 2013.

Many teams struggle when a process lets them down because their unique situation and mix of people, technology and target market don’t fit a generic process. It’s also surprising to find out how old many of the popular processes we are trying to follow are. The technology we are creating has changed a great deal since the 1990s, so why are we surprised when processes created in the ’90s let some teams down?

Instead of feeling guilty that they aren’t following the crowd and doing what the experts tell them to do, I encourage teams to take pride in their innovation not just in technology, but in how they create powerful processes for themselves.

Test Automation Games

As I mentioned in a prior post: Software Testing is a Game, two dominant manual testing approaches to the software testing game are scripted and exploratory testing. In the test automation space, we have other approaches. I look at three main contexts for test automation:

  1. Code context – eg. unit testing
  2. System context – eg. protocol or message level testing
  3. Social context – eg. GUI testing

In each context, the automation approach, tools and styles differ. (Note: I first introduced this idea publicly in my keynote “Test Automation: Why Context Matters” at the Alberta Workshop on Software Testing, May 2005)

In the code context, we are dominated now by automated unit tests written in some sort of xUnit framework. This type of test automation is usually carried out by programmers who write tests to check their code as they develop products, and to provide a safety net to detect changes and failures that might get introduced as the code base changes over the course of a release. We’re concerned that our code works sufficiently well in this context. These kinds of tests are less about being rewarded for finding bugs (“Cool! Discovery!”) and more about providing a safety net for coding, which is a different high-value activity that can hold our interest.
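
To make this concrete, here is a minimal sketch of a code-context test using Python’s unittest, one of the xUnit family. The `add` function is a hypothetical stand-in for real production code, not something from the original post:

```python
import unittest

# Hypothetical function under test -- stands in for real production code.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    """xUnit-style checks that act as a safety net while the code base changes."""

    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-2, -3), -5)
```

Run with `python -m unittest`; any assertion that fails turns the bar red, signalling a change or failure introduced into the code.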

In the social context, we are concerned with automating the software from a user’s perspective, which means we are usually creating tests using libraries that drive a User Interface, or GUI. This approach to testing is usually dominated by regression testing. People would rather get the tool to repeat the tests than deal with the repetition inherent in regression testing, so they use tools to try to automate that repetition. In other words, regression testing is often outsourced to a tool. In this context, we are concerned that the software works reasonably well for end users in the places that they use it, which are social situations. The software has emergent properties at this level, combining code, the system, and user expectations and needs. We frequently look to automate away the repetition of manual testing. In video game design terms, we might call repetition that isn’t very engaging “grinding”. (David McFadzean introduced this idea to me during a design session.)

The system context is a bit rarer: here we test machine-to-machine interaction, or simulate various messaging or user traffic by sending messages to machines without using the GUI. There are integration paths and emergent properties that we can catch at this level that we would miss with unit testing, and by stripping the UI away, we can create tests that run faster, and track down intermittent or other bugs that might be masked by the GUI. In video or online games, some people use tools to help enhance their game play at this level, sometimes circumventing rules. In the software testing world, we don’t have explicit rules against testing at this level, but we aren’t often rewarded for it either. People often prefer we look at the GUI, or the code level of automation. However, you can gain a lot of efficiency by testing at this level and cutting out the slow GUI, and we can explore the emergent properties of a system that we don’t see at the unit level.
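
As an illustration only (a made-up example, not from the original post), here is a sketch of a message-level test in Python: it drives a tiny stand-in server over a local socket, with no GUI anywhere in the loop:

```python
import socket
import threading

def run_echo_server(sock):
    """A stand-in backend: accepts one connection and echoes the message back, uppercased."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

def message_level_test():
    """Drive the 'system' directly over a socket -- no GUI in the loop."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=run_echo_server, args=(server,))
    t.start()

    # The "test" is just a message in and an observed message out.
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"ping")
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply
```

Because nothing has to render, tests like this can run far faster than their GUI equivalents, and they expose the machine-to-machine behavior directly.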

We also have other types of automation to consider.

Load and performance testing is a fascinating approach to test automation. As performance thought leaders like Scott Barber will tell you, performance testing is roughly 20% of the automation code development and load generation work, and 80% interpreting results and finding problem areas to address. It’s a fascinating puzzle to solve – we simulate real-world or error conditions, look at the data, find anomalies and investigate the root cause. We combine a quest with discovery and puzzle solving game styles.
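
To sketch that 20/80 split (a hypothetical illustration, not Scott Barber’s tooling): the load-generation side can be a short loop, while the interpretation side starts from summaries of the latency distribution, where the anomalies usually hide:

```python
import statistics
import time

def simulated_request():
    """Hypothetical operation standing in for a real service call."""
    time.sleep(0.001)

def measure_latencies(n=50):
    """The ~20% 'load generation' part: drive the operation n times, recording each latency."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        simulated_request()
        latencies.append(time.perf_counter() - start)
    return latencies

def summarize(latencies):
    """The ~80% 'interpretation' part begins with summaries like these:
    central tendency plus tail behavior."""
    return {
        "median": statistics.median(latencies),
        "p95": sorted(latencies)[int(0.95 * len(latencies)) - 1],
        "max": max(latencies),
    }
```

The interesting work starts after this code runs: a p95 or max far above the median is the anomaly that sends you hunting for a root cause.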

If we look at Test-Driven Development with xUnit tools, we even get an explicit game metaphor: The “red bar/green bar game.” TDD practitioners I have worked with have used this to describe the red bar (test failed), green bar (test passed) and refactor (improve the design of existing code, using the automated tests as a safety net.) I was first introduced to the idea of TDD being a game by John Kordyback. Some people argue that TDD is primarily a design activity, but it also has interesting testing implications, which I wrote about here: Test-Driven Development from a Conventional Software Testing Perspective Part 1, here: Test-Driven Development from a Conventional Software Testing Perspective Part 2, and here: Test-Driven Development from a Conventional Software Testing Perspective Part 3.
(As an aside, the Session Tester tool was inspired by the fun that programmers express while coding in this style.)
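
A minimal sketch of one red/green/refactor cycle in Python (the `slugify` function is a made-up example for illustration):

```python
import re

# Step 1 (red): write the test first. With no implementation yet,
# running this assertion fails -- the "red bar".
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the simplest code that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3 (refactor): improve the design while the test stays green,
# here by also collapsing runs of whitespace.
def slugify(title):
    return re.sub(r"\s+", "-", title.strip()).lower()

test_slugify()  # green bar: no AssertionError raised
```

The test acts as the safety net: the refactoring in step 3 is only trusted because the bar stays green afterward.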

Cem Kaner often talks about high volume test automation, which is another approach to automation. If you automate a particular set of steps, or a path through a system and run it many times, you will discover information you might otherwise miss. In game design, one way to deal with the boredom of grinding is to add in surprises or rewarding behavior when people repeat things. That keeps the repetitiveness from getting boring. In automation terms, high volume test automation is an incredibly powerful tool to help discover important information. I’ve used this particularly in systems that do a lot of transactions. We may run a manual test several dozen times, and maybe an automated test several hundred or a thousand times in a release. With high volume test automation, I will run a test thousands of times a day or overnight. This greatly increases my chance of finding problems that only appear in very rare events, and forces seemingly intermittent problems to show themselves in a pattern. I’ve enhanced this approach to mutate messages in a system using fuzzing tools, which helps me greatly extend my reach as a tester over both manual testing, and conventional user or GUI-based regression automated testing.
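
Here is a sketch of the high volume idea in Python, with a deliberately seeded rare bug standing in for a real intermittent problem (the transaction function is hypothetical, not from Kaner’s materials):

```python
import random

def process_transaction(amount_cents):
    """Hypothetical system under test, with a seeded rare bug: it
    mishandles amounts that are exact multiples of 10,000 cents."""
    if amount_cents > 0 and amount_cents % 10_000 == 0:
        return amount_cents - 1   # off-by-one that a few manual runs would likely miss
    return amount_cents

def high_volume_run(iterations=200_000, seed=42):
    """Hammer one path a huge number of times with varied inputs,
    collecting every failure instead of stopping at the first."""
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        amount = rng.randrange(1, 1_000_000)
        if process_transaction(amount) != amount:   # oracle: output should equal input
            failures.append(amount)
    return failures
```

At a manual pace of a few dozen runs, an input that misbehaves roughly once in ten thousand tries would rarely surface; at 200,000 runs the "intermittent" problem shows itself as a clear pattern, with every collected failure a multiple of 10,000.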

Similarly, creating simulators or emulators to help generate real-world or error conditions that are impossible to create manually is another powerful way to enhance our testing game play. In fact, I have written about how some of these other approaches enhance our manual testing game play. I wrote about “interactive automated testing” in my “Man and Machine” article and in Chapter 19 of the book “Experiences of Test Automation“. This was inspired by looking at alternatives to regression testing that could help testers be more effective in their work.

In many cases, we attempt to automate what the manual testers do, and we fail because the tests are much richer when exercised by humans, because they were written by humans. Instead of getting the computer to do things that we are poor at executing (lots of simple repetition, lots of math, asynchronous actions, etc.), we try to do an approximation of what the humans do. Also, since the humans interact with a constantly changing interface, the dependency of our automation code on a changing product creates a maintenance nightmare. This is all familiar, so I looked at other options. Another inspiration was code one of my friends wrote to help him play an online game more effectively. He created code to help him do better in gaming activities, and then used that algorithm to create an incredibly powerful Business Intelligence engine. Code he wrote to enhance his manual game play was so effective in a gaming context that, when he applied it to a business context, it proved powerful there as well.

Software test automation has a couple of gaming aspects:

  1. To automate parts of the manual software testing game we don’t enjoy
  2. Its own software testing game, based on our perceived benefits and rewards

In number 1 above, it’s interesting to analyze why we are automating something. Is it to help our team and system quality goals, or are we merely trying to outsource something we don’t like to a tool, rather than look at what alternatives fit our problem best? In number 2, there are a lot of fascinating areas to explore if we look at how we reward people who do automation, and map automation styles and approaches to our quality goals or quality criteria – not to mention helping our teams work more efficiently, discover more important information, and make the lives of our testers better.

Content Pointer: New Article – The Software Development Game

Better Software magazine has published a new article that I co-authored with David McFadzean called “The Software Development Game.” A PDF version is available here: SDG Feature in PDF.

David approached me in the summer of 2008 and pitched the idea of co-authoring a piece about using game-like concepts on software development teams. It turned out that David and I had both been influenced by game theory when creating policy and strategy on software development teams, but David had taken it a step further than I had: he had formalized an actual game-like structure on several teams.

It took us a while to write the piece. First of all, we wanted to observe two SDG instances that were newly created, so we took our time. Secondly, we found that the topic is massive; it was very difficult to fit our ideas into a 3,000-word article. Also, we wanted to incorporate more ideas from the gamification-of-work movement into game play. Finally, our ideas had to pass the review of peers that we trusted, and their feedback took time to address and incorporate.

The final result is a very brief introduction to the topic. It is one powerful tool, particularly for self-organizing teams to help determine their own destiny. Instead of being told what to do by a coach, process consultant or manager, a team can use a simple framework to determine their own optimal mix of processes, tools and practices at a particular point in time. The game structure provides visibility on decisions, and can help teams align their technology focus with the visions of leadership.

We hope you find it as interesting as we do, and if you try it out on your own team, let people know how it works for you and your team.

The Secrets of Faking a Test Project

Here is the slide deck: Secrets of Faking a Test Project. This is satire, but it is intended to get your attention and help you think about whether you are creating value on your projects, or merely faking it by following convention.

The Back Story

Back in 2007 I got a request to fill in for James Bach at a conference. He had an emergency to attend to and the organizers wondered if I could take his place. One of the requests was that I present James’ “Guide to Faking a Test Project” presentation because it was thought-provoking and entertaining. They sent me the slide deck and I found myself chuckling, but also feeling a bit sad about how common many of the practices are.

I couldn’t just use James’ slides because he has an inimitable style, and I had plenty of my own ideas and experiences to draw on that I wanted to share, so I used James’ slide deck as a guide and created and presented my own version.

This presentation is satirical – we challenge people to think about how they would approach a project where the goal is to release bad software, but to make it look as if you really tried to test it well. It didn’t take much effort on our part; we just looked to typical, all-too-common practices that are often the staple of how people approach testing projects, and presented them from a different angle.

I decided to release these slides publicly today, because almost 5 years after I first gave that presentation, this type of thing still goes on. Testers are forced into a wasteful, strict process of testing that rarely creates value. One of my colleagues contacted me – she is on her first software testing project. She kept asking me about different practices that seemed to her completely counter-productive to effective testing, and asked if this was normal. Much of what she has described about her experiences is straight out of that slide deck. In this case, I think it is a naive approach: no doubt the culprits are managers far more worried about meeting impossible deadlines than about finding problems that might take more time than is allocated, rather than blatant charlatans who are deliberately faking it. Sadly, the outcome is the same.

If you haven’t thought about how accepted testing approaches and “best practices” can be viewed from another perspective, I hope this is an eye opener for you. While you might not think it is particularly harmful to a project to merely follow convention, you might be faking testing to the most important stakeholder: you.

Content Pointer: The Next Wave: Valuable Products First, Process Second

I wrote an article for Modern Analyst that describes some of my process thinking over the past several years. They asked for a piece on post-Agilism, but I prefer talking about value now, so I wrote this piece for them instead.

I introduce my thoughts on a trend I have witnessed for a while now where people move from software ideas (let’s build this killer app!), to process (let’s go Agile!), to value (let’s ensure our application blows our customers away, and everything we do feeds that effort.)

Three years ago, I didn’t hear people talk about value that much at all. It was all process, process, process, and how following an Agile or other process would lead us to success. Now, I am seeing the “value” word pop up more and more, and more teams are using an overall vision to help focus their efforts (process and otherwise) towards creating value for their customers and themselves.

Software Development Process Fusion – Part 1

I first brought this idea up publicly last year at the Agile Vancouver conference with a promise that I would share more of my thoughts. What follows is an attempt to fulfill that promise. This has turned out to be rather long, so it will appear as a blog series.

I grew up in an environment with a lot of music. My grandfather had a rare mastery over a wide variety of musical instruments, and family gatherings were full of singing and impromptu jams. At home, my father had a very eclectic taste in music, and I had a steady diet of gospel, classical, big band, bluegrass, traditional German and Celtic music. One of my babysitters had spent most of her life in India, and introduced me to all kinds of wonderful forms of Indian classical music when I was very small. I was exposed to popular music on the radio, and I took part in various music groups in school bands, choirs and at church. By the time I was in high school, I had a wide exposure to many different kinds of music, and enjoyed any sort of music that moved me, no matter what the style. I could enjoy a common thread in music that was composed and performed in a way that appealed to me, even if the styles were very different. In some cases, my classical friends couldn’t stand some of the popular music I enjoyed, and some of my gospel music friends would refuse to listen to secular music. Enjoying a wide variety of music styles could be controversial, depending on who I was talking to.

In the late 80s, I was in a choir that was in a competition in Toronto. I was billeted with a family who introduced me to a Canadian band called “Manteca.” My new friends introduced me to a style of music called “fusion,” and Manteca were well-known for their mastery of that style. There were elements of improvisational jazz, popular music and world music in their work. Because of Manteca, I decided to learn more about the history of this style. It didn’t take long before I discovered Miles Davis recordings from the ’60s and ’70s that pioneered a combination of musical styles. I then checked out work by Larry Coryell, and jazzy popular bands like Chicago and Blood Sweat & Tears. From Miles Davis, I followed the work of some of his former band members, such as John McLaughlin, Herbie Hancock, Chick Corea and Joe Zawinul. I also checked out groups like the Crusaders, Weather Report and anything with bassist Jaco Pastorius. Some of the music was highly experimental, and sometimes it was hard to listen to. One of my favorite bands in that style was Mahavishnu Orchestra, founded by wizard guitarist John McLaughlin. McLaughlin also founded a band called Shakti that utilized a different style of fusion. Shakti was a highly improvisational band utilizing master musicians from India, and McLaughlin on guitar.

From this musical journey, I discovered progressive music from the ’70s, with bands like Yes, King Crimson, Emerson Lake and Palmer, and Genesis. These were all groups with highly talented musicians who brought other musical styles into popular music forms. As with fusion, this style of music was highly experimental – some of it made popular charts, and other pieces remained obscure. What these styles of music share is a demanding level of skill for the performers and an experimental, pushing-the-envelope attitude, often utilizing collective improvisation (particularly in live concerts.) They were also controversial when they first came out, but many “wild in their time” elements have become enmeshed in mainstream music today. However, in the pioneering days of fusion, it was not uncommon for critics to pan albums and for purists to cry foul.

One musician who has had an enormous influence on popular music is former Genesis front-man Peter Gabriel. By the late 1990s, the style of music that Peter Gabriel made accessible to a huge audience in the 1980s emanated from airwaves and stereos everywhere you went. I heard Jesse Cook talk (a guitarist who fuses flamenco guitar with many different styles) about the impact Peter Gabriel has had on modern music, particularly with his ability to fuse popular music with traditional music from other parts of the world (“world music”.) Jesse Cook mused at the time that anything we heard on the radio could probably be traced back to Gabriel. When we were discussing the different styles that had an impact on everyday popular music, we wondered what it was like for the musical pioneers when their ideas were new, and how little most of us know about the history of music we take for granted.

I still listen to music that combines different styles. One of my recent discoveries is Harry Manx, a blues guitarist who plays slide guitar on a Mohan Vina, an Indian slide guitar. He deftly fuses traditional Indian music with traditional blues. I find that I am moved by too many styles of music to choose just one, and sometimes the weird combinations do something for me that just one style on its own can’t do. I still love classical music, and find that a particular period or style of music suits my mood. Music can touch us in ways other things can’t. Music also evolves, and musicians draw on many influences – yesterday’s “pure” style becomes influenced by something else, and we co-opt other ideas and change. I suppose we expect that in the arts. For example, the Canadian artist Emily Carr’s work is called “post-impressionist” because she came after the famous impressionist painters, and developed a unique style that doesn’t quite fit in that category. Like the fusion musicians, Carr’s artistry has many influences and changed a good deal over her life. Carr had a special ability to fuse disparate themes together in a painting. She might combine everyday objects we might see in our homes with a nature scene, or combine two different scenes together.

In software development, we don’t have a long and rich history to draw from like our artistic counterparts. That doesn’t stop me from approaching software development from the same angle as I do anything else though. Brian Marick in his “Process and Personality” article says: “…my methodological prescriptions and approach match my personality.” This is also true with me. I like to fuse different ideas together and see if I can create something new from the combination. There may be stark lines drawn between the fields where the ideas come from, but that doesn’t bother me too much. It gets me into trouble sometimes, but the ideas are what are important to me. When it comes to software development, I don’t really care if an idea is “Agile”, “waterfall” or has no label at all. If it’s a good idea to me, it’s a good idea. Sometimes on software development projects, I weave together combinations of these ideas in a way that may seem strange to some. I’ve started calling this style that I and others are exploring: “software development process fusion.”

Hiding Behind Languages, Frameworks and Processes

Clinton Begin recently did a talk at a local Java users group, called “Ruby Rebuttal“. While I like Ruby, I know that Clinton is tired of some of the hype surrounding the language, and the calls and predictions of Java’s impending death at its hand. I was unable to attend his talk, but I particularly liked this part of his abstract:

So what’s the problem? We’ve forgotten how to write good software. Instead, we choose to blindly follow “best practices” and “patterns” by simply stuffing what used to be good code into reams of XML and annotations.

I agree. I don’t really care about Ruby vs. Java, or vi vs. Emacs, or “Agile” vs. “non-Agile”, etc. etc. etc. debates. Furthermore, hype does little to impress me. Like Clinton, I find myself turned off by excessive hype, and it makes me want to run in the opposite direction. What impresses me are people who have a position on an issue and aren’t afraid to speak their minds. What impresses me even more are skilled practitioners with a track record. I know Clinton–we’ve worked together–and I have a great deal of respect for him. Reading that section of his abstract just raised him up a few notches in my eyes.

I would take what Clinton said one step further. We also stuff what used to be good practice for development teams into a process. We then hold this process up like some mystically-endowed talisman, and through ritual and jargon create a kind of priesthood. The process might be home-grown, it might be “Agile,” or it could be anything that seems fashionable at the time. There is “us,” and there is the “unenlightened” them. (Sometimes, as the previous links show, the “enlightened” helpfully point out the “them” in the form of a joke. How this attempt at parody does anything constructive is beyond me.) I might like or dislike a particular process, but if it is working for the team, that’s what really matters. A process, no matter what it is, is our servant, a tool to help us realize our goals.

When we lose sight of the goals of the business, it’s easy to hide behind a process, a framework, or a language. “Yeah, the product sucks, but we followed the 12 Practices of XP! Look at this scatter chart! We did everything we could!” or “It would have worked if we had used Java!” or “We used XML! It had to work!” Programming languages, frameworks, development processes and best practices can all be used as shields.

What Clinton touches on reminds me of a blog post by Robert Martin: I’d rather use a socket. Uncle Bob says:

I think the industry should join frameworks anonymous and swear off gratuitous framework adoption. We should all start using sockets and flat files instead of huge middleware and enormous databases — at least for those applications where the frameworks and databases aren’t obviously necessary.

The first time I read Martin’s blog post, Clinton and I were both looking at a design document. A challenge for the proposed system was performance, but it had so many options of distributed frameworks it looked like the latest ad for a Service Oriented Architecture consulting company. Clinton turned to me and said: “If performance is a big deal, why are they looking at a distributed system? Going to disk is going to be much faster.” At that moment I knew Clinton and I were going to get along just fine.

I see the same kind of behavior Clinton and Robert Martin describe when it comes to processes – Agile or otherwise. Paraphrasing Robert Martin: I think the industry should join processes anonymous and swear off gratuitous process adoption.

We should all start analyzing what our processes are doing for us, and only allow the practices that work for our team, in our context, to stick around. We need not hide behind “Agile this” or “XP that” or “CMM something-or-other”, but proudly display our skill, and more importantly, our handiwork to satisfied and impressed customers.

We need to write great software, and continuously improve on whatever languages, frameworks and processes get us there. Whatever we do, we must be aligned with the goals of the business, and take the skills of the team members, as well as the corporate culture into account when we adopt a tool. We need to use the tool that serves us, whether it is a programming language, an IDE, a framework or a process. Leave the religiosity to medieval scholars. If we lose sight of what we’re in business for in the first place, our processes will just end up as hollow ritual.

Test Automation is Software Development

This is a concept I can’t stress enough: test automation is software development. There really is no getting around it. Even if we use a record/playback testing tool, some sort of code is generated behind the scenes. This is nothing new as people like James Bach and Bret Pettichord have reminded us for years. Attempts to automate software development have been around for a while. Here’s a quote that Daniel Gackle sent to me in “Facts and Fallacies of Software Engineering” by Robert Glass:

Through the years, a controversy has raged about whether software work is trivial and can be automated, or whether it is in fact the most complex task ever undertaken by humanity. In the trivial/automated camp are noted authors of books like “Programming without Programmers” and “CASE — The Automation of Software” and researchers who have attempted or claim to have achieved the automation of the generation of code from specification. In the “most complex” camp are noted software engineers like Fred Brooks and David Parnas.

Software testing is also a non-trivial, complex task. Dan Gackle commented on why what Glass calls the “trivial/automated camp” still has such currency in the testing world, and has less support in the development world:

It’s a lot easier to deceive yourself into buying test automation than programming automation because test automation can be seen to produce some results (bad results though they may be), whereas attempts to automate the act of programming are a patently laughable fiasco.

I agree with Dan, and take this one step further: attempting to automate the act of software testing is also a fiasco. (It would be laughable if it weren’t for all the damage it has caused the testing world.) It just doesn’t get noticed as quickly.

If we want to automate a task such as testing, we first need to ask: “What is software testing?” Only once we know what it is are we ready to ask: “Can we automate software testing?”

Here is a definition of software testing activities that I’m comfortable with (I got it from James Bach):

  • Assessing product risks
  • Engaging in testing activities
  • Asking questions of the product to evaluate it. We do this by gathering information using testing techniques and tools.
  • Using a mechanism by which we can recognize a problem (an oracle)
  • Being governed by a notion of test coverage
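As a minimal sketch of the “oracle” point above (all names here are hypothetical, not from any particular tool): one common oracle mechanism is comparing an implementation under test against a slower but trusted reference. Note how much of the work the machine does *not* do; a human still decides what to compare and whether a discrepancy is actually a bug.

```python
from decimal import Decimal, ROUND_HALF_UP

def fast_round_half_up(x):
    # Hypothetical implementation under test: a "clever" rounding shortcut.
    return int(x + 0.5)

def reference_round_half_up(x):
    # Slow but trusted reference implementation, used as the oracle.
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

def agrees_with_oracle(x):
    # The oracle only *recognizes* a discrepancy; it can't judge it.
    return fast_round_half_up(x) == reference_round_half_up(x)

# The tool flags candidates; the tester investigates what each one means.
suspects = [x for x in (0.5, 1.5, 2.5, -0.5) if not agrees_with_oracle(x)]
```

Here the comparison flags `-0.5` (the shortcut rounds it toward zero, the reference away from zero); deciding whether that behaviour is a defect or a requirement is still a tester’s call.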

What we call “test automation” really falls under the tools and techniques section. It does not encapsulate software testing. “Test automation” is a valuable tool in our tester’s toolbox that helps us do more effective testing. It does not and cannot replace a human tester, particularly at the end-user level. It is a sharp tool, though, and we can easily cut ourselves with it. Most test automation efforts fail because they don’t take software development architecture into account, they don’t plan for maintenance, and they tend to be understaffed, often with non-programmers.
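To illustrate the architecture and maintenance point with a sketch (class and locator names are invented for the example): a “recorded” script hard-codes UI details like `click("btn_27")` in every test, so one UI change breaks them all. A thin abstraction layer in the page-object style localizes that knowledge in one place.

```python
class LoginPage:
    # UI locators live in exactly one place; a UI change means one edit
    # here instead of edits scattered through every recorded script.
    USERNAME_FIELD = "txt_user_name_v2"
    SUBMIT_BUTTON = "btn_27"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username):
        self.driver.type_into(self.USERNAME_FIELD, username)
        self.driver.click(self.SUBMIT_BUTTON)

class FakeDriver:
    # Stand-in for a real UI driver so this sketch runs on its own;
    # it just records the actions it was asked to perform.
    def __init__(self):
        self.events = []

    def type_into(self, locator, text):
        self.events.append(("type", locator, text))

    def click(self, locator):
        self.events.append(("click", locator))

driver = FakeDriver()
LoginPage(driver).log_in("kim")
```

This is exactly the kind of design decision that requires software development skill, which is the point: test code needs architecture just as product code does.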

Test automation efforts suffer from poor architecture, bugs (which can cause false positives in test results), high maintenance costs, and ultimately unhappy customers. Sound familiar? Regular software development suffers from these problems as well, but we get faster and louder feedback from paying customers when we get it wrong in a product. When we get it wrong in test automation, it is more insidious; it may take a long time to realize a problem is there. By that time, it might be too late. Customers are quietly moving on to competitors, talented testers are frustrated and leaving your company to work for others. The list goes on.

This silver-bullet attitude toward “test automation” contributes to the false reputation of testing as a trivial task, and testers are blamed for the ultimate poor results: “Our testers didn’t do their jobs. We had this expensive tool that came with such great recommendations, but our testers couldn’t get it to work properly. If we can hire an expert in Test Company X’s capture/replay tool, we’ll be fine.” So instead of facing up to the fact that test automation is a very difficult task that requires skill, resources, good people, design, and so on, we hire one person to do it all with our magic tool. And the vicious circle continues.

The root of the problem is that we have trivialized the skill in software testing, when we should have hired skilled testers to begin with. When we trivialize the skill, we open ourselves to the great claims of snake-oil salesmen who promise the world and underdeliver. Once we have sunk a lot of money into a tool that doesn’t meet our needs, will we admit it publicly? (In many cases, the test tool vendors forbid you from doing this anyway in their license agreements. One vendor forbids you from talking at all about their product when you buy it.)

In fact, I believe so strongly that “test automation” is not software testing, I agree with Cem Kaner that “test automation” is in most contexts (particularly when applied to a user interface) a complete misnomer. I prefer the more correct term “Computer Assisted Testing”. Until computers are intelligent, we can’t automate testing; we can only automate some tasks that are related to testing. The inquiry, analysis, and testing skill are not something a machine can do. Cem Kaner has written at length about this in: Architectures of Test Automation. In software development, we benefit greatly from automating many tasks related to software development without attempting to automate software development itself. The same is true of testing. Testing is a skilled activity.
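A tiny sketch of what “Computer Assisted Testing” can mean in practice (the function and range are made up for illustration): the machine handles a mechanical chore, such as enumerating boundary-value candidates around a range, while the human does the actual testing: choosing which candidates matter, running them, and judging the results.

```python
def boundary_candidates(lo, hi):
    """Values a tester commonly probes around an inclusive [lo, hi] range:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Suppose the product accepts ages 18 through 65. The tool enumerates
# the interesting inputs; the tester decides what each outcome means.
candidates = boundary_candidates(18, 65)
```

The assistance is real, but nothing here evaluates the product. The inquiry is still the tester’s job.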

Anyone who claims they can do software test automation without programming is either very naive themselves, or they think you are naive and are trying to sell you something.

Testers and Independence

I’m a big fan of collaboration within software development groups. I like to work closely with developers and other team members (particularly documentation writers and customers who can be great bug finders), because we get great results by working closely together.

Here are some concerns I hear from people who aren’t used to this:

  • How do testers (and other critical thinkers) express critical ideas?
  • How can testers integrated into development teams still be independent thinkers?
  • How can testers provide critiques of product development?

Here’s how I do it:

1) I try very hard to be congruent.

Read Virginia Satir’s work, or Weinberg’s Quality Software Management series for more on congruence. I work on being congruent by asking myself these questions:

  • “Am I trying to manipulate someone (or the rest of the team) by what I’m saying?”
  • “Am I not communicating what I really think?”
  • “Am I putting the process above people?”

Sounds simple, but it goes a long way.

We can be manipulative on agile teams as well. If I want a certain bug to be fixed that isn’t being addressed, I can subtly alter my status at a daily standup to give it more attention (which will eventually backfire), or I can be congruent, and just say: “I really want us to focus on this bug.”

Whenever I vocalize a small concern, even when the rest of the team is going another direction, it is worthwhile. Whenever I don’t, we end up with problems. Doing so helps me retain my independence as an individual working in a team. If everyone does this, we get diverse opinions, and hopefully diverse views on potential risks, instead of getting wrapped up in groupthink. Read Brenner’s Appreciate Differences for more on this.

Sometimes, we ignore our intuition and doubts when we are following the process. For example, we may get annoyed when we feel someone else is violating one of the 12 practices of XP. We may harp on them about not following the process instead of finding out what the problem is. I have seen this happen frequently, and I’ve been on teams that were disasters because we had complete faith in the process (even with Scrum and XP) and forgot about the people. How did we put the process over people? Agile methods are not immune to this problem. On one project, we ran around saying “that isn’t XP” whenever we saw someone doing something that didn’t fit the process. In most cases it was good work, and our refrain turned out to be a manipulative way of dealing with something we saw as a problem. In the end, some of those practices were good ones that should have been retained in that context, on that team. They weren’t “textbook” XP, but the people with the ideas knew what they were doing, not the inanimate “white book”.

2) I make sure I’m accountable for what I’m doing.

Anyone on the team should be able to come up and ask me what I’m doing as a tester, and I should be able to clearly explain it. Skilled testing does a lot to build credibility, and an accountable tester will be given freedom to try new ideas. If I’m accountable for what I’m doing, I can’t hide behind a process or behind what the developers are doing. I need to step up and apply my skills where they will add value. When you add value, you are respected and given freer rein on testing activities that might have been discouraged previously.

Note: By accountability, I do not mean lots of meaningless “metrics”, charts, graphs, and other visible measurement attempts that I might use to justify my existence. Skilled testing and coherent feedback will build real credibility; meaningless numbers will not. Test reports that are meaningful, kept in check by qualitative measures, developed with the audience in mind, and actually useful will do more to build credibility than generating numbers for numbers’ sake.

3) I don’t try to humiliate the programmers, or police the process.

(I now see QA people making a career out of process policing on Agile teams.) If you are working together, technical skills should be rubbing off on each other. In some cases, I’ve seen testing become “cool” on a project: on one project, not only the testers were testing, but the developers, the BA, and the PM as well. Each used their unique skills to help generate testing ideas and to engage in testing. This, in turn, gave the testers more credibility when they wanted to try different techniques that could reveal potential problems. Now that all the team members had a sense of what the testers were going through, more effort was made to enhance testability. Furthermore, the discovery of potential problems was now encouraged; it was no longer feared. The whole team really bought into testing.

4) I collaborate even more, with different team members.

When I find I’m getting stale with testing ideas, or I’m afraid I’m getting sucked into groupthink, I pair with someone else. Lately, a customer representative has been a real catalyst for my testing. Whenever we work together, I get a new perspective on project risks driven by what is going on in the business, and they find problems I’ve missed. This helps me generate new ideas for testing in areas I hadn’t thought of.

Sometimes working with a technical writer, or even a different developer, instead of the developer(s) you usually work with helps you get a new perspective. This ties into the accountability thought as well. I’m accountable for what I’m testing, but so is the rest of the team. Sometimes fun little pretend rivalries will occur: “Bet I can find more bugs than you.” Or “Bet you can’t find a bug in my code in the next five minutes.” (In this case the developer beat me to the punch by finding a bug in his own code through exploratory testing beside me on another computer, and then gave me some good-natured ribbing about him being the first one to find a bug.)

Independent thinkers need not be in a separate independent department that is an arm’s length away. This need for an independent testing department is not something I unquestionably support. In fact, I have found more success through collaboration than isolation. Your mileage will vary.