Unhealthy Goals

Chris Morris has blogged recently on a topic that is frequently misunderstood. There is often an attitude that we should shoot as high as we can when setting project goals. The line of thinking goes: “Why not attempt to achieve perfection? Even if we don’t get there, we’ll get better results than if we set a lower goal.” Unfortunately, on software projects (and probably other projects too), this line of thinking can have the opposite effect and actually harm the project.

Here are some project goals from my experience that have hindered product quality and been detrimental to the development team:

  1. 100% test coverage
  2. Zero defects
  3. 100% test automation

Goal 1, “100% test coverage”, is easily refuted, as Cem Kaner shows in his paper The Impossibility of Complete Testing. What’s wrong with having this as a goal? An experienced tester realizes that it is an impossible task and will probably feel like they are now on a death march project. At best, they will feel demoralized, unable to finish an impossible task. At worst, they might feel pressured to falsify test results to please management.

An inexperienced tester might get complacent and stop thinking about testing, instead rubber-stamping the software with a suite of regression tests. Why should we challenge our ideas about testing the software when we already have 100% coverage? Every time I have seen a product go out the door with “100% coverage”, a bug was found in the field. This ruins the credibility of the project team, especially if the numbers are used to measure performance or to market some claim of quality.
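To make this concrete, here is a minimal sketch in Python (the average function and its test are invented for illustration, not taken from any project above). The test executes every line, so a line-coverage tool such as coverage.py reports 100%, yet an obvious failure case is never exercised:

    def average(values):
        """Return the arithmetic mean of a list of numbers."""
        return sum(values) / len(values)

    def test_average():
        # Executes every line of average(), so a line-coverage tool
        # happily reports 100% coverage.
        assert average([2, 4, 6]) == 4

    # Never exercised, and invisible to the coverage number:
    # average([]) raises ZeroDivisionError -- exactly the kind of bug
    # that gets found in the field after shipping "100% covered" code.

A perfect coverage number tells you which lines ran, not which behaviours were questioned.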

Goal 2, “Zero Defects”, is also an impossible goal. If we can’t test every possible permutation and combination the software might ever be exercised with, how can we guarantee that it has zero defects? “But this is a good goal,” you say, “even if it is unachievable.” Not in my experience. Every zero-defect project I have seen has caused defect reporting to become politicized. After the initial exuberance wears off and testers and developers realize they are finding a lot of defects, strange things happen. Terminology changes, so certain classes of defects are called “issues” or “variances” or “thingies” so that they aren’t measured anymore. Developers (and managers) pressure testers and other developers not to log defects. Defects are closed without being fixed. A “shadow process” emerges where the defects that really need to be fixed aren’t logged through formal channels. Instead, they are logged and fixed away from the eyes and ears of other teammates and management. Even more defects can be injected into the code because of the resulting lack of communication and collaboration.

This is important to note, as Joel Spolsky writes: “Fixing bugs is only important when the value of having the bug fixed exceeds the cost of fixing it.” These are businesses we are running and working for, and everything we do needs to make sense for the financial health of the company.
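As a back-of-the-envelope sketch of that trade-off (every number here is invented purely for the sake of the arithmetic):

    # Hypothetical numbers, for illustration only.
    fix_cost = 40 * 75        # 40 developer-hours at $75/hour = $3,000

    customers = 1_000
    hit_probability = 0.02    # chance a given customer ever hits the bug
    cost_per_incident = 50    # support/goodwill cost per incident, in dollars

    # Expected cost of shipping with the bug: 1000 * 0.02 * 50 = $1,000.
    expected_cost_of_bug = customers * hit_probability * cost_per_incident

    # By Joel's rule of thumb, this particular bug is not worth fixing:
    # leaving it costs an expected $1,000, fixing it costs $3,000.
    print("fix it" if expected_cost_of_bug > fix_cost else "log it and move on")

The point is not the specific figures but that “fix everything” is rarely the answer the business arithmetic gives.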

Goal 3, “100% test automation”, has recently been re-popularized by the agile development community. The problem with this goal is that there are certain tests we simply cannot automate. In fact, there is little about testing that we can automate, especially if the end user of the software is a human. An automated test script is a very rough approximation of what a human does, because computers are not intelligent. Entire classes of tests are ignored or never thought of because they do not fit this testing paradigm, exploratory testing especially. Once automated test suites become sufficiently large, maintenance becomes an issue. There is pressure not to add or execute new test cases because there are already too many automated test cases to worry about.

Thorough, accurate, meaningful testing can be sacrificed when reaching the automation goal becomes more important than the testing itself. Relying too heavily on these automated tests often lets bugs ship that a human would have caught instantly. Rich manual testing gets squeezed out as those cycles are taken up with automation maintenance.
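A small, contrived Python sketch of how that happens (the function and test are hypothetical): a “golden value” check records the current output as the expected output, so the suite stays green on every run, even though any human glancing at the output would spot the garbled label instantly.

    def render_invoice_header(customer, total):
        # Bug: "Invoice" is misspelled and the currency symbol is garbled.
        return f"Invioce for {customer}: ??{total:.2f}"

    def test_render_invoice_header():
        # The expected string was recorded from the buggy output itself,
        # so this automated check rubber-stamps the defect on every run.
        assert render_invoice_header("Acme", 10) == "Invioce for Acme: ??10.00"

The script faithfully confirms that the program still does what it did yesterday; it has no opinion on whether what it did yesterday was right.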

Measuring individuals and teams by these sorts of standards is almost always counter-productive. SMART goals and other devices used on performance appraisals map nicely to numbers and percentages, but little of what we do in life can be accurately mapped to a two-variable graph; there are always other factors beyond our control, so-called “spurious variables”. At some point, measuring people against something that is always just beyond their reach will cause them to behave in unintended ways. There are even Dilbert cartoons about rewarding developers for the number of bugs fixed, and measuring testers on bugs found is equally counter-productive. Both can break down a team and hurt product quality.

When I talk to the people who decide on these goals, in most cases the goal they set isn’t the result they actually intend. If a senior manager tells me they have a goal of zero defects, I ask why.

Usually there is a quality issue with the software being delivered that is angering customers, and they desperately want it fixed. So I reframe the issue: “Would it be OK if the software you delivered was reliable and robust enough that your customers are happy? Why not make happy customers who can rely on our software our goal?” Often that is a satisfactory goal to the manager, and it is something reasonable to shoot for. It is also helpful to look at goals over the long term and to aim for consistent attempts at improvement.

When I hear things like “100% test automation” and question further, it is almost always about efficiency. So I reframe the issue: “We need to look for ways to be more efficient in our testing. Why don’t we analyze our testing processes, both manual and automated, choose the most efficient methods we can, and keep working for more efficiency? Why not make efficiency our goal?” In some cases, strategic manual testing may be more efficient and cost-effective than automated testing. In many projects, especially at the beginning of a test automation effort, a little test automation can go a long way.

The true motivation behind these goals is important to understand. In many cases I’ve seen the numbers line up nicely with the goals, only to have the intended but unexpressed goal fail. “That’s great that you have zero defects when shipping, but our customers are unhappy!” an executive might say. Mary Poppendieck says: “Measure up!” This means measure what is really important: look at the big picture. If you measure details too closely, you may miss it. If you do set detail-oriented goals, carefully analyze resources, schedules, and the people on the project instead of just picking a number to shoot for, and be sure to measure how the goals feed the big picture. If they aren’t contributing to it (the bottom line; happy customers; a happy, healthy, productive team), drop them. Under scrutiny, it can be surprising how many of these unrealistic goals are barriers to exactly those things.

Many process-certified projects with wonderful charts and graphs and SMART goals all over the place release crummy products that customers quietly stop using. After the cheering over process numbers fades and the company finds it can’t sell products like it used to, people wonder why. If we measure product success instead of adherence to a process, and measure how the project feeds the bottom line instead of things like “Zero Defects”, we might end up with better results.