Testing Values

I was thinking about the Agile Manifesto, and this blatant ripoff came to mind. As a software tester, I’ve come to value the following:

  • bug advocacy over bug counts
  • testable software over exhaustive requirements docs
  • measuring product success over measuring process success
  • team collaboration over departmental independence

Point 1: Project or individual bug counts are meaningless unless the important bugs are getting fixed. There are useful bug-count-related measurements, provided they are used in the right context. However, bug counts themselves don’t have a direct impact on the customer. Frequently, testers are motivated much more by how many bugs they log than by how many important bugs they find, report, advocate for, and pitch in to help get fixed before the product goes out the door.

Point 2: We usually don’t sell requirements documents to our customers (we tend to sell software products), and these docs often provide a false sense of covering everything that is testable. Given a choice, I’d rather test the software, discovering requirements by interacting with customers and collaborating with the team, than follow requirements documents. At least then we can start providing feedback on the software itself. At best, requirements docs are an attempt to put tacit knowledge on paper. At worst, they are out of date and out of touch with what the customer wants. Planning tests solely from requirements documents leaves us open to faults of omission.

Point 3: I find the obsession with processes in software development a bit puzzling, if not absurd. “But to have good software, we need to have a good process!” you say. Sure, but I fear we measure the wrong things when we look too much at the process. I’ve seen wonderful processes produce terrible products too many times. As a customer, I haven’t bought any software processes yet, but I do buy software products. I don’t think about processes at all as a consumer. I’ll take product excellence over “process excellence” any day. The product either works or doesn’t work as expected. If it doesn’t, I quietly move on and don’t do business with that company any more.

I have seen what I would call process zealotry, where teams were pressured not to talk about project failures because they “would cast a bad light” on the process that was used. I have seen this in “traditional” waterfall-inspired projects and, interestingly enough, in the agile world as well. If we have problems with the product, we should learn from the mistakes and strive to do better. Don’t cover up failures because you fear that your favorite process might get some bad press. Fix the product, and make the customer happy. If you don’t, they will quietly move on, and you will eventually be out of business.

Point 4: The “QA” line of thinking that advocates an independent testing team doesn’t always work well in my experience. Too often, the QA folks end up as the process police, at odds with everyone else, and not enough testing gets done. Software testing is a challenging intellectual exercise, and software programs are very complex. The more testing we can do, and the more we can collaborate to make that testing effective, the better. The entire team should be the Quality Assurance department. We succeed or fail as a team, and product quality, as well as adherence to development processes, is everyone’s responsibility.