Michael Bolton said this about numbers on testing projects:
In my experience, the trusted people don’t earn their trust by presenting numbers; they earn trust by preventing and mitigating bad outcomes, and by creating and maintaining good ones.
I agree. Lately, I’ve been thinking about how we report numbers on testing projects. I was recently in a meeting of Software Quality Assurance professionals, and a phrase kept coming up that bothered me: “What percent complete are you on your project?” I take this to mean the percentage of test cases that have been exercised on the development project. I don’t feel I can know what “percent complete” I am on a project, so I’m uncomfortable giving out a number such as “90% complete.” How can I know what 100% of all test cases on a project would be? Cem Kaner’s paper “Impossibility of Complete Testing” shows how vast the set of possible tests on any project is.
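To make that vastness concrete, here is a toy calculation of my own (this example is not from Kaner’s paper): even a trivial two-argument function has an input space far too large to test exhaustively.

```python
# Toy illustration: the input space of even a trivial function is
# astronomically large.

def add(a: int, b: int) -> int:
    """A trivial function under test: adds two 32-bit integers."""
    return a + b

# Each 32-bit argument has 2**32 possible values, so exhaustively
# testing all input pairs would require:
total_inputs = (2**32) ** 2  # = 2**64 distinct test cases

# At one million tests per second, exhaustive testing would take:
seconds = total_inputs / 1_000_000
years = seconds / (60 * 60 * 24 * 365)

print(f"{total_inputs} possible inputs")
print(f"~{years:,.0f} years at a million tests per second")
```

That works out to roughly 585,000 years for one tiny function, and it ignores sequences, timing, environment, and program state entirely.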
Every project I’ve been on that claimed 100% test coverage later had a major bug discovered by a customer in the field that required a patch. At best, then, we had covered all but one of the possible test cases. The test case that would have caught the bug the customer found was not in our set, so how could we claim 100% completion? And how many more are we missing?
Reporting numbers like this is dangerous: they can create a false sense of security for testers and project stakeholders alike. If we as testers make measurement claims without considering the complexity of measurement, we should be prepared to lose credibility when bugs are found after we report a high “percent complete” number prior to shipping. Worse still, if testers feel they have completed 90% of all possible tests on a project, the relentless pursuit of knowledge and test idea generation is easily replaced by apathy.
Cem Kaner points out many of the variables involved in measuring testing efforts in his paper “Measurement Issues and Software Testing.” Accurate, meaningful measurement of testing activities is not simple, so why the propensity for providing simple numbers?
I look at a testing project as a statistical problem. How many test cases could there be if I knew the bounds of the project? Since I usually don’t know those bounds, it is difficult to do an accurate statistical analysis using formulas. Instead, I estimate based on what I know about the project now, and use heuristics to help deal with the vast number of possible tests that would need to be covered to get a meaningful percentage. As the project progresses, I learn more about it and use risk-based techniques to mitigate the risk to the customer. I can’t know all the possible test cases at any given time. I may have a number at a particular point in the project, so of the test cases I know of right now, we may have a percentage of completion. However, there may be many important ones that neither I nor the rest of the testing team have thought of yet. That is why I don’t like to quote “percent complete” numbers without providing a context, and even then I don’t present numbers alone.
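A hypothetical sketch of why I distrust the number: the same amount of executed testing produces very different “percent complete” figures depending on how many test cases we happen to know about at the time.

```python
# Hypothetical sketch: "percent complete" is only meaningful relative
# to the test cases we currently know about, and it shifts as we learn.

def percent_complete(executed: int, known: int) -> float:
    """Percent of *known* test cases executed -- not of all possible tests."""
    return 100.0 * executed / known

# Early in the project: 90 of 100 known test cases executed.
print(percent_complete(90, 100))   # 90.0 -- looks nearly done

# Later, exploratory testing reveals 150 more relevant cases.
print(percent_complete(90, 250))   # 36.0 -- same work, very different number
```

The denominator is a moving target, which is exactly why the number needs context before it means anything to a stakeholder.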
The Software Quality Assurance school of thought seems numbers-obsessed these days. I am interested in accurate numbers, but I don’t think we have enough information on testing projects to justify many of the numbers we have conditioned project stakeholders to rely on. Numbers are only part of the picture; what really matters to project stakeholders are positive project outcomes.
Numbers without context, careful analysis, and thought can give project stakeholders a false sense of security. This brings me back to Michael Bolton’s point: what is more important to a stakeholder, a number, or the opinion of a competent professional? In my experience, the latter outweighs the former. Trust is built by delivering on your word and helping stakeholders realize the results they need. Numbers may be useful when providing information to project stakeholders, but we need to be careful how we use them.