
Getting Started With Exploratory Testing – Part 4

Practice, Practice, Practice

To practice exploratory testing, you will find that you need to be able to generate test ideas, sometimes at a moment’s notice. The sky (or more accurately your mind) is the limit to the kinds of tests you can implement and practice with. The more you think up new tests, and learn to analyze software from many different angles, the more thorough your testing and the resulting discoveries will be. Like anything else, the more you practice, the better you will get.

Test Idea Strategies

Here are some test strategies you can use to generate test ideas to practice exploratory testing. Practice testing, but also practice thinking about testing, applying strategies and doing analysis work.

Concrete Strategies

Sometimes as a tester you need to grab specific test ideas off the shelf to get started quickly. Do you have a source of ideas for specific strategies, test sequences, or test inputs? Some test teams keep quick-reference lists of specific test ideas for their applications under test for this purpose. If you are practicing exploratory testing and find you need some new ideas quickly, here are some places to look.
Michael Hunter has a fabulous list of test ideas in his You Are Not Done Yet blog series.

Elisabeth Hendrickson’s Test Heuristics Cheat Sheet describes “data type attacks & web tests”.

Karen Johnson has some concrete test ideas on her blog as well. Check out Testing by the Numbers; Chars, Strings and Injections; and “; SQL injections --” for some ideas.

Concrete test ideas can be incredibly useful. Recently I decided to grab some of my favorite testing numbers off the shelf and feed them through a test interface I had just developed (written in Java, using a library that allowed me to write commands in JUnit tests). I found a bug within seconds, thanks to a testing interface that let me enter test inputs with more speed and accuracy, and to my off-the-shelf testing numbers.
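
To give a flavour of what that can look like, here is a minimal sketch in JUnit. The handles() method is a hypothetical stand-in for the test interface I described (it is not the actual code I used), and the favorite numbers are typical boundary values.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class FavoriteNumbersTest {
        // Off-the-shelf boundary numbers: zero, negatives, powers of two,
        // and integer limits where overflow bugs tend to hide.
        private static final long[] FAVORITES = {
            0, -1, 1, 127, 128, 255, 256, 65535, 65536,
            Integer.MAX_VALUE, (long) Integer.MAX_VALUE + 1
        };

        // Hypothetical stand-in for the real test interface; in practice
        // this would send a command to the application under test.
        static boolean handles(long input) {
            return true; // replace with a call into your application
        }

        @Test
        public void favoriteNumbersAreHandledGracefully() {
            for (long n : FAVORITES) {
                // Each input should be accepted or rejected cleanly, never crash.
                assertTrue("Failed on input: " + n, handles(n));
            }
        }
    }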

Abstract Strategies

Abstract strategies are all about testing ideas. These require more thinking than a quick off-the-shelf test, but the benefits are enormous. Applying these kinds of strategies can result in very thorough, creative, and disciplined test strategies and execution. You can use them to help design tests in the moment, or while doing test analysis or test planning. Some examples include test models and heuristics.

James Bach’s Heuristic Risk-Based Test Strategy Model is a great place to start. In fact, James’ methodology page is full of great ideas for testers. I touched on the subject of a benefit of using models in this blog post Speeding Up Observation with Models.

Thanks to the work of James, there are several testing mnemonics you can use, or better yet, use as inspiration to create your own.

I touched on the subject of learning testing heuristics here.

Other models include coverage models, failure mode analysis, McLuhan thinking, and many others.

Language Strategies

This is another abstract strategy, but language is so important I decided to give it its own category. Example strategies include looking at the meaning of words, how they are put together, and how they might be interpreted in different ways. I use dictionaries a lot to help my thinking, and they can be a great tool to expose vague terms. I also use grammar rules when I investigate a product and look for potential problem areas. I engage in almost a type of “software hermeneutics.” The field of linguistics (the study of language) has a wealth of tools we can also use to help generate test ideas.

The requirements-based testing community has a good handle on testing written documentation. When I look at anything written down that has to do with the product, whether in the software itself, in supporting documentation, in comments, or in tests, I examine the wording carefully and look for potential ambiguities. For example, Bender RBT Inc. has a paper, “The Ambiguity Review Process,” which contains a section called “List of Words that Point to Potential Ambiguities.”

Once you find an ambiguity, can you interpret the usage of the software in different ways and test according to different definitions? This is often a surprising source of new test ideas. I use language analysis a lot as a tester, and it is a powerful method for me to generate test ideas.

Sensing and Feeling Strategies

Using Emotions

This can include situations where our senses are telling us something that we aren’t immediately aware of. We feel a sense of discomfort, sometimes called “cognitive dissonance”. Things appear to be functioning, but we feel conflicted, sometimes for reasons that aren’t readily apparent. I’ve learned to use the feeling that something is off as a clue to start investigating that area much more heavily. It almost always pays off in the discovery of a good bug.

Michael Bolton describes this emotional side of testing very well:

…When I feel one of those emotional reactions, I pause and ask “Why do I feel this way?” Often, it’s because the application is violating some consistency heuristic–it’s not working as it used to; it’s not supporting our company’s image; there’s a claim, made in some document or meeting or conversation, that’s being violated; the product is behaving in a way that’s inconsistent with a comparable product; …

Update: Be sure to check out Michael’s excellent presentation on this topic here. It might change the way you think about testing.

Using Your Senses

Have you ever pushed a machine to its limits when testing and noticed something physically change in your test environment? This is also a trigger for me to push harder in that direction, because a discovery is on the way.
Here are some examples:

  • a sluggish response when interacting with the UI or an API
  • interaction slowing down over time
  • sensing the machine labouring
  • hearing the fan kick in hard when running a test
  • feeling the device get hot

An old rule of thumb when traveling is to use a map, but follow the terrain. Being aware of any strange feelings you have when testing (it feels off, let’s investigate this a bit more), and being aware of changes to the test environment are great ways to explore the system.

George Dinwiddie reminded me of a good rule of thumb:

As Bob Pease (an analog EE) used to say in Electronic Design and EDN magazines, ‘If you notice anything funny, record the amount of funny.’

As a tester, when I see something funny, my first instinct is to get the application to repeat the funny behavior. Then I observe whether follow-on testing creates even more funny behavior. In many cases it does, because bugs often tend to cluster.

Some of my most significant discoveries have come about because I paid attention to a weird feeling and investigated further: I noticed something flicker out of the corner of my eye, saw that a log file I was tailing on a server was writing at a strange rate, or found that a certain action caused my machine to start labouring. It’s often the subtle things that are the biggest clues to overlooked problems in the system, so it’s a good idea to practice your observation skills.

Oblique Strategies

These are ideas you can use to spur creative thought processes if you find yourself running out of ideas. Michael Bolton introduced me to these last year while we were working on a testing exercise. I was running out of ideas, and Michael sent me to this web site that generates oblique strategies. After a couple of tries, I ended up on a whole new idea set that jump-started my testing.

Wikipedia says this about Oblique Strategies: “…is a set of published cards created by Brian Eno and Peter Schmidt. Now in its fifth edition, it was first published in 1975. Each card contains a phrase or cryptic remark which can be used to break a deadlock or dilemma situation.”

When I was a kid, my family had a game where you would blindly open a dictionary, close your eyes, and put your finger down on a page. You would then open your eyes, look at the word closest to your finger, and make up a short story involving that word. A variation involved doing this for two words and then combining them to try to come up with something creative or funny. This is a similar kind of idea-generating strategy to oblique strategies, but not as sophisticated. There are other, similar ways of getting oneself out of a rut.
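
As a toy illustration, the dictionary game is easy to automate. This sketch assumes a plain word list at /usr/share/dict/words, which is common on Unix-like systems; any one-word-per-line file will do.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.Random;

    public class DictionaryGame {
        public static void main(String[] args) throws IOException {
            // Any one-word-per-line file will serve as the "dictionary".
            List<String> words = Files.readAllLines(Paths.get("/usr/share/dict/words"));
            Random random = new Random();
            // Blindly put a finger down on two pages...
            String first = words.get(random.nextInt(words.size()));
            String second = words.get(random.nextInt(words.size()));
            // ...and combine the words into a prompt for a new test idea.
            System.out.println("Test idea prompt: " + first + " + " + second);
        }
    }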

With these sorts of tools you often need to be creative to apply your result to testing, but with a little bit of trial and error and creative thinking, you will be surprised at what you come up with.

Uncategorized, or Inexplicable Strategies

Sometimes, believe it or not, some of my friends dislike my need to systematize, categorize and neologize. They prefer to think that their testing ideas are the work of intuition, or are dropped to them by the testing muse, or the idea fairy. That’s fine, and I can respect that they think about things differently. Furthermore, I frequently miss out on ideas and insights that they have. I’ve added this section to include the mysterious, the unexplained, or any other test strategy I might not be aware of, or be hard-pressed to categorize.

What works for you? Do you use a strategy for testing that I haven’t touched on in this post? This list is in no way exhaustive, and it, like any other model, is imperfect. I’m sure my own model and strategies will change as I learn more, and get more feedback from colleagues, and you, the reader.

I hope this list of test strategies helps you as you practice exploratory testing. Test idea generation can be as creative as we permit ourselves to explore, and one of the wonderful things about software testing is that sometimes the weird idea combinations are the best ones. Furthermore, practice helps build the strength and agility of the most powerful testing tool we have at our disposal, the human mind.

Why Are We in Business?

When I look at my work in software testing, I find myself at the intersection of three disciplines:

  • Technology
  • Philosophy
  • Business

I have business training, with a focus on technology. Sometimes I forget about the business and focus on testing ideas, technology concerns, and following a process. As a tester, I think about the product and its impact on users, but I forget about what really keeps us in business: the sale of a product or service. Whether a product is saleable might itself be a testing question we should ask.

It matters not a whit what our process is if we aren’t bringing in as much or more money than our business is shelling out. I have seen fabulous execution of software development processes in companies that went under. I’ve also seen completely chaotic development environments in wildly successful companies. What was the secret? The successful group focused on sales, made the product easy to use, and really listened to the customer. In the end, the difference between the development teams having a job or not came down not to technology or process decisions, but to whether the company could sell the product.

Technology, process and engineering considerations are useful to the extent that they help the company feed the bottom line. If a process helps us deliver a product that customers are happy with, then great. However, we need to be careful about what we are measuring. We need to look at the big picture, and not congratulate ourselves merely for adhering to our favorite process. If the company isn’t making enough money, none of it will matter. In time we’ll be a project team looking for work, lamenting how great our process was at the last company we were at.

As Deming said, everyone within a company is responsible for quality. I also believe we are all responsible for the company’s success in the market. Focusing only on engineering, technology and process may not help if we are a company that can’t sell our products, and can’t satisfy and impress a customer. If we reward employees for adherence to a process, how long will it take for those rewards to become more important than the goals of the business? At the heart of every business is this equation:

Revenue - Expenses == Profit

Without profits, the business will eventually cease to exist. In software development, it pays to keep this in mind. Too often we are distracted by details in our daily work, and forget the basics of business.

Unhealthy Goals

Chris Morris has blogged recently on a topic that is frequently misunderstood. There is often an attitude that we should aim as high as we can when setting project goals. The line of thinking goes: “Why not attempt to achieve perfection? Even if we don’t get there, we’ll get better results than if we set a lower goal.” Unfortunately, on software projects (and probably other projects too), this line of thinking can have the opposite effect, actually causing harm to a project.

Here are some project goals from my experience that have hindered the quality of a product, and have been detrimental to the development team:

  1. 100% test coverage
  2. Zero defects
  3. 100% test automation

Goal 1, “100% test coverage”, is easily refuted, as shown in Cem Kaner’s paper The Impossibility of Complete Testing. What’s wrong with having this as a goal? An experienced tester realizes the task is impossible, and will probably feel like they are now on a death-march project. At best, they might feel demoralized at being unable to finish it. At worst, they might feel pressured to falsify test results to please management.

An inexperienced tester might get complacent and stop thinking about testing, instead rubber-stamping the software with a suite of regression tests. Why should we challenge our ideas about testing the software when we already have 100% coverage? Every time I have seen a product go out the door with “100% coverage”, a bug was found in the field. This ruins the credibility of the project team, especially if the numbers are used to measure performance or to market some notion of quality.
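
A small invented example shows how the illusion works. The single test below executes every line of discounted(), so a coverage tool reports 100%, yet it says nothing about the inputs that break the method.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountTest {
        // Applies a percentage discount to a price expressed in cents.
        static long discounted(long priceInCents, int percent) {
            return priceInCents * (100 - percent) / 100;
        }

        @Test
        public void tenPercentOff() {
            // This one test covers 100% of the lines in discounted(),
            // but never tries percent > 100 (a negative result) or a
            // very large price (the multiplication silently overflows).
            assertEquals(900, discounted(1000, 10));
        }
    }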

Goal 2, “Zero Defects”, is also an impossible goal. If we can’t ever test every possible permutation and combination that the software might be exercised with, how can we guarantee that our software has zero defects? But this is a good goal, you say, even if it is unachievable. Not in my experience. Every zero-defect project I have seen has caused defect reporting to become politicized. After the initial exuberance wears off and testers and developers realize they are finding a lot of defects, strange things happen. Terminology changes, so that certain classes of defects are called “issues” or “variances” or “thingies” and aren’t measured anymore. Developers (and managers) pressure testers and other developers not to log defects. Defects are closed without being fixed. A “shadow process” emerges where the defects that really need to be fixed aren’t logged through formal channels; instead, they are logged and fixed away from the eyes and ears of other teammates and management. Even more defects can be injected into the code because of the resulting lack of communication and collaboration.

This is important to note, as Joel writes: “Fixing bugs is only important when the value of having the bug fixed exceeds the cost of fixing it.” These are businesses we are running and working for, and everything we do needs to make sense for the financial health of the company.

Goal 3, “100% test automation”, has recently been re-popularized by the agile development community. The problem with this goal is that there are certain tests we simply cannot automate. In fact, there is little about testing that we can automate, especially if the end user of the software is a human. An automated test script is a very rough approximation of what a human does, because computers are not intelligent. Entire classes of tests are ignored or never thought of because they do not fit this testing paradigm, especially exploratory testing. Once automated test suites become sufficiently large, maintenance becomes an issue. There is pressure not to add or execute new test cases because there are already too many automated test cases to worry about.

Thorough, accurate, meaningful testing can be sacrificed when automating tests to reach this goal becomes more important. Relying too heavily on these automated tests often lets bugs be released that a human would have caught instantly, and less rich manual testing gets done as those cycles are taken up by maintenance.

Measuring individuals and teams by these sorts of standards is almost always counter-productive. SMART goals and other devices used in performance appraisals map nicely to numbers and percentages, but there is little we do in life that can be accurately mapped to a two-variable graph; there are many other factors beyond our control, so-called “spurious variables”. Measuring people against something that is always just beyond their reach will, at some point, cause them to behave in unintended ways. There are even Dilbert cartoons about rewarding developers for the number of bugs fixed, and measuring testers on bugs found is equally counter-productive. Both can lead to a breakdown within a team, with product quality suffering.

When I talk to the people who decide on these goals, in most cases the goal they set isn’t the result they are actually looking for. If a senior manager tells me they have a goal of zero defects, I ask why.

Usually there is a quality issue angering customers, and the manager desperately wants it fixed. So reframe the issue: “Would it be OK if the software you delivered was reliable and robust enough that your customers were happy? Why not make happy customers who can rely on our software the goal?” Often that is a satisfactory goal for the manager, and something reasonable to shoot for. It is also helpful to look at goals over the long term, and to aim for consistent attempts at improvement.

When I hear things like “100% test automation” and question further, it is almost always about efficiency. Reframe the issue: “We need to look for ways to be more efficient in our testing. Why don’t we analyze our testing processes, both manual and automated, choose the most efficient methods we can, and keep working toward more efficiency? Why not make efficiency our goal?” In some cases, strategic manual testing may be more efficient and cost-effective than automated testing. On many projects, especially at the beginning of a test automation effort, a little test automation can go a long way.

The true motivation behind these goals is important to understand. In many cases I’ve seen the numbers line up nicely with the stated goals, only to have the intended but unexpressed goal fail. “That’s great that you have zero defects when shipping, but our customers are unhappy!” an executive might say. Mary Poppendieck says: “Measure Up!” This means measure what is really important; if you measure details too closely, you may miss the big picture. If you do set detail-oriented goals, carefully analyze resources, schedules and the people on the project instead of just picking a number to shoot for, and be sure to measure how the goals feed the big picture. If they aren’t contributing to the big picture (the bottom line, happy customers, a happy, healthy, productive team), drop them. It might be surprising, under scrutiny, to see how many of these unrealistic goals are barriers to exactly those things.

Many process-certified projects with wonderful charts and graphs and SMART goals all over the place release crummy products that customers quietly stop using. After the cheering over process numbers fades and the company finds it can’t sell products like it used to, people wonder why. If we measure product success instead of adherence to a process, and measure how the project feeds the bottom line instead of things like “Zero Defects”, we might end up with better results.

Tests as Documentation Workshop Notes

At this year’s XP Agile Universe conference, Brian Marick and I co-hosted a workshop on Tests as Documentation. The underlying theme was: how are tests like documentation, and how can we use tests as project documentation? Can we leverage tests as project documentation to help minimize wasteful documentation?

Since the most up-to-date information about a product is in the source code itself, how do we translate that into project documentation? In the absence of a tool that traverses the code, translates it, and generates documentation, are tests a good place to look? Can we take the tests we have and use them as documentation as-is, or do we need to design tests a specific way?

We solicited tests from workshop participants, and had some sample tests developed in JUnit with the corresponding Java code, tests developed with test::unit and the corresponding Ruby code, and some FIT tests.

Brian organized the workshop in a format similar to patterns or writers’ workshops. This was done to facilitate interaction and to generate many ideas in a constructive way. Groups divided up to look at the tests and to try to answer the questions from the workshop description. Once the pairs and groups had worked through those questions, they shared their own questions with the room. Here is a summary of some of the questions that were raised:

  1. Should a test be written so that it can be understood by a competent practitioner (much like the skill required to read a requirements document)?
  2. How should customer test documentation differ from unit test documentation?
  3. With regard to programmer tests: is it a failure if a reader needs to look at the source code in order to understand the tests?
  4. What is a good test suite size?
  5. How do we write tests with an audience in mind?
  6. What is it that tests document?
  7. How should you order tests?
  8. Should project teams establish a common format for the documentation regardless of test type?
  9. How do we document exception cases?

Groups took up some of these questions, but not all of them. I encourage anyone who is interested to look at examples that might answer some of them and share those examples with the community. While discussion within groups, and with the room as a whole, didn’t produce many answers, the questions themselves are helpful for the community as it thinks about tests as documentation.

Of the ideas shared, there were some clear standouts for me. These involve considering the reader, something I must admit I haven’t spent enough time thinking about.

The Audience

Brian pointed out an important consideration: when writing any kind of documentation, whether it is an article, a book, or project documentation, one needs to write with an audience in mind. When reviewing tests (and as a test writer myself), I notice that I don’t always write tests with an audience in mind. Often I’m thinking more about the test design than about the audience who might be reading the tests. This is an important distinction we need to keep in mind if we want tests to be used as documentation. Can we write tests with an audience in mind and still have them be effective tests? Will writing tests with an audience in mind help us write better tests? If we don’t write with an audience in mind, the tests won’t work very well as documentation.
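
Here is an invented example of the difference. Both tests check the same hypothetical shipping rule; only the second is written so that a reader can take it as a statement about the product.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShippingFeeTest {
        // A hypothetical business rule, invented for illustration.
        static long shippingFee(long orderTotalInDollars, boolean goldMember) {
            return (goldMember && orderTotalInDollars >= 500) ? 0 : 10;
        }

        // Written for the test machinery, not for a reader.
        @Test
        public void test1() {
            assertEquals(0, shippingFee(500, true));
        }

        // Written with an audience in mind: the name alone documents
        // the rule, and the assertion reads as a worked example of it.
        @Test
        public void goldMembersPayNoShippingOnOrdersOfFiveHundredDollarsOrMore() {
            assertEquals(0, shippingFee(500, true));
        }
    }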

What are We Trying to Say?

Another standout for me was the question of what it is that tests document. We were fortunate to have example tests for people to review. The FIT tests seemed easier for non-developers to read, while the developers jumped into the Ruby test::unit and JUnit tests immediately. Some testers who weren’t programmers paired with developers, who explained how to read the tests and what the tests were doing. I enjoyed seeing this kind of collaboration, and it got me thinking; more on that later. The point is, if we are writing a document, we need to have something to say. I’m reminded of high school English classes, learning how to develop a good thesis statement, and my teachers telling us we needed to find something to say.

Order is Important

Another important point that emerged about tests as documentation was the order of the tests. Thinking of tests as documentation means thinking about their order, not unlike chapters in a book or paragraphs in a paper. A logical order is important; without it we can’t get our ideas across clearly to the reader. It is difficult to read something with jumbled ideas and no consistent, orderly flow.

With regard to the audience, one group identified two different potential audiences among programmers: designers and maintainers. A designer will need a different set of tests than a maintainer, and the order in which the tests are developed will differ for each. There are more audiences on a project than programmers, and those audiences may require a different order of tests again.

Dealing with the “What is it that tests document?” question, one group felt that different kinds of tests document different things. For example, the unit tests the developers write document the design requirements, while the User Acceptance Tests document the user requirements. The fact that some developers seemed more at home reading the unit tests, while some testers were more comfortable reading the FIT tests, might give some credence to this: each is used to reading different project literature and may be more familiar with one mode than another.

Another important question was: “How do we define tests and explain what they are supposed to do?” If tests should also serve as project documentation, and not just exercise the code or describe how to exercise the product in certain ways, then the definition of tests will change from project to project.

Workshop Conclusions

I’m not sure we reached any firm conclusions in the workshop, though the group generated many excellent ideas. A workshop goal was to identify areas for further study, so we certainly met that. One idea that came up, and that I’ve been thinking about for a few months, is to embed more verbose meta-descriptions in the automated tests: the tests would carry program-describing details within their comments, and a tool such as JavaDoc or RDoc could generate project documentation from the specially tagged comments. I like this idea, but the maintenance problem is still there. It’s easy for the comments to get out of date, and the approach requires duplication of effort.
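
A minimal sketch of what that might look like in JUnit, with an invented fee rule; the JavaDoc comment carries the reader-facing description, and a documentation tool could collect such comments, with the maintenance caveat noted above.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class TransferFeeTest {
        // A hypothetical fee rule, invented for illustration.
        static long transferFee(String fromAccount, String toAccount) {
            return 0; // transfers between a customer's own accounts are free
        }

        /**
         * Documents the business rule in prose: transfers between a
         * customer's own accounts settle immediately and carry no fee.
         * A tool like JavaDoc can gather these comments into project
         * documentation, but they duplicate the code and can drift.
         */
        @Test
        public void transfersBetweenOwnAccountsCarryNoFee() {
            assertEquals(0, transferFee("chequing", "savings"));
        }
    }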

Most important to me were the questions raised about the audience, and how to write tests with an audience in mind. It appears that the tests we write today can’t necessarily be taken on their own and used as documentation the way requirements documents are. None of the tests we looked at sufficiently explained the product; the readers either had to consult the developer or look at the source code. This wasn’t a weakness or shortcoming of the tests, but it showed us that tests as documentation is an area that needs more thought and work.

A couple of other very interesting observations were made. One, from a developer, was that you can tell by reading tests whether they were generated through Test-Driven Development (TDD). Another was that needing to consult the source code to figure out what the program is doing while reading the tests might be a testing smell. These observations, coupled with the tester/developer collaboration when reading tests, got me thinking in a different direction.

My Thoughts

At the end of the workshop, I found myself less interested in tests serving as documentation to replace requirements documents, project briefs or other project documents. Instead, I started thinking about reading tests as a kind of testing technique in itself, imagining a sort of “literary criticism” we could use to test our tests. This is an area that is hard to deal with: how thorough is our test coverage? Are our tests good enough? Are we missing anything? How do we know if our tests are doing the job they could be? I see a lot of potential in testing our tests by borrowing from literary criticism.

Brian spoke about writers’ workshops as a safe place for practitioners, peers and colleagues to look over each other’s work before it is published. This kind of atmosphere helps writers do better work: it offers good constructive criticism in a safe environment, before the work is published and potentially savaged by the masses if something important was missed. For a “testing the tests” technique, instead of an us-versus-them relationship built on negative criticism, we could hold test writers’ workshops to critique each other’s tests. The point is to have a safe environment in which to make the tests (and thereby the product) as solid as they can be before they are open to being “…savaged by the masses”, for example by customers finding problems or faults of omission.

Here are three areas I saw in the workshop that could potentially help in testing the tests:

  1. I saw testers and developers collaborating, and it occurred to me that explaining what you have written (or coded) is one of the best ways of self-critiquing. When explaining how something works to someone else, I find myself noticing holes in my logic, and the other person may spot holes in what has been written. That editor, or second set of eyes, really helps, as pair programming has demonstrated.
  2. I heard expert developers saying they could read *Unit tests and tell immediately whether they were TDD tests or not. TDD tests, they told us, are richer by nature because they are more tightly coupled to the code. There is potential here for senior developers to read the tests to critique constructively and find potential weak spots. A developer outside the pair that did the work could read the tests as a type of test audit or editorial review.
  3. The emergence of a possible test smell, “If we have to look at the code to explain the program, are we missing a test?”, prompted me to think of a catalog of test smells that reviewers could draw on. We look for bad “writing smells” using rules of grammar, spelling, and so on. We could develop something similar for this style of review of our tests, complementing the work that has already been done in the test automation area: reading the tests to find “grammatical” errors in them.

I still think there is a lot of potential in using tests as documentation, but it isn’t as simple as taking the tests we write today and turning them into project documentation in their original form. I encourage developers and testers to look at tests as documentation, and to think about how they might replace wasteful documentation.

I learned a lot from the workshop, and it changed my thinking about tests as documentation. I’m personally thinking more about the “test the tests” idea than using tests as project documentation right now.

Testing and Numbers

Michael Bolton said this about numbers on testing projects:

In my experience, the trusted people don’t earn their trust by presenting numbers; they earn trust by preventing and mitigating bad outcomes, and by creating and maintaining good ones.

I agree. Lately, I’ve been thinking about how we report numbers on testing projects. I was recently in a meeting of Software Quality Assurance professionals, and a phrase kept coming up that bothered me: “What percent complete are you on your project?” I think they meant the percentage of test cases exercised so far on the development project. I don’t feel I can know what “percent complete” of test cases I am on a project, so I’m uncomfortable giving out a number such as “90% complete”. How can I know what 100% of all test cases on a project would be? Cem Kaner’s paper The Impossibility of Complete Testing shows us how vast the space of possible tests on a project is.
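
A toy calculation (the figures here are invented) shows where such a number breaks down:

    450 executed / 500 known test cases        = "90% complete"
    450 executed / 5,000 worthwhile test cases = 9% complete

The numerator is easy to count; it is the denominator that nobody knows, and the reported percentage is only as honest as that guess.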

Every project I’ve been on that claimed 100% test coverage had a major bug discovered by a customer in the field that required a patch. At best, we had covered all but one of the possible test cases. The test case that would have found the customer’s bug was not in our set, so how could we claim 100% completion? And how many more were we missing?

Reporting numbers like this is dangerous in that it can create a false sense of security for testers and project stakeholders alike. If we as testers make measurement claims without considering the complexity of measurement, we had better be prepared to lose credibility when bugs are found after we report a high “percent complete” number prior to shipping. Worse still, if testers feel they have completed 90% of all possible tests on a project, the relentless pursuit of knowledge and test-idea generation is easily replaced by apathy.

Cem Kaner points out many variables in measuring testing efforts in this paper: Measurement Issues and Software Testing. Accurate, meaningful measurement of testing activities is not a simple thing, so why the propensity for providing simple numbers?

I look at a testing project as a statistical problem. How many test cases could this project hold if I knew its bounds? Since I don’t usually know the bounds of the entire project, it is difficult to do an accurate statistical analysis using formulas. Instead, I estimate based on what I know about the project now, and use heuristics to help deal with the vast number of possible tests that would need to be covered to get a meaningful percentage. As the project progresses, I learn more about it, and use risk-based techniques to try to mitigate the risk to the customer. I can’t know all the possible test cases at any given time. Of the test cases I know of right now, we may have completed a certain percentage, but there may be many important ones that I, or the testing team as a whole, haven’t thought of. That is why I don’t like to quote “percent complete” numbers without providing a context, and even then I don’t present numbers alone.

The Software Quality Assurance school of thought seems numbers-obsessed these days. I am interested in accurate numbers, but I don’t think we have enough information on testing projects to justify many of the numbers we have conditioned project stakeholders to rely on. Numbers are only part of the picture; we need to realize that positive project outcomes are what really matter to project stakeholders.

Numbers without a context and careful analysis and thought can give project stakeholders a false sense of security. This brings me back to Michael Bolton’s thought: what is more important to a stakeholder? A number, or the opinion of a competent professional? In my experience, the latter outweighs the former. Trust is built by delivering on your word, and helping stakeholders realize the results they need. Numbers may be useful when helping provide information to project stakeholders, but we need to be careful how we use them.