MythBusters and Software Testing

I enjoy the Discovery Channel show MythBusters. For those who may not be familiar with it, the show is hosted by two film-industry techies, Adam Savage and Jamie Hyneman, who test popular urban legends to see whether they are plausible. It’s almost like an extreme Snopes. They take a popular urban legend, design an experiment to recreate it, and test to see whether they can disprove it.

What I like about the show is the process they follow when testing whether a myth is plausible. Once a myth has been selected, they treat it like a theory (or hypothesis) that must survive attempts to disprove it. They design and build tools to simulate a particular environment, and use observation to see whether the myth holds up. They usually improvise to create the conditions needed to test the myth: they may not have the exact tools or objects readily at hand, so they design objects to get the desired effect, and measure key elements to ensure they will help create the conditions that are needed.

The process the MythBusters follow isn’t perfect, and the simulations they create are not necessarily identical to the original objects in the myth, but the environments and tests they create and execute (as software testers often find on their projects) are generally good enough.

I think they would make great software testers. What they do reminds me of software testing. We often have to build specialized tools to simulate appropriate conditions, and we use observation and measurement tools to test whether a theory can reasonably be shown to be false under certain conditions. If we can demonstrate that the theory is falsifiable,1 we gather the data from our observations and show how the theory failed under those conditions. For software testers, this data is usually gathered into a bug report or bug story card. On the show, they declare a myth plausible or “busted”.

What’s more, the MythBusters have that certain testing spirit that asks, “What would happen if…?”, which compels them to push things to the limit and quite literally blow things up. Testers have fun pushing something to the brink, and then giving it that one last shove over the edge where the server dies or the application runs out of memory. Not only are we having a bit of fun to satisfy curiosity, we are also gathering important data about what happens when something is pushed to extreme limits. Beyond the enjoyment of watching things break, we add that observed behavior to our catalog of behavior. When we see it again, we can draw on those patterns and predict what might happen. Sometimes we can spot a bug waiting to happen based on observing what happens before the application blows up.

A related example is a development theory we might be testing, such as: “the web application can handle 30 concurrent users”. We may have only 8 people on our development team, so we can’t test 30 concurrent users with people alone. Instead, like the MythBusters, we use, develop, or modify a tool to simulate this. We might develop a test case that threads several sessions on one machine to simulate the 30 concurrent users, or use a test tool designed for this purpose. If the application fails to hold up under the simulated conditions in a repeatable fashion, all other factors being equal, we have evidence that the theory is falsifiable. If the reverse occurs, and the application holds up perfectly fine, we know the theory is not likely to be false under the conditions in which we ran the test. We may alter other variables to design new test cases, and observe what happens. As testers, we usually record only the results that demonstrate falsifiability, which is a bit different from the MythBusters, and from others who follow the general pattern of the Scientific Method.
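
To make this concrete, here is a minimal sketch of that kind of simulation in Python, using only the standard library. The target URL, the number of requests per “user”, and the pass/fail check are assumptions made up for the example rather than details from any real project, and a dedicated load-testing tool would usually do this job better; the point is simply that one machine can stand in for 30 people.

    import threading
    import urllib.request
    import urllib.error

    TARGET_URL = "http://localhost:8080/"  # hypothetical application under test
    CONCURRENT_USERS = 30                  # the number claimed in the theory
    REQUESTS_PER_USER = 5                  # rough stand-in for a user session

    results = []                           # (user_id, successes, failures)
    results_lock = threading.Lock()

    def simulate_user(user_id):
        """Issue a few requests in sequence, counting successes and failures."""
        successes = failures = 0
        for _ in range(REQUESTS_PER_USER):
            try:
                # urlopen raises URLError/HTTPError on failures and timeouts,
                # so a normal return counts as a successful request.
                with urllib.request.urlopen(TARGET_URL, timeout=10):
                    successes += 1
            except (urllib.error.URLError, OSError):
                failures += 1
        with results_lock:
            results.append((user_id, successes, failures))

    # Start all 30 simulated users at roughly the same time, then wait for them.
    threads = [threading.Thread(target=simulate_user, args=(i,))
               for i in range(CONCURRENT_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    failed_requests = sum(f for _, _, f in results)
    print(f"{len(results)} simulated users finished; {failed_requests} requests failed")

If failures show up repeatably under these conditions, that is the evidence we would capture in a bug report; if everything holds up, we have not shown the theory to be false under the conditions we tested.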

1 Check out Chapter 2 in Lessons Learned in Software Testing for more on testing and inference theory.