Category Archives: test automation

Watir Release

The Web Testing with Ruby group released version 1.0 of the Watir tool today. Check it out here: Get Watir

Download the latest release, which is a zip file at the top of the watir package list. (At the time of this posting, the latest version is 1.0.2.) Open the User Guide for installation instructions, and check out the examples to see how you can use the tool.

You will need Ruby if you don’t have it installed. Version 1.8.2-14 Final is a good candidate to use, or 1.8.1-12 if you prefer to be more on the trailing edge.

For those who may not be familiar with the tool, it allows for testing web applications at the web browser level. The project utilizes testable interfaces in web browsers to facilitate test automation. Currently it only supports Internet Explorer, but plans are underway to support other browsers. It is also only available on Windows.
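
To give a sense of what a Watir script looks like, here is a small sketch along the lines of the Google search example later in these archives. It assumes the classic Watir style (Watir::IE, goto, text_field, button); the exact method names may differ in the 1.0 release, so treat this as a sketch and check the User Guide and the bundled examples for the real API.

require 'watir'   # Watir drives Internet Explorer through its COM interface

ie = Watir::IE.new                         # start a new IE session
ie.goto('http://www.google.com')           # navigate to the page under test
ie.text_field(:name, 'q').set('pickaxe')   # fill in the search box by its name attribute
ie.button(:name, 'btnG').click             # press the search button

From there, the script can make assertions about the page, or simply leave the browser open for a tester to carry on by hand.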

Green Bar in a Ruby IDE!

The Ruby Development Tools (RDT) project team, who develop a Ruby plugin for Eclipse, have added some great features in their latest release. Most important (to me) is Test::Unit support, along with other additions such as code completion based on configurable templates and a regular expression plugin.

I’ve been using RDT in Eclipse for a few months, and with this latest release, I’m very pleased to finally have a green bar in my Ruby development. Or more often in my case, a red bar.

Thanks to Glenn Parker for letting me know about this release.

Gerard Meszaros on Computer Assisted Testing

At last night’s Extreme Tuesday Club meeting, Gerard Meszaros had a great way of describing what I do using automated scripts with Ruby combined with exploratory testing. (See my blog post about this here: Test Automation as a Testing Tool.)

As I was describing using a Ruby script to automate some parts of the application I’m testing, and then taking over manually with exploratory testing, Gerard said:

Sounds like you are using your Ruby program to set up the test fixture that you will then use as a starting point for exploratory testing.

I hadn’t thought of what I was doing in this way before, and I like the idea. It is a great way to succinctly explain how to complement the strengths of a testing tool and a brain-engaged tester.
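
To make the idea concrete, a rough Ruby sketch of such a fixture-setup script is below. Everything specific to the application (the URL, the field names, the credentials) is a hypothetical placeholder; the point is simply that the script drives IE through the repetitive setup steps and then stops, leaving the browser in a known state for brain-engaged exploratory testing.

require 'win32ole'

# Drive IE through the tedious setup steps, then hand control to the tester.
ie = WIN32OLE.new('InternetExplorer.Application')
ie.visible = true
ie.navigate('http://testserver/myapp/login')              # hypothetical application URL
sleep 0.5 while ie.busy || ie.readyState != 4             # crude wait for the page to load
ie.document.all['username'].value = 'testuser'            # hypothetical form field names
ie.document.all['password'].value = 'secret'
ie.document.all['loginButton'].click
sleep 0.5 while ie.busy || ie.readyState != 4
# ...further navigation to reach the screen of interest...
# The script ends here; the tester takes over manually in the open browser.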

Check out Gerard’s Patterns of xUnit Test Automation site for more of his thoughts on testing.

“Test” and “generalist” are vague words

Brian Marick has posted his Methodology Work Is Ontology Work paper. As I was reading it, I came across this part on page 3 which puts into words what I have been thinking about for a while:

… there are two factions. One is the “conventional” testers, to whom testing is essentially about finding bugs. The other is the Agile programmers, to whom testing is essentially not about bugs. Rather, it’s about providing examples that guide the programming process.

Brian uses this example as an ontology conflict, but has provided me with a springboard to talk about vague words. Not only conflicts but also vague terms can cause breakdowns in communication, which can be frustrating to the parties involved.

Tests

When I was studying philosophy at university, we talked a lot about vague words, which are words that can have more than one meaning. We were taught to state assumptions before crafting an argument to drive out any vague interpretations. Some work has been done in this area with the word “testing” on agile projects. Brian Marick has talked about using the term “checked examples”, and others have grappled with this as well.

“Test” is a term that seems to work in the agile development context, so it has stuck. Attempts at changing the terminology haven’t worked. For those of us who have testing backgrounds and experience on agile teams, we automatically determine the meaning of the term based on the context. If I’m talking to developers on an agile team and they use the word “test”, it often means the example they wrote during TDD to develop software. This and other tests are automated and run constantly as a safety net or “change detectors” when developing software. If I’m talking to testers, a test is something that is developed to find defects in the software, or in the case of some regression tests, to show that defects aren’t there. There are many techniques and many different kinds of tests in this context.
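
As a contrived illustration of the first sense of the word, a developer's “test” on a Ruby project might look like the Test::Unit example below: an executable example written to guide the design, then run constantly as a change detector. (The Cart class is made up for the illustration.)

require 'test/unit'

# A tiny class being developed test-first.
class Cart
  def initialize
    @items = {}
  end
  def add_item(name, price)
    @items[name] = price
  end
  def total
    @items.values.inject(0) { |sum, price| sum + price }
  end
end

# The developer's "test": an executable example of how the code should behave.
class CartTest < Test::Unit::TestCase
  def test_total_is_the_sum_of_item_prices
    cart = Cart.new
    cart.add_item('book', 25.00)
    cart.add_item('pen', 2.50)
    assert_equal(27.50, cart.total)
  end
end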

Generalists

Another vague term that can cause communication breakdowns is “generalist”. An expression that comes up a lot in the agile community is that agile projects prefer generalists to specialists. What does that mean in that context? Often, when I talk with developers on agile teams, I get the impression that they would prefer working with a tester who writes automated tests and possibly even contributes production code. As James Bach has pointed out to me, what they are describing would, to a tester, be “…an automated testing specialist, not a project generalist.” Sometimes independent testers express confusion to me when they get the impression that on some agile teams testing specialists need not apply, and that the team needs to be made up of generalists. A tester may look at the term “generalist” differently than a developer does. To a tester, a generalist may do some programming, testing, documentation, and work with the customer – a little bit of everything. A tester may feel that, given their very dilettante nature on projects, their role by definition truly is a generalist one. Again, we’re using the same word, but it can mean different things to different people.

For those who have software testing backgrounds, working at the intersection of “conventional” testing and agile development is challenging. This is a challenge I enjoy, but I find that sometimes testers and developers are using the same words and talking about two different things. Testers may be initially drawn to the language of agile developers, only to be confused when they feel the developers expect them to provide a different service than they are used to. Agile developers may initially welcome testers because they value the expertise they hope to gain by collaborating, but may find that the independent tester knows little about xUnit, green bars and FIT. They may be saying the same words, but talking about completely different things. It can be especially frustrating on a project when people don’t notice this is happening.

Other vague words?

I’m trying to make sense of this intersection and share ideas. If you have other vague words you’ve come across in this intersection of conventional testers and agile developers, please drop me a line.

Automating Tests at the Browser Layer

With the rise in popularity of agile development, there is much work being done with various kinds of testing in the software development process. Developers as well as testers are looking at creative solutions for test automation. With the popularity of Open Source xUnit test framework tools such as JUnit, NUnit, HTTPUnit, JWebUnit and others, testing when developing software can be fun. Getting a “green bar” (all automated unit tests in the harness passed) has become a game, and folks are getting more creative about bringing these test tools to new areas of applications.

One area of web applications that is difficult to test is the browser layer. We can easily use JUnit or NUnit at the code level, we can create a testable interface and some fixtures for FIT or FITnesse to drive the code with tabular data, and we can run tests at the HTTP layer using HTTPUnit or JWebUnit. Where do we go from there, particularly in a web application that relies on JavaScript and CSS?

Historically, the well-known “capture/replay” testing tools have owned this market. These are an option, but do have some drawbacks. Following the home brew test automation vein that many agile development projects use, there is another option using a scripting language and Internet Explorer.

IE can be controlled using its COM interface (also referred to as OLE or ActiveX), which allows a user to access the IE DOM. This means that all users of IE have an API that is tested, published, and quite stable. In short, the vendor supplies us with a testable interface that ships with the product and is maintained by the vendor. This is more stable than an interface built at run-time against the objects in the GUI, and we can use any kind of language we want to drive the API. I’m part of a group that prefers the scripting language Ruby.

A Simple Example

How does it work? We can find the methods for the IE COM interface on the MSDN web site, and use these to create tests. I’ll provide a simple example using Ruby. If you have Ruby installed on your machine, open up its command interpreter, the Interactive Ruby Shell (irb). At the prompt, enter the following (after Brian Marick’s example in Bypassing the GUI, what we type is in bold, while the response from the interpreter is in regular font).

irb> require 'win32ole'
=> true
irb> ie = WIN32OLE.new('InternetExplorer.Application')
=> #<WIN32OLE:0x2b9caa8>
irb> ie.visible = true
# You should see a new Internet Explorer window appear. Now let's direct our browser to Google:
irb> ie.navigate("http://www.google.com")
=> nil
# Now that we are on the Google main page, let's try a search:
irb> ie.document.all["q"].value = "pickaxe"
=> "pickaxe"
irb> ie.document.all["btnG"].click
=> nil

# You should now see search results returned, with “Programming Ruby” high up on the results page. If you click that link, you will be taken to the site for the excellent “Programming Ruby” book, known as the “pickaxe” book.

Where do we go from here?

Driving tests this way through the Interactive Ruby Shell may look a little cryptic, and the tests aren’t in a test framework. However, it shows us we can develop tests using those methods, and is a useful tool for computer-assisted exploratory testing, or for trying out new script ideas.
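
Once a sequence like this proves useful, it can be dropped into a test framework so that it runs unattended and reports a pass or fail. Here is a rough Test::Unit sketch of the same Google search; the assertion about “Programming Ruby” appearing in the results is only an assumption about what the page contains, and the crude sleep-based waits could certainly be improved.

require 'test/unit'
require 'win32ole'

class GoogleSearchTest < Test::Unit::TestCase
  def test_pickaxe_search_finds_programming_ruby
    ie = WIN32OLE.new('InternetExplorer.Application')
    ie.visible = true
    ie.navigate('http://www.google.com')
    sleep 0.5 while ie.busy || ie.readyState != 4      # wait for the page to load
    ie.document.all['q'].value = 'pickaxe'
    ie.document.all['btnG'].click
    sleep 0.5 while ie.busy || ie.readyState != 4      # wait for the results page
    assert_match(/Programming Ruby/, ie.document.body.innerText)
  ensure
    ie.quit if ie                                      # close the browser even if the test fails
  end
end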

This approach for testing web applications was pioneered by Chris Morris, and taught by Bret Pettichord. Building from both of those sources, Paul Rogers has developed a sophisticated library of Ruby methods for web application testing. An Open Source development group has grown up around this method of testing, first known as “WTR” (Web Testing with Ruby). Bret Pettichord and Paul Rogers are spearheading the latest effort, known as WATIR. Check out the WATIR details here, and the RubyForge project here. If this project interests you, join the mailing list and ask about contributing.

*Thanks to Paul Rogers for his review of and corrections to this post.

Elisabeth Hendrickson on Test Automation

Elisabeth Hendrickson has two excellent posts on test automation that are worth the read: Do We Need Specialized Test Automation Tools? and Snakeoil?

A couple of related favorites:

If you are thinking about test automation, you should read James Bach’s “Test Automation Snake Oil” article.

For more info on test automation tool alternatives, check out Bret Pettichord’s “Home Brew Test Automation” slides.

Test Automation as a Testing Tool

When we think of GUI-level test automation, we usually think of taking some sort of test case, designing a script with a language like Perl or Ruby (or with a vendor testing tool), and developing the test case programmatically from beginning to end. The prevailing view is often that there is some sort of manual test case that we need to repeat, so we automate it in its entirety with a tool. This is a shallow view of test automation, as James Bach, Bret Pettichord, Cem Kaner, Brian Marick and others have pointed out. I prefer the term “Computer Assisted Testing” (I believe coined by Cem Kaner) over “test automation” for this reason.

While I was developing automated tests, an interesting side effect emerged. While debugging script code, I would run a portion of a test case many times. This would be a sequence of events, not an entire test case. I started to notice behavior changes from build to build as I watched a series of steps play back on my screen. When I investigated, I found bugs that I might not have discovered doing pure manual testing or running unattended automated tests. I started keeping snippets of scripts around to aid in exploratory testing activities, and found them very useful as another testing tool.

There are some benefits to automating a sequence of steps and blending this type of testing with manual testing. For example, if a test case has a large number of steps required to get to the area we want to focus testing on, using a script to automate that process helps us get there faster, frees us from distractions and helps us focus on the feature or unit under test. Humans get tired repeating tasks over and over and are prone to error. If precision is needed, computers can be programmed to help us. We can also test the software in a different way using a tool. We can easily control and vary test inputs, and measure what was different from previous test runs when bugs are discovered.
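
For example, once the navigation is scripted, varying the inputs from run to run is just a loop. In the sketch below the URL, field and button names are hypothetical placeholders; the interesting part is that each run records something simple that can be compared against the previous build.

require 'win32ole'

# Replay the same sequence with a variety of inputs and note what changes.
inputs = ['pickaxe', 'a' * 256, '<b>markup</b>', '']

ie = WIN32OLE.new('InternetExplorer.Application')
ie.visible = true
inputs.each do |value|
  ie.navigate('http://testserver/myapp/search')         # hypothetical page under test
  sleep 0.5 while ie.busy || ie.readyState != 4
  ie.document.all['query'].value = value                # hypothetical field and button names
  ie.document.all['go'].click
  sleep 0.5 while ie.busy || ie.readyState != 4
  puts "#{value.inspect}: #{ie.document.title}"         # something simple to compare between runs
end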

There are certain tasks that a human can do much better than a computer when it comes to testing. As James Bach says, testing is an interactive, cognitive process. Human reasoning and inference simply cannot be programmed into a test script, so the “test automation” notion will not replace a tester. Blending the investigative skills and natural curiosity of a tester with a tool that helps them discover bugs is a great way to focus some test automation efforts.

If you think of your automation efforts as “Computer Assisted Testing”, many more testing possibilities will come to mind. You just might harness technology in a more effective way, and it will show in your testing efforts.

I currently use Ruby as my Computer Assisted Testing tool, with the WTR Controller as my web application testing tool of choice. (cf. A Testing Win Using Ruby)

James Bach on Test Automation

James Bach has an excellent post on his blog about test automation with developer and tester collaboration. Be sure to check out his presentation on Agile Test Automation. It is well worth the read. A collaborative approach in test development is important if the tests are to be useful to the entire team. Tests should not just be useful to the testing team, or specialists who know how to use a proprietary testing tool.

Borrowing from Bret Pettichord’s article “Testers and Developers Think Differently”, pairing good developers who are effective problem solvers and software creators with testers who are effective problem presenters and test idea generators can be a powerful combination. In my own experience working with developers in this way, solutions that the testers need can be quickly developed to meet the unique needs of a project or testing department.