TDD in Test Automation Projects

I’ve written before about pairing with developers during Test-Driven Development (TDD), and I’ve been fortunate to work with very talented TDD developers who are apt to teach. I’ve learned a lot, and decided to try TDD myself in a programming role. Recently, I’ve taken off my tester hat and started doing TDD on test automation projects. I’m not completely there yet – I often need to do an architectural spike first when I’m developing something new. Once I have figured out a general design, or have learned how a particular library works, I throw away the spike code and start development by writing a test. I then write just enough code to get the test to pass, write a new test, add new code, and repeat until the design is where I need it to be.
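
For readers unfamiliar with the rhythm, here is a minimal sketch of what one red-green cycle might look like in Ruby’s test/unit (the style I mention later in this post). The CsvReader class and its behaviour are hypothetical, invented purely for illustration: the failing test comes first, then just enough code to make it pass.

    require 'test/unit'

    # Hypothetical first test for a CsvReader utility that does not exist
    # yet. Running it fails ("red"); then we write just enough of CsvReader
    # to make it pass ("green"), and repeat with the next test.
    class TestCsvReader < Test::Unit::TestCase
      def test_reads_a_single_row
        reader = CsvReader.new("name,role\nmary,tester")
        assert_equal({ "name" => "mary", "role" => "tester" }, reader.rows.first)
      end
    end

    # The minimal code written to make the test above pass:
    class CsvReader
      attr_reader :rows

      def initialize(text)
        header, *lines = text.split("\n").map { |line| line.split(",") }
        @rows = lines.map { |values| Hash[header.zip(values)] }
      end
    end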

So what does this gain us in test automation projects? I’m loath to have test cases that are so complex that they themselves require testing. If our test cases are so complex that they are causing problems themselves, that’s a test design smell. However, there are other kinds of software in our automation projects than just the test cases. In automation frameworks, we need special libraries, adaptors or other ways to access the application under test, and all sorts of utilities that help us with automation. Since these utilities are still software, they are subject to the same problems as any other software development effort. There is little more frustrating than buggy test code, so we need to do what we can to make it as reliable as possible.

In my own development, I’m finding a lot of benefits to doing TDD. My designs improve, because if they aren’t testable, I know there is a problem. When I make my code testable, it suddenly becomes more usable and more reliable. Too often testers aren’t given time to refactor test code. I’ve found that refactoring usually starts when technical debt in a test harness and custom test libraries starts to interfere with productivity. When there are no unit tests for custom test library code, a change can take a few minutes to make and several hours to test. Having a safety net of unit tests helps immensely with refactoring. You can refactor your test code with greater confidence, and when it’s done consistently with automated unit tests, with much greater speed. It just becomes a normal part of development.

Recently, I’ve found several bugs in my test library code in the elaborative phase of TDD that I didn’t find testing manually. The opposite is also true: I found a couple of bugs testing manually that my unit tests didn’t uncover. The TDD-discovered bugs required changes that improved my design immensely. The tests guided the design into something different (and better) than what I had in my head when I started. However, after a couple of days of only running the unit tests to satisfy the “green bar”, I found a big hole that was only uncovered by using the library the way an end user would. A balance of testing techniques is helpful. I have also adopted a practice of pairing a positive test with a negative test, a technique I learned from John Kordyback. If I do an assert_equal, for example, I also do an assert_not_equal (using test::unit style). This has really come in handy at times when one assertion would work but the other would fail.
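
Here is a minimal sketch of that paired-assertion habit, again in test::unit style. The normalize_name method is a hypothetical stand-in for a small piece of test library code:

    require 'test/unit'

    class TestNameNormalizer < Test::Unit::TestCase
      # Hypothetical helper under test: trims whitespace and upcases.
      def normalize_name(name)
        name.strip.upcase
      end

      def test_normalize_name
        # Positive test: the value we expect to get back.
        assert_equal "MARY", normalize_name("  mary ")
        # Paired negative test: a value we must NOT get back. If the
        # helper silently stopped stripping or upcasing, one of these
        # two assertions would still catch it.
        assert_not_equal "  mary ", normalize_name("  mary ")
      end
    end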

TDD may not be for everyone, but I find it is a nice complement to other kinds of testing we can do. For my own work, it seems to suit the way I think about development. Even if the test cases I develop while programming are trivial and small, there is strength in numbers, and writing and using them helps assuage that tester voice in the back of my head that comes out when I’m programming. I encourage other conventional testers who work on automation projects to give it a try. You may find that your designs improve, and that you have a safety net of automated tests to give you more confidence in your automation code, especially when you need to enhance it down the road. At the very least, it helps you gain an appreciation for TDD and communicate better when working with TDD folks on a project.

Exploratory Testing using Personas

I get asked a lot about Exploratory Testing on agile projects. In my position paper for the Canadian Agile Network Workshop last month, I described Exploratory Testing using personas. I’m reposting it here to share some of my own experience with Exploratory Testing on agile projects.

Usability Testing: Exploratory Testing using Personas

Usability tests are almost impossible to automate. Part of the reason might be that usability is a very subjective thing. Another reason is that automated tests do not run within a business context. Program usability can be difficult to test at all, but working regularly with many end users can help. We usually don’t have the luxury of many users testing full-time on projects; more often there is a single customer representative. Once we get information from our test users, how do we continue testing when they aren’t around? One possible method is Exploratory Testing with personas.

I’ve noticed a pattern on some agile teams. At first the customer and those testing have some usability concerns. After a while, the usability issues seem to go away. Is this because usability has improved, or because the team has become too close to the project to test objectively? On one project, the sponsor rotated the customer representative due to scheduling issues. We were concerned at first, but a benefit emerged: whenever a new customer was brought into the team, they found usability issues when they started testing. Often, these were the same concerns the testers and the customer had raised earlier on – contentious issues the team hadn’t been able to resolve.

On another project using XP and Scrum, a usability consultant was brought in. They did some prototyping and brought in a group of end users to try out their ideas. Any areas the users struggled with were addressed in the prototypes. The users were also asked a variety of questions about how they used the software and about their level of computer skills, and we used the answers to create user profiles, or personas. As the developers added more functionality in each iteration, testers simulated the absent end users by Exploratory Testing with personas to test the application for usability more effectively. The team wanted to automate these tests, but could not.

Exploratory Testing was much more effective at providing rapid feedback because it relies on skilled, brain-engaged testing within a context. The personas helped provide knowledge of the business context, and of how end users interacted with the program, in their absence. The customer representative working on the team also took part in these tests.

Tension on usability issues seemed to be reduced as well. These issues were no longer mere opinions; the team had something quantifiable to back up usability concerns. Instead of offering opinions that differed from the developers’, testers could say: “when testing with the persona ‘Mary’, we found this issue.” This proved effective at reducing usability debates. The team compromised: most issues were addressed, while others were not. Three contentious issues were still outstanding when the project had completed the UI changes. We scheduled time to revisit end users and had some surprising results.

Each end user struggled with the three contentious usability issues the testers had discovered, which justified the approach, but there were three more problem areas we had completely missed. We realized that the users were using the software in ways we hadn’t intended. There had been a flaw in our data gathering: our first sample of users had tested in our office, not their own. This time, we had them work with the software at their own desks, within their business context. Lesson learned: gather customer data while they are using the software in their own work environment.

On this project, Exploratory Testing with personas proved to be an effective way to compensate for limited full-time end user testing. It also helped to provide rapid feedback in an area that automated tests couldn’t address. It didn’t replace customer input, but worked well as a complementary technique alongside automation and customer acceptance testing. It helped to retain the end users’ voice in the usability of the product throughout development, rather than sporadically, and helped to combat groupthink.

TDD – A Fifth School of Software Testing

Bret Pettichord has a presentation on the Four Schools of Software Testing. He covers the schools of testing identified by Cem Kaner, James Bach and himself: Analytic, Factory, Quality and Context-Driven. (For a brief time, Bret renamed the “Factory” school the “Routine” school, but has since reverted to “Factory”.) Breaking popular testing ideas down into schools like this is thought-provoking, and just reading his presentation notes should get testers thinking about what they do on projects. “What school do I identify with?” a reader might ask.

Cem Kaner has identified Test-Driven Development as another school of software testing. In his paper The Ongoing Revolution in Software Testing, he describes TDD as a big force in software development over the past decade. On the context-driven testing mailing list, he explained why he identifies TDD as a testing school:


“…TDD isn’t completely inside my [schools of software testing] paradigm. At [this] point, I ask:

(a) Is it coherent enough, popular enough, and looked to for guidance–a foundation for teaching about testing–enough to be called a school?

I think this is a clear yes. The fact that courses in TDD exist and are fashionable demonstrates that it’s a source of knowledge, inspiration, and guidance (which is at the essence of what I think of in a school). And there’s clearly an active group of collaborators who jointly develop and push the ideas.

(b) Is it testing?

It’s not inconsistent with my definition of testing, and a lot of people who advocate it think it is testing. OK, I accept it as testing.

Is it a unified field theory of testing? No. But neither is anything I’ve done. TDD focuses on problems that many of us haven’t focused on before, and sheds remarkably little light (or interest) on problems that many of us take very seriously. To the extent that it thinks its problems are the primary interesting problems, it asserts itself as a paradigm–elucidates some key issues and puts on blinders with respect to the rest. But I have my own blinders, so the mere fact that TDD is blind to some of the problems I find most interesting can’t be a basis for invalidating it as a school of testing. It can only be a basis for rejecting it as the driving philosophy of my school of testing.

Along with techniques, TDD advocates often espouse a set of values, a set of broader ideas about development, an attitude about people and how they should use their time and how they should be held accountable, an attitude about the use of tools, about the types of skills people doing this work should possess and how they should/could grow them–for sure, these are fuzzy sets, not everybody adopts all of them. It’s all of this stuff that separates schools — it’s also this type of stuff that gives context/interpretation for the use of the tools/techniques and the directions of advancement of the use and enhancement of these tools/techniques.

I think these illustrate an approach to testing that includes some deep and careful thinking, a rich set of practices, and attention to a lot of basic issues. I think I see different answers than I would expect from the factory school. I think it is much more brain-engaged than factory school, and much more about the cognition of the test creator than about the detail of procedures to be followed by the test runner.

So, I see a thoughtful, coherent view. A lot of practitioners. A body of work and advocates that guide thinking, practice and the teaching of thinking about how to envision, research, and practice testing. Looks like a school to me. And a welcome one.”

(posted with permission)


I agree with what he has said. In my own forays into pair testing with test-driven developers and learning how to do TDD from skilled practitioners, I have met people who are really dedicated to testing. They tend to seek out conventional testers who are also passionate about testing. This intersection of schools can sometimes have some confusing results at first.

I’ve witnessed confusion when conventional testers and TDD testers are working together. That prompted a blog post on vague words used in testing. Those I would characterize as Quality school testers often feel responsible for the TDD process and unit tests even though they aren’t programmers and haven’t tried TDD themselves. I have heard of TDD testers and Quality school testers attending the same meeting, using the same language, agreeing to move forward together, and then independently moving in completely different directions. In one case, a TDD developer and I spent several sessions where I “translated” the language that the Quality school folks were using. He started changing his use of the shared terminology by using synonyms to improve communication and reduce confusion. They managed to work things out reasonably well.

Is it helpful to draw distinctions between testing schools like this? Here is one area where it certainly helps: communicating testing concepts. When a word can mean different things depending on how a practitioner defines their role, it helps to understand where they are coming from. Cem Kaner has provided further insight on TDD, and we can use it to enhance our communication and collaboration with developers who are honing their own testing craft.

Fast Failures

I was talking with Mark McSweeny the other day about a test design I was thinking about. It involved attempting to automate a process that currently relies on a lot of manual visual inspection. The tricky part of automating this kind of testing is the potential for variation in the items under test. Variation is hard for a computer to handle, but a human who understands the context can instantly spot it and see whether it is permissible or whether it constitutes a test failure. I was asking Mark’s advice on how to deal with the variation, and mentioned the tools I was thinking of using. Mark pointed out that I was overcomplicating the test design by thinking about what data structures, xUnit tools and libraries to use, and not thinking enough about the human tester.

He described “fast failure” tests he writes that are pure change detectors, designed to run quickly. While they provide feedback very quickly, they don’t provide much information other than “Pass” or “Fail”. When a test fails, the human steps in and does a manual inspection. In many cases, the human can tell at a glance whether the failure is a bug. If the change is OK and recurring, the test gets updated. If it isn’t, the human recognizes the bug and logs it, or just adds a new unit test and fixes the problem in their own code.
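
As a rough illustration (this is my sketch, not Mark’s actual tests), a fast failure test might be nothing more than a checksum comparison against a known-good copy of the output. The file paths here are made up for the example:

    require 'test/unit'
    require 'digest/md5'

    # Sketch of a fast failure change detector, assuming a generated report
    # lands in output/report.txt and a known-good copy is kept in
    # golden/report.txt (both paths are illustrative). The test answers only
    # "Pass" or "Fail"; a failure hands the inspection back to the human.
    class TestReportUnchanged < Test::Unit::TestCase
      def test_report_matches_golden_copy
        current = Digest::MD5.file("output/report.txt").hexdigest
        golden  = Digest::MD5.file("golden/report.txt").hexdigest
        assert_equal golden, current, "report.txt changed -- inspect it manually"
      end
    end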

Fast failure tests have a lot of potential as a testing tool. What Mark described is something I’ve written about before: Computer Assisted Testing. After all, I’m part of a school that believes testing is an intellectual craft, and that skilled tester activities cannot be automated. An issue many have heard me rant about before is that until we can program inference, and have some sort of intelligent, thinking bots doing testing, we can’t automate all the tests a human tester can do. Humans handle variation easily, and respond to change. Fast failure tests yield the best of both worlds: the computer does what it is good at, and the human exploits the computer to help them concentrate on what they are good at. “So why didn’t you think of this solution, Jonathan?” you might ask. I guess I got caught up in the technology instead of thinking about a solution that harnesses both the computer and the skills of a human. I forgot to practice what I preach. This also demonstrates how brilliant people like Mark design simple, elegant solutions, and teach me something every time I talk to them.

There are some potential benefits to fast failure tests, even if they don’t completely emulate what a human does. For example, say a tester must manually inspect ten items every build. We develop some fast failure tests that do a rough approximation of that inspection, and now run these automated tests every build. Say that two of those ten items report failures because of a variation. The tester inspects the two items manually and, using their judgement and skill, realizes that these are legal variations. Also say that in one build out of five, the fast failure tests fail on one of the items because of a bug that can be reported. Instead of the burden being completely on the tester to inspect each item every build, they now have far fewer items to inspect. Even though they may get a couple of red herrings each build due to variation, the tests prove their worth by helping the tester quickly identify potential problems. This test design shows how a computer can help the tester work more efficiently.
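
A sketch of what that could look like in practice, using the same hypothetical golden-copy layout as the example above: a small script that fast-checks every item and surfaces only the failures for manual inspection.

    require 'digest/md5'

    # Fast-check all items in one pass and list only the ones that differ
    # from their golden copies, so the tester inspects two or three items
    # per build instead of all ten. Directory names are illustrative.
    items = Dir.glob("output/*.txt")
    failures = items.reject do |path|
      golden = File.join("golden", File.basename(path))
      File.exist?(golden) &&
        Digest::MD5.file(path).hexdigest == Digest::MD5.file(golden).hexdigest
    end

    if failures.empty?
      puts "All #{items.size} items match their golden copies."
    else
      puts "Inspect manually: #{failures.join(', ')}"
    end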

If there is a need for a more complex test that can provide more information about the failures, we can develop it over time. In the meantime, we have a solution that is good enough, and we can use it as a baseline for the test under development. However, a complex test automation solution can be dangerous. As Mark warned, test automation is software development, so it is just as prone to bugs, design problems, maintenance issues, etc., as any other software. A simple solution that requires some human intervention may be more efficient than a complex one that requires a lot of time spent in the automation code to keep it working.

I’m glad I have smart people like Mark around to talk to, who challenge my ideas and let me know when I’m off the mark.

Tim’s Comments on Software Testing and Scientific Research

Tim Van Tongeren commented on one of my recent posts, building on my thoughts on software testing and the philosophy of science. I like the parallel he draws between scripted versus exploratory testing and quantitative versus qualitative scientific research. When testing, what do we value more on a project? Tim says that this depends on project priorities.

He recently expanded on this topic, discussing similarities between qualitative research and exploratory testing.

Tim researches and writes about a discipline that can teach us a lot about software testing: the scientific process.

Testing Values

I was thinking about the agile manifesto, and this blatant ripoff came to mind. As a software tester, I’ve come to value the following:

  • bug advocacy over bug counts
  • testable software over exhaustive requirements docs
  • measuring product success over measuring process success
  • team collaboration over departmental independence

Point 1: Project or individual bug counts are meaningless unless the important bugs are getting fixed. There are useful bug-count-related measurements, provided they are used in the right context. However, bug counts themselves don’t have a direct impact on the customer. Frequently, testers are motivated much more by how many bugs they log than by how many important bugs they found, reported, advocated for, and pitched in to help get fixed before the product went out the door.

Point 2: We usually don’t sell requirements documents to our customers (we tend to sell software products), and these docs often provide a false sense of all that is testable. Given a choice, I’d rather test the software, finding requirements by interacting with customers and collaborating with the team, than follow requirements documents. At least then we can start providing feedback on the software. At best, requirements docs are an attempt to put tacit knowledge on paper. At worst, they are out of date and out of touch with what the customer wants. Planning tests only from requirements documents leaves us open to faults of omission.

Point 3: I find the obsession with processes in software development a bit puzzling, if not absurd. “But to have good software, we need to have a good process!” you say. Sure, but I fear we measure the wrong things when we look too much at the process. I’ve seen wonderful processes produce terrible products too many times. As a customer, I haven’t bought any software processes yet, but I do buy software products. I don’t think about processes at all as a consumer. I’ll take product excellence over “process excellence” any day. The product either works or doesn’t work as expected. If it doesn’t, I quietly move on and don’t do business with that company any more.

I have seen what I would call process zealotry, where teams were pressured not to talk about project failures because they “would cast a bad light” on the process that was used. I have seen this on “traditional” waterfall-inspired projects and, interestingly enough, in the agile world as well. If we have problems with the product, we should learn from the mistakes and strive to do better. Don’t cover up failures because you fear that your favorite process might get some bad press. Fix the product, and make the customer happy. If you don’t, customers will quietly move on and you will eventually be out of business.

Point 4: The “QA” line of thinking that advocates an independent testing team doesn’t always work well in my experience. Too often, the QA folks end up as the process police, at odds with everyone else, and not enough testing gets done. Software testing is a challenging intellectual exercise, and software programs are very complex. The more testing we can do, and the more we can collaborate to make that testing effective, the better. The entire team should be the Quality Assurance department. We succeed or fail as a team, and product quality, as well as adherence to development processes, is everyone’s responsibility.

Conventional Testers on Agile Projects – Getting Started Continued

Some of what you find out about agile methods may sound familiar. In fact, many development projects have adopted solutions that some agile methods employ. You may have already adjusted to some agile practices as a conventional tester without realizing it. For example, before the term “agile” was formally adopted, I was on more traditional projects that had some “agile” elements:

  • when I started as a tester, I spent a lot of my first year pair testing with developers
  • during the dot com bubble, we adopted an iterative life cycle with rapid releases at least every two weeks
  • one project required quick builds, so the team developed something very similar to a continuous integration build system with heavy test automation
  • developers I worked with had been doing refactoring since the early ’80s; they didn’t call it by that name, and they used checkpoints in their code instead of the xUnit tests that would be used now
  • in a formal waterfall project, we had a customer representative on the team, and did quick iterations in between the formal signoffs from phase to phase
  • one project adapted Open Source-inspired practices and rapid prototyping

These practices were adopted by pragmatic, product-focused companies who needed to get something done to please the customer. Many of these projects would not have considered themselves “agile” – they were just getting the job done. The difference between them and an agile development team is that agile methods form a complete methodology driven towards a certain goal, rather than a handful of practices a team has adjusted to improve what it is doing.

Other conventional testers tell me about projects they were on that were not agile, but did agile-like things. This shouldn’t be surprising. The iterative lifecycle has been around for many years (dating back at least to the 1940s). People have used a lot of methodologies within the iterative lifecycle without necessarily codifying them into a formal method, as some agile champions have. A lot of what agile methods talk about isn’t new. Jerry Weinberg has said that the methods employed on the Project Mercury team he was on in the early ’60s look indistinguishable from what is now known as Extreme Programming.1

Another familiar aspect of agile methods is the way projects are managed. Much of agile management theory draws heavily from the quality movement, lean manufacturing, and what some might call Theory Y management. Like the quality pundits of the past, many agile management writers are once again educating workers about the problems of Taylorism, or Theory X management.

What is new with agile methods are comprehensive methodology descriptions driven from experience. From these practices, disciplined design and development methodologies such as Test-Driven Development have improved rapidly. Most importantly, a shared language has emerged for practices like “unit testing”, “refactoring”, “continuous integration” and others – many of which might have been widely practiced but called different things. This shared language helps a community of practice share and improve ideas much more efficiently. Common goals are much more easily identified when everyone involved is using the same terminology. As a result, the needs of the community have been quickly addressed by tool makers, authors, consultants and practitioners.

This has several implications for conventional testers that require some adjustments:

  • a new vocabulary of practices, rituals, tools and roles
  • getting involved in testing from day one
  • testing in iterations which are often 2-4 weeks long
  • an absence of detailed, formalized requirements documents developed up front
  • requirements done by iteration in backlogs or on 3×5 story cards
  • often, a lack of a formal bug-tracking system
  • working knowledge of tools such as refactoring and TDD-based IDEs, xUnit automation and continuous integration build tools
  • a team focus over individual performance
  • developers who are obsessed with testing
  • working closely with the entire team in the same work area
  • not focusing on individual bug counts or lines of code
  • less emphasis on detailed test plans and scripted test cases
  • heavy emphasis on test automation using Open Source tools

Some of these changes may sound shocking to a conventional tester. Without a detailed requirements document, how can we test? Why would a team not have a bug tracking database? What about my comprehensive test plans and detailed manual regression test suites? Where are the expensive capture/replay GUI automation tools? How can we keep up with testing when the project is moving so quickly?

A good place to address some of these questions is: Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord.

We’ll address some of these challenges in this series, and look at examples of testing activities that conventional testers can engage in on agile projects.

1 Larman and Basili, “Iterative and Incremental Development: A Brief History”, 2003, p. 48.

Conventional Testers on Agile Projects – Getting Started

At this point, the conventional tester says that they can really identify with the values, understand some of the motivations behind agile methods and are ready to jump in. “How do I get started? What do I do?”

Testers Provide Feedback

I’ve talked about this before in the Testers Provide Feedback blog post.

A conventional tester starting out on an agile team should engage in testing activities that provide relevant feedback. It’s as simple as that.

I’m hard pressed to think of any activity that doesn’t tie into the tester-as-service role, ultimately helping the tester provide feedback. Which activity that is depends on what the project needs right now.

Testing is what I do to provide good feedback on any development project. What is relevant depends on what your goals are, and what the team needs. That feedback can take the form of risk assessments, bug reports, a thumbs up on a new story – all sorts of things.

To have confidence in that feedback, we can engage in many activities to gather information. Exploratory testing is one effective way to do this; another is to use automated tests. There are lots of ways we can gather information by inquiring, observing, and reporting useful information. What is key for me is to figure out what information the team needs at a particular time. What are some things that have worked well for you? Please let me know.

For me, a testing activity is useful to the extent that it helps me get the information I need to provide useful feedback to the rest of the team. Sometimes that involves working with a customer and helping identify risks. Sometimes it’s a status report on automated tests that I give to the team. It may involve manual testing on a bug hunt, or another testing mission where I need to go beyond automated tests. It might be real-time feedback while pair testing with a developer, or working with a customer to help them develop tests. The kind of feedback needed on a project guides what kind of testing activities I need to do.

Providing information is central. As James Bach says: “testing lights the way”. If I am not able to provide more feedback than the automated tests and the customer are already providing, then I need to evaluate whether I should be on that agile team at all. If a particular area is not being addressed well and the team needs more information, then I should focus my activities on that area, not slavishly on whatever role I think I should be filling.

Doing what needs to be done to help the team and the customer have confidence in the product is central. That means stepping out of comfort zones, learning new things and pitching in to help. This can be intimidating at first, but it helps me gather more information and learn what kinds of feedback the team needs. It’s a challenge, and those who enjoy challenges might identify with this way of thinking. Doing what needs to be done helps testers gather the different kinds of information they can use to provide the right kind of feedback.

More Information for Testers

In The Ongoing Revolution in Software Testing, Cem Kaner describes the kind of thinking that I am trying to get across. Testers who identify and agree with what Cem Kaner has said should have few problems adjusting to agile teams. This article is worth reading for anyone who is thinking about software testing.


Conventional Testers on Agile Projects – Values

Values Are Key

Good conventional software testers can potentially offer a lot to a team, provided their working attitude is aligned with the team’s. One of the most important aspects of agile development is the set of values that many agile methods encourage. A team focus rather than an adversarial relationship is important, and testers on agile teams tend to agree with the principles guiding the development process. A conventional tester should at least understand these values and follow them when they are on an agile project. Understanding the values goes a long way toward understanding the other activities, and why agile teams are motivated to do the things they do. What is important to an agile team? For one, working software. It isn’t the process that is important; it’s the product we deliver in the end. Agile methods are often pragmatic approaches to that end.

A good place to start to learn about values on agile projects is by looking at the values for Extreme Programming. These are the values that I personally identify the most with.

Independence

Many times I hear that testing teams should remain separate from development teams so that they can retain their independence. Even agilists have different opinions on this. This might be due to a misunderstanding of what “independence” can mean on a project. Testers must be independent thinkers, and sometimes need to stick to their guns to get important bugs fixed. But being an independent thinker who advocates for the customer does not require sitting in a physically separate testing department. Agile projects tend to favor integrated teams, and there are a lot of reasons why separate teams can cause problems: they can slow down development, discourage collaboration, encourage disparate team goals, and impede communication.

Testers who are integrated with a development team need not sacrifice their independent thinking just because they are sitting with and working closely with developers. The pros of integration can far outweigh the cons. If your project needs an independent audit, hire an auditing team to do just that; then you are guaranteed an independent, outside opinion. In other industries, an audit is generally done by an outside team – any team that formally audits itself wouldn’t be taken seriously. That doesn’t mean the team can’t do a good job of auditing itself, or that work to prepare for a formal audit isn’t worthwhile. What it means is that a formal audit by an outsider overcomes a conflict of interest. If your team needs independent auditing, prepare for it by testing yourselves, and hire an outsider to do the audit.

I personally would rather be influenced by the development team and collaborate with them to do more testing activities. I get far more testing work done the more I collaborate. If I become somewhat biased towards the product in the process, that is a trade I will gladly make for the knowledge and better testing that collaboration gives me.

Do What Needs to be Done

A talented professional who cares about the quality of the product they work on, and believes in the values of agile methods, should be able to add to any team they are a part of. This isn’t limited to those who do software development or software testing; it also includes technical writers, business analysts and project managers. If the team’s values are aligned, roles will emerge, and come and go, as needs arise and change. “That’s not my job!” should not be in an agile team member’s vocabulary.

On an agile project it is important not to stick slavishly to a job title, but to pitch in and do whatever it takes to get the job done. This is something that agilists value. If you can work with them, they can work with you, provided your values are aligned.

Understand the Motivations

The motivations behind the values of agile methods are important. Read Kent Beck’s Extreme Programming Explained for more on values. Read Ken Schwaber and Mike Beedle’s Agile Software Development with Scrum for insight into how an agile methodology came about. The first couple of chapters of the Scrum book really paint a picture of why agile methods can work.

My favourite line in the Schwaber/Beedle Scrum book is:

They inspected the systems development processes that I brought them. I have rarely provided a group with so much laughter. They were amazed and appalled that my industry, systems development, was trying to do its work using a completely inappropriate process control model.

Agile Software Development with Scrum, Prentice Hall, 2002, p. 24.

This book provides a lot of insight into what motivated people to try something new in software development, and the rationale behind an agile methodology.

The values behind agile methods flow from these early motivations and discoveries of pragmatic practitioners, and the accounts are well worth reading. When you understand where the knowledge of delivering working systems was drawn from, the values and activities really start to make sense.

