Who Do We Serve?

Bob and Doug are programmers on an Extreme Programming team. Their customer representative has written a story card for a new feature the company would like to see developed in the system to help with calculations undertaken by the finance group. Currently, the finance group has spreadsheets and a manual process that is time-consuming and somewhat error-prone. Bob and Doug pair up to work on the story. They feel confident – they are both experienced with Test-Driven Development, and they constantly use design patterns in their code.

They sit at a machine, check out the latest version of the code from source control, and start to program. They begin by writing a test and forming the code test-first to meet the feature request. Over time, they accumulate a good number of test cases and corresponding code. Running their automated unit test harness gives them feedback on whether the tests pass or fail. Sometimes the tests fail sporadically for no apparent reason, but over time the failures appear less frequently.
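Their test-first rhythm might have looked something like the sketch below. The class, the method, and the fee rule are hypothetical stand-ins invented for illustration – the story doesn't name any of them – but the cycle is the one Bob and Doug followed: write a failing test, then write just enough code to make the bar go green.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical first test, written before FinanceCalculator exists.
    public class FinanceCalculatorTest {

        @Test
        public void appliesLateFeeToOverdueBalance() {
            FinanceCalculator calc = new FinanceCalculator();
            // Assumed rule, for illustration only: a 1.5% late fee.
            assertEquals(101.50, calc.balanceWithLateFee(100.00), 0.001);
        }
    }

    // Just enough production code to make the test pass.
    class FinanceCalculator {
        double balanceWithLateFee(double balance) {
            return balance * 1.015;
        }
    }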

Once they have their new suite of automated test cases passing, they begin refactoring. They decide to refactor to a design pattern that seems suitable. It takes some rework, but the code seems to meet the pattern definition, and all the automated unit tests are passing. They check the code in and move on to a new story.
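The story doesn't say which pattern they chose. Purely as an illustration, the sketch below shows the kind of refactoring described: the hypothetical calculation above reworked into a Strategy, faithful to the pattern definition, with the existing unit tests still passing – and still saying nothing about whether the behaviour is what the finance group actually needs.

    // Illustrative Strategy refactoring of the hypothetical calculator.
    interface FeePolicy {
        double apply(double balance);
    }

    class FlatPercentageLateFee implements FeePolicy {
        private final double rate;

        FlatPercentageLateFee(double rate) {
            this.rate = rate;
        }

        public double apply(double balance) {
            return balance * (1 + rate);
        }
    }

    class FinanceCalculator {
        private final FeePolicy feePolicy = new FlatPercentageLateFee(0.015);

        double balanceWithLateFee(double balance) {
            return feePolicy.apply(balance);
        }
    }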

The next day, they demonstrate the feature to the customer representative, who isn’t happy with it. It feels awkward to them. Bob and Doug explain which pattern they were using, talk about facades, persistence layers, inversion of control, and design patterns in general, and demonstrate that the automated unit tests all pass by showing the green bar in their IDE. The customer representative eventually, reluctantly, agrees with them that the feature has been implemented correctly.

Months later, the finance department complains that the feature is wrong almost half of the time, and is causing them to make some embarrassing mistakes. They aren’t pleased. Wasn’t XP supposed to solve these kinds of problems?

Looking into the problem reveals that the implementation was incomplete – several important scenarios had been missed. A qualitative review of the automated tests shows that they were quite simple, worked only at the code level, and that no testing had been done in a production-like environment. Furthermore, no thought had been put into the design for the future, when the system would be under more load from growing data and usage. “That isn’t the simplest thing that could possibly work!” the programmers retorted. However, the team knew that data load would greatly increase over the coming months. If they weren’t going to deal with that inevitability in the design right then, they needed to address it soon afterwards. They never did, and blamed the non-technical customer representative for not writing a story to address it.
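To make “quite simple” concrete, here is a hypothetical version of the gap such a review might find. The first test is the kind that was written: a tidy happy-path check. The others are the kind that were never written – the scenarios are assumed for illustration, and the last one fails against the naive implementation sketched earlier, exactly the sort of problem that instead surfaced in production.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class FinanceCalculatorTest {

        // The kind of test the review found: a single happy-path check.
        @Test
        public void appliesLateFeeToOverdueBalance() {
            assertEquals(101.50, new FinanceCalculator().balanceWithLateFee(100.00), 0.001);
        }

        // The kind of tests that were missing (scenarios assumed for illustration).
        @Test
        public void chargesNoFeeOnZeroBalance() {
            assertEquals(0.00, new FinanceCalculator().balanceWithLateFee(0.00), 0.001);
        }

        @Test
        public void doesNotInflateACreditBalance() {
            // A credit (negative balance) should not accrue a late fee.
            assertEquals(-25.00, new FinanceCalculator().balanceWithLateFee(-25.00), 0.001);
        }
    }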

Finally, when the customer representative tried to express that something wasn’t quite right with the design, they were outnumbered and shot down by technical explanations. One representative against six programmers armed with jargon and technical bullying didn’t stand much of a chance. After the problem was discovered in production, Bob and Doug admitted that things had felt a little awkward even to them when they first implemented the story, but since they were following the practices by the book, they ignored the signs to the contrary. The automated tests and the textbook refactoring to patterns had given them a false sense of security. In their eyes, the automated tests trumped the manual testing done by a human – an actual user.

What went wrong? Bob and Doug measured their project’s success by their adoption of their preferred development practices. They worried more about satisfying their desire to use design patterns and to follow Test-Driven Development and XP practices than they did about satisfying the customer. Oh sure, they talked about providing business value, worked with the customer representative on the team, and wrote a lot of automated unit tests. But the test strategy was sorely lacking, and their blind faith in test automation let them down.

Due to this incident, the development team lost credibility with the business and faced a difficult uphill battle to regain their trust. In fact, they had been so effective at marketing these development methods as the answer to all of the company’s technical problems that “Agile” was blamed, and to many in the business, it became a dirty word.

Sadly, this kind of scenario is far too common. On software development teams, we seem especially susceptible to the lure of silver-bullet solutions. Sometimes it seems we try to force every programming problem into a pattern, whether it needs one or not. We suggest tools to take care of testing, or deployments, or any of the tasks we don’t understand that well or don’t enjoy as much as programming. We expect that simply following a process – particularly a good one, like Extreme Programming – will see us through.

Through all of this we can lose sight of what is really important – satisfying and impressing the customer. In the end, the process doesn’t matter as much as getting that result. The processes and tools we use are our servants, not our masters. If we forget that, and stop thinking about what problem we’re trying to solve and who we’re trying to solve it for, the best tools and processes at our disposal won’t help us.