Automated story-based acceptance tests lead to unmaintainable systems

Projects where the team directly translates story-level acceptance criteria into new automated test cases set themselves up for a maintenance nightmare. It seems like an old approach (I’m thinking of WinRunner-like record-playback scripts), although at least those teams probably felt the pain faster. Unfortunately, not many teams seem to know what to do about it. It sounds exactly like the scenarios my colleagues Phillip and Sarah are experiencing now or have experienced recently.

Diagnosing this style of testing is easy: if I see a story number or reference in the name of a test class or test case, chances are the team has automated story-based acceptance tests.
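To make the smell concrete, here’s a hypothetical example (the story number, class name, and domain are all invented for illustration, and I’m assuming JUnit 4). The test is named after the backlog item, not the feature it exercises:

```java
import org.junit.Test;

// The story reference leaks into the test name, so the suite ends up
// mirroring the backlog instead of describing the system's features.
public class Story1024AcceptanceTest {
    @Test
    public void story1024_customerCanApplyDiscountCode() {
        // drives the application and asserts the story's acceptance criteria ...
    }
}
```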

Unfortunately the downfall of this approach has more to do with the nature of stories than with the nature of acceptance tests (something I’ll get to later). As I like to say, stories represent the system at a certain snapshot in time. The same property that lets us deliver incremental value in small chunks simply doesn’t scale if you don’t consolidate the system’s new behaviour with its existing behaviour. For developers, the best analogy is having two test classes for a single class: one reflecting its behaviours and responsibilities when it was first written, and one representing its behaviours and responsibilities right now. You wouldn’t do this at the class level, so why do it at the system level?
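Here’s a sketch of that class-level analogy, with invented names and JUnit 4 assumed. Notice that the older snapshot now contradicts the behaviour a later story introduced, which is exactly why nobody keeps tests organised this way at the class level:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test: a later story added 10% tax to totals.
class PriceCalculator {
    int total(int itemA, int itemB) {
        return (int) Math.round((itemA + itemB) * 1.1);
    }
}

// A snapshot of the class's behaviour when it was first written.
// This test now fails against the current code:
class PriceCalculatorAsOfIterationOneTest {
    @Test
    public void totalIsTheSumOfItemPrices() {
        assertEquals(30, new PriceCalculator().total(10, 20)); // fails: now 33
    }
}

// A snapshot of the class's behaviour right now:
class PriceCalculatorAsOfIterationFiveTest {
    @Test
    public void totalIncludesTenPercentTax() {
        assertEquals(33, new PriceCalculator().total(10, 20));
    }
}
```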

Avoid temporal coupling in the design of your tests. The same poor design principle of grouping chunks of code together simply because someone asked for them at the same time also applies to how you manage your test cases. For automated story-based acceptance tests, that means not spreading similar tests around the system just because they were needed at different times.

What is a better way? A suggested antidote…

On my current project, I have been lucky enough to apply these concepts early to our acceptance test suites. Our standard is to group tests not by time, but by similar sets of functionality. When picking up new stories, we see if any existing tests need to change before adding new ones. The groupings in our system are based on system-level features, allowing us to reflect the current state of the system as succinctly as possible.
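As a rough sketch of what this looks like in practice (the checkout domain, names, and discount rules below are all invented for illustration, and I’m again assuming JUnit 4):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical system under test, standing in for a real end-to-end driver.
class Checkout {
    private final double startingTotal;
    private double bestDiscount;

    Checkout(double startingTotal) { this.startingTotal = startingTotal; }

    // A later story changed the rule so the largest discount wins and codes
    // no longer stack; we changed this behaviour and the existing test together.
    void applyCode(double percent) {
        bestDiscount = Math.max(bestDiscount, percent);
    }

    double total() { return startingTotal * (1 - bestDiscount / 100); }
}

// One test class per feature, always describing today's behaviour.
public class CheckoutDiscountsTest {
    @Test
    public void appliesAPercentageDiscountCode() {
        Checkout checkout = new Checkout(100.00);
        checkout.applyCode(10);
        assertEquals(90.00, checkout.total(), 0.01);
    }

    @Test
    public void largestDiscountWinsWhenSeveralCodesAreApplied() {
        Checkout checkout = new Checkout(100.00);
        checkout.applyCode(10);
        checkout.applyCode(25);
        assertEquals(75.00, checkout.total(), 0.01);
    }
}
```

When a later story changes the discount rules, we revise the expectations inside CheckoutDiscountsTest rather than bolting a new story-numbered test class on beside it, so the suite always reads as a description of the system as it is today.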
