
Automated Acceptance Tests: What are they good for?

A long time ago, I wrote about the differences between the types of tests I see, yet a lot of people still don’t appreciate where acceptance tests fit in. I won’t be the first to admit that, at first glance, automated acceptance tests seem to have lots of problems. Teams constantly complain about the build time, the maintenance burden they bring, and the difficulty of writing them in the first place. Developers specifically complain about duplicating their effort at different levels, writing the same assertions more than once (at a unit or integration level), and not getting much value from them.

I wish everyone would realise that tests are an investment, a form of insurance against risk (the system not working), and that most people don’t even know what level of risk they are willing to take. That’s why I don’t believe in doing test-driven development on experimental code (i.e. a spike). I’ve been fortunate enough, or perhaps learned some lessons from, seeing both extremes.

Maintaining a system only with acceptance tests (end to end)
I worked on one project where the architect banned unit tests, and we were only allowed integration and acceptance tests. He believed (rightly so) that acceptance tests let us change the design of the code without breaking functionality. His (implicit) choice of insurance was a long term one – ensure the system keeps working over time, even through major architectural redesign. During my time on this project, we even redesigned some major functionality to make the system more understandable and maintainable. I doubt that without the acceptance tests, we would have had the confidence to move so quickly. The downside to this style of testing is that the code-test feedback loop was extremely slow. It was difficult to get any confidence that a small change was going to work. It felt so frustrating to move at such a slow pace without any faster levels of feedback.

Scenario-driven acceptance tests (as opposed to the less maintainable, story-based acceptance tests) also provide better communication for end users of the system. I’ve often used them as a tool for aiding communication with end users or customer stakeholders to get a better understanding of what they think the system should be doing. It’s rare that you achieve the same with unit or integration tests, because they tell you more about how a particular aspect is implemented and lack the system context that acceptance tests provide.
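As a rough illustration of what I mean by the scenario style, here is a minimal sketch using Python’s standard unittest module. The banking domain, the Account class and all of the names are hypothetical and only there to show the shape; in a real system the test would drive the application through its outermost interface (UI or API) rather than in-process objects.

```python
# A scenario-style acceptance test: one business scenario, phrased in
# given/when/then steps a stakeholder could read and challenge.
# The Account/transfer domain below is invented so the example runs standalone.
import unittest


class Account:
    """Toy domain code so the example is self-contained."""
    def __init__(self, balance=0):
        self.balance = balance

    def transfer_to(self, other, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        other.balance += amount


class TransferBetweenAccounts(unittest.TestCase):
    def test_customer_moves_money_between_their_own_accounts(self):
        # Given a customer with 100 in savings and nothing in checking
        savings, checking = Account(balance=100), Account(balance=0)

        # When they transfer 40 from savings to checking
        savings.transfer_to(checking, 40)

        # Then both balances reflect the transfer
        self.assertEqual(savings.balance, 60)
        self.assertEqual(checking.balance, 40)


if __name__ == "__main__":
    unittest.main()
```

The value is less in the assertions themselves and more in the fact that the scenario names and steps describe behaviour in the users’ language, which is what makes them useful as a conversation tool.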

Maintaining a system only with unit tests
On another project, I saw heavy use of mocks and unit tests. All the developers moved really fast and enjoyed refactoring their code, yet on this particular project I saw more and more issues where basic problems, like the application failing to start, arose because all those tiny, well-refactored objects just didn’t play well together. Some integration tests caught some of these, but I felt this project could have benefited from at least a small set of acceptance tests to stop the tester from repeatedly deploying a broken application in spite of a passing unit test suite.
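A minimal sketch of how that can happen, assuming Python and unittest.mock; the classes and the method-name mismatch are invented purely to demonstrate the failure mode, not taken from that project. The mocked unit test passes, while one coarse-grained test with real collaborators immediately exposes the wiring problem.

```python
# How mock-heavy unit tests can pass while the wired-up application fails.
# All class and method names here are hypothetical.
import unittest
from unittest import mock


class UserRepository:
    def get(self, user_id):            # the real method is called `get`...
        return {"id": user_id, "name": "example"}


class GreetingService:
    def __init__(self, repository):
        self.repository = repository

    def greet(self, user_id):
        user = self.repository.fetch(user_id)   # ...but the service calls `fetch`
        return f"Hello, {user['name']}"


class GreetingServiceUnitTest(unittest.TestCase):
    def test_greets_user_by_name(self):
        # The mock happily answers `fetch`, so this test passes even though
        # the real UserRepository has no such method.
        repository = mock.Mock()
        repository.fetch.return_value = {"id": 1, "name": "Pat"}
        self.assertEqual(GreetingService(repository).greet(1), "Hello, Pat")


class ApplicationSmokeTest(unittest.TestCase):
    def test_real_objects_wired_together(self):
        # With real collaborators the mismatch surfaces straight away.
        # In a real suite this would simply be a failing acceptance test;
        # here we assert the error so the example runs as a demonstration.
        service = GreetingService(UserRepository())
        with self.assertRaises(AttributeError):
            service.greet(1)


if __name__ == "__main__":
    unittest.main()
```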

What is your risk profile when it comes to testing?
I think every developer on a team must understand that different types of test give us different levels of feedback (see the testing side of the V-Model), and that each has a different cost, determined by the constraints of technology and tools. You’re completely wrong if you declare all automated acceptance tests bad, or all unit tests awful. Instead, you want to choose the balance of tests (the insurance) that matches the system’s constraints for its entire lifetime. For some projects, it may make more sense to invest more time in acceptance tests because repeated mistakes would be significantly costly. For others, the cost of manual testing mixed with the right set of unit and integration tests may make more sense. Appreciate the different levels of feedback tests bring, and understand the different levels of confidence they give over the lifetime of the system, not just the time you spend on the project.
