The barrier to auto testing

Jul 3, 2017

Most projects I worked on didn't have automated tests, even though everyone involved agreed they would bring significant benefits. But the cost of writing them was too high, so it never happened. Why are so many projects left in this sad state?

One company I worked for had notoriously changing specifications, and the project was already running late. There was a QA team, but they were not only catching bugs; they were also trying to reconcile the specifications with incoming user requests.

One day, an email went out asking all developers to write unit tests. The argument was that it would not only ease the QA team's work but also help fellow developers.

But the pressure to ship new features was so high that everyone just laughed, and nothing happened. Introducing automated testing is not the result of an email instructing everyone to do it. It is either enforced and embraced, or it is not going to happen.

Skipping automated tests is a case of technical debt. When the project is in its infancy, you don't need them. But as it matures, the presence of such tests can be the difference between success and failure. So what is the right time to introduce them?

For small projects and prototypes, testing can be done manually, and the impact of bugs is low. Therefore, writing tests yields little benefit, even though the cost is low.

For large projects, tests bring huge benefits. I've seen projects that seemingly stalled and made very little headway, even though sizeable teams worked on them. For those projects, having tests would have been a blessing. But the upfront cost was also huge.

Because of the ever-increasing cost of adding tests, if they are not enforced from day one, they are rarely added at all. This is the tragedy that haunts most mature projects: they did not emphasize testing early on, so they are stuck without it.

This is why people say auto testing should be the default, and every new project should embrace and enforce it. This way, when the project matures, the tests will be there to prevent collapse.

Why is adding tests so hard?

I've found a strange reason why adding tests later is hard. First, you need to write a lot of them at once to reach significant coverage. But more importantly, you need to know what you are testing.

As development progresses, many unplanned features are added. There is a point quite early on when nobody has a complete view of the system anymore; features and edge cases are forgotten. And in order to be reliable, testing should be as complete as possible.

Another project I worked on was such a convoluted mess that no one even tried to map all the features. This made refactoring a futile, extremely slow process. It also prevented us from testing effectively, even manually. That was one of those stalled projects: despite having an active developer, it could not show progress.

A possible solution

One thing you can try is to write test scenarios early on: like actual tests, but without any code. The hard part is keeping them up to date, just as you would with real, runnable tests.

This way, when you decide to increase test coverage, you already know the "what". If maintained rigorously, these scenarios also capture the unexpected features and edge cases introduced to the software later.
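One lightweight way to keep such scenarios next to the code is to record them as skipped test stubs: each stub documents a feature or edge case in its docstring, and the skip marker means the runner lists it without it ever failing. Here is a minimal sketch using Python's standard-library unittest; the scenario names and their contents are purely illustrative, not from any real project.

```python
import unittest

# Codeless test scenarios recorded as skipped stubs. Each docstring
# captures a Given/When/Then description; @unittest.skip keeps the
# stub visible to the test runner until someone automates it.

class LoginScenarios(unittest.TestCase):
    @unittest.skip("scenario only, not yet automated")
    def test_account_locks_after_three_failed_attempts(self):
        """Given a user with a valid account,
        when they enter a wrong password three times in a row,
        then the account is locked and a notification is sent."""

class CheckoutScenarios(unittest.TestCase):
    @unittest.skip("scenario only, not yet automated")
    def test_coupon_not_applied_to_sale_items(self):
        """Given a cart containing an item already on sale,
        when a coupon code is applied,
        then the sale item keeps its original sale price."""
```

Running the suite (e.g. with `python -m unittest`) then doubles as an inventory of known behavior: every skipped entry is a scenario waiting for real test code.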