Today, automated tests[1] are fairly ubiquitous in software projects. That is, almost every project has a large number of automated tests, and they tend to get run by some continuous integration (CI) server.

If we look back 25 years to when I first started writing automated tests, only the outliers did this. It was very rare to find a project that had automated testing at all, and the ones that did had all rolled their own frameworks to do it. Although JUnit (Java) had been released in 1997 and SUnit (Smalltalk) had existed for almost a decade before that, very few people even knew about them and fewer still used them.

So the fact that they’re now ubiquitous would seem to be great news on the surface. Unfortunately, merely having them does not mean that the tests provide any meaningful level of coverage or that they get run often enough to provide useful feedback. It just means we’ve accepted that every project should have them and that we run them at some interval. That’s a good start but isn’t enough.

We have these tests in order to provide a safety net so that we can move quickly without fear of accidentally breaking something. To be an effective safety net, we need to be confident that they will catch mistakes, and we need to run them frequently enough to get timely feedback. If I’m notified in the moment that a line I just added broke some behaviour, I can fix it quickly. If I don’t find out for several hours, I’ve almost certainly moved on to something else and it will take time to get back to where I was. It’s even worse if I don’t find out for days or weeks.
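
To make that concrete, here is the kind of automated check I mean. This is only a minimal sketch, assuming JUnit 5; the PriceCalculator class and its discount rule are invented for the example, not taken from any real project.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        // Invented code under test: a 10% discount on orders of 100 or more.
        static class PriceCalculator {
            double totalFor(double amount) {
                return amount >= 100.0 ? amount * 0.9 : amount;
            }
        }

        @Test
        void appliesDiscountToLargeOrders() {
            // If a later change breaks the discount rule, this fails the next
            // time the suite runs and points at the behaviour that changed.
            assertEquals(90.0, new PriceCalculator().totalFor(100.0), 0.001);
        }

        @Test
        void leavesSmallOrdersUntouched() {
            assertEquals(50.0, new PriceCalculator().totalFor(50.0), 0.001);
        }
    }

A check like this is only as useful as the frequency with which it runs: executed on every change, it flags a mistake while the code is still fresh in my head; executed weekly, it flags it long after I’ve moved on.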

So how often are we running our tests? Developers who use Test-Driven Development (TDD) run at least a portion of the tests constantly, but as you’ll see, very few are following that practice. What about the rest? A study[2] published in 2019 says that we run the tests far less often than we might think.

The study ran for 2.5 years, involved 2,443 developers, and observed the following:

  • Over half of the studied users do not practice testing.
  • Even if the projects contain tests, developers rarely execute them in the IDE.
  • Test-Driven Development (TDD) is not a widely followed practice, and those who do use it tend to be more experienced developers.
    • Only 43 developers in the study (1.7%) followed a strict TDD process.
    • Another 136 developers (5.6%) were following a more lenient TDD process.
    • The remaining 92.7% weren’t using TDD at all.
  • Developers overestimate the time they devote to testing almost twofold.

They conclude: “These results counter common beliefs about developer testing and could help explain the observed bug-proneness of real-world software systems.”

Note that TDD is not a requirement for writing good tests. It is possible to write good tests after the fact, although it’s harder, and we can see that people not doing TDD aren’t really writing effective tests either. Perhaps we should do both. TDD has other benefits beyond the creation of tests.
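
For readers who haven’t seen the practice, the TDD loop is roughly: write a small failing test, write just enough code to make it pass, then refactor and repeat. A minimal sketch of one turn of that loop, again assuming JUnit 5 and using invented names:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class RomanNumeralTest {

        // Red: this test is written before RomanNumeral exists, so it starts out failing.
        @Test
        void convertsOneToI() {
            assertEquals("I", RomanNumeral.of(1));
        }

        // Green: the simplest code that makes the test pass.
        // Refactor: tidy up, then write the next failing test (2 -> "II", 4 -> "IV", ...).
        static class RomanNumeral {
            static String of(int value) {
                return "I"; // deliberately naive; the next test forces it to generalise
            }
        }
    }

Working this way means the tests get run every few minutes as a matter of course, which is why developers practising TDD end up running at least some of their tests constantly.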

The authors of the study acknowledge that participants were likely to have tested more than usual simply because they knew testing was the point of the study. That means these results are probably more optimistic than reality, and we likely test even less than this.

For the tests to be an effective safety net, they need to be good, and we also need to run them often enough to get feedback. We’ve only looked at how often developers are even trying to test, and already the news isn’t good. We’ll address the quality of the tests in another post.

  1. More precisely, what we’re talking about are automated checks, not tests, but the agile community has never adjusted to using the more precise language, so I’m going to keep calling them tests here.

  2. Beller, M., Gousios, G., Panichella, A., Proksch, S., Amann, S., & Zaidman, A. (2019). Developer Testing in The IDE: Patterns, Beliefs, And Behavior. IEEE Transactions on Software Engineering, 45(3), 261-284. Article 8116886. https://doi.org/10.1109/TSE.2017.2776152