I’ve had several conversations recently with people arguing that mandated code coverage numbers are a positive thing; for example, “all code must have 90% code coverage”.

My argument is that code coverage is a great negative metric: at low values it tells you something useful (this code is under-tested), while at high values it tells you nothing (the code may or may not be well tested). So mandating high numbers isn’t giving us the benefit we think it is, and in fact it often causes people to game the numbers.
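To make that gaming concrete, here’s a minimal sketch (the function and test names are hypothetical, not from my codebase): a test that executes every line of a function, satisfying a line-coverage mandate, while asserting nothing at all.

```python
def apply_discount(price, code):
    if code == "SAVE10":
        return price * 0.9
    return price

def test_apply_discount_for_coverage():
    # Both branches run, so the coverage tool reports 100%...
    apply_discount(100, "SAVE10")
    apply_discount(100, "OTHER")
    # ...but with no assertions, any change to apply_discount still passes.
```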

Fresh off that, I just made a change to one of my own source files that had 100% code coverage, and no tests broke.

No, this was not an attempt to prove anything. I was fully expecting one or more tests to break when I did this.

The fact that nothing broke is honestly shocking, as I thought this code was well covered. It does prove my point, though.
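I won’t share the file in question, but here’s a hypothetical illustration of how this can happen even with assertions present: the test covers every line and branch, yet pins down so little behaviour that a real change slips straight through.

```python
def shipping_cost(weight_kg):
    if weight_kg > 20:
        return 15.0   # changed from 10.0, and no test noticed
    return 5.0

def test_shipping_cost():
    # 100% line and branch coverage, but the heavy-parcel branch is only
    # checked to be "more expensive", so changing 10.0 to 15.0 still passes.
    assert shipping_cost(25) > shipping_cost(5)
    assert shipping_cost(5) == 5.0
```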

Code coverage is great feedback to developers; it’s an indicator of where they should put their attention. It should never be a target.

Update: And to the surprise of nobody who believes in good tests, while adding more tests around this particular code to cover the edge cases that were missing, I’ve just found a bug.