Thursday, July 30, 2009

Learning from your unit testing mistakes

By now one hopes we don't have to convince developers of the need to write unit tests and to use a tool like Cobertura to enforce some level of code coverage. And yet there are other steps that really ought to be taken, steps that I don't see many development shops taking.

Even with fairly high levels of code coverage (e.g. 80% or greater), we still find defects in our code. The galling thing is that we find defects even in the code that Cobertura tells us we tested. How is that possible?

The first step in this process is painful and often time-consuming, but it can be very revealing. The next time a defect gets reported against 'tested' code, I suggest you stop and write a unit test that reproduces that exact defect. (Many shops already require a unit test for every defect, so this step isn't new for them, but plenty of other shops don't require it.)

The next step is the most interesting: reflect honestly on why you did not write that unit test already. Write down the reason in your engineering notebook. (You do keep an engineering notebook, right?) Over time you may detect patterns in the types of tests you do and don't tend to write.

A fairly common case is where mainMethod() ought to have a call to subMethod() but doesn't. The defect is simply that subMethod() is never called. Even if you have individual tests that execute every line of mainMethod and subMethod, you will not detect the missing call unless you go further. Presumably subMethod accomplishes something useful, something that your tests of mainMethod could detect, as the sketch below illustrates.
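For instance, suppose subMethod() records an entry in an audit log. The Processor and AuditLog classes below are purely hypothetical stand-ins for whatever mainMethod() and subMethod() really do, but they show the shape of a test that asserts on the observable result rather than just executing lines:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class MainMethodAuditTest {

    // Hypothetical collaborator: subMethod() leaves a visible trace here.
    static class AuditLog {
        private final List<String> entries = new ArrayList<String>();
        void record(String entry) { entries.add(entry); }
        int entryCount() { return entries.size(); }
    }

    // Hypothetical class under test.
    static class Processor {
        private final AuditLog log;
        Processor(AuditLog log) { this.log = log; }

        void mainMethod() {
            // ... other work ...
            subMethod();              // the call a defect might accidentally omit
        }

        void subMethod() {
            log.record("processed");  // the useful, observable side effect
        }
    }

    @Test
    public void mainMethodRecordsAuditEntry() {
        AuditLog log = new AuditLog();
        new Processor(log).mainMethod();

        // If mainMethod() forgot to call subMethod(), no entry would be recorded
        // and this assertion would fail, even with every line otherwise covered.
        assertEquals(1, log.entryCount());
    }
}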

The question really morphs from line coverage to results analysis. As sad as it may seem, many developers do not really understand that a unit test is useless until it checks its results. The following test increases your coverage numbers but doesn't actually test anything:

@Test
public void myTest() {
    // Executes mainMethod(), so coverage goes up, but nothing is asserted:
    // this test passes no matter what mainMethod() actually returns.
    int results = mainMethod();
}
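Here is a minimal sketch of what the fixed test looks like. The wrapping class, the stand-in mainMethod(), and the expected value 42 are all assumptions, since the snippet above doesn't show them; the point is simply that the test now checks its result:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MainMethodTest {

    // Stand-in for the production method under test.
    private int mainMethod() {
        return 42;
    }

    @Test
    public void myTest() {
        int result = mainMethod();
        // The expected value 42 is just a placeholder; the test now fails
        // whenever mainMethod() stops producing the right answer.
        assertEquals(42, result);
    }
}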

There are static code analysis tools, such as PMD, that will flag unit tests without asserts. Try adding one of these to your build… but don't be surprised if many of your unit tests turn out not to be tests at all.

A question that has bothered me about all this is what to do about stub methods. We sometimes have to implement an interface (a bad interface) that has dozens and dozens of methods, only a few of which are actually necessary. Your concrete class has to provide stub implementations of the unused methods, which leads to a code coverage decision. You can write unit tests for these no-op methods, but since they don't do anything you have nothing to test (which may be hard to explain to your static analyzer). Or you can skip testing them, which may reduce your coverage numbers enough to make your coverage enforcement scripts complain. I'd like to see an annotation of some sort, like @STUB, that told Cobertura to ignore the method or class when calculating its coverage metrics; a sketch of the idea follows.
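To make that concrete, here is roughly what I have in mind. The @Stub annotation and WideInterface below are purely hypothetical; as far as I know Cobertura has no such feature today, so nothing actually reads this marker:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker: a coverage tool that understood it could leave
// annotated methods or classes out of its metrics. Only a sketch of the idea.
@Retention(RetentionPolicy.CLASS)
@Target({ ElementType.METHOD, ElementType.TYPE })
@interface Stub {
}

// A deliberately tiny interface standing in for the "dozens of methods" case.
interface WideInterface {
    void usefulMethod();
    void unusedMethod();
}

class ConcreteImpl implements WideInterface {
    public void usefulMethod() {
        // real behavior, worth testing
    }

    @Stub
    public void unusedMethod() {
        // no-op required by the interface; nothing meaningful to test
    }
}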
