Thursday, July 30, 2009

Sharpening Your Sword

We’ve recently reinstituted our technical book club, even though we’re probably busier than ever before. I sent an email asking our technical staff whether they thought there would ever be a time in the future when we would be less busy. If not, were they willing to permanently suspend their own technical growth? Interestingly enough, that email generated a lot of interest in our book club!

Choosing which book to study is an exercise in balancing your needs with the needs of the company and the other developers. Having recently attended JavaOne, I can think of lots of interesting topics: alternate JVM languages (Groovy, Scala, JRuby), frameworks (Spring Roo, Spring DM, OSGi, Jigsaw), GUI languages (JavaFX, Flex), tools (Maven, Eclipse goodies), and general topics (“Java Concurrency in Practice”, “Pragmatic Thinking and Learning”).

To some degree you need to build consensus around the book choice, but remember that your choice doesn’t have to be perfect. You may also find, as I did, that most people are busy enough to defer the choice to you.

Having selected the book, we’re taking a slightly different approach to studying it than we have previously. In the past we all read the book chapters offline prior to the meeting and then discussed each chapter during the meeting itself. When questions arose we’d sometimes open up a laptop and try something, but it was largely a text-centric discussion.

Since our current book covers a language that’s new to most attendees, we’re going to be much more laptop- and code-centric. Our first meeting, in fact, will be devoted to getting the language and its associated tools installed on everyone’s laptops so that we can all type along with the examples in the book. The goal is to make this more than an academic exercise. We’ve all attended trainings that were interesting and yet covered technologies that we never touched again. With the code-centric approach we’re aiming to fairly quickly add this new language to the toolbox of our developers and testers.

Some of the practical steps in that direction include:

Installing the IDE plug-in for the new language
Modifying our main system build to compile classes written in the new language
Adding a HelloWorld class in the new language to the source tree to ensure that the modified build scripts actually work.

And, probably most useful of all, picking an existing but non-critical bug or feature request in our product and fixing it by adding a new class written in the language. I think this set of steps will make the new language real for people.
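
Since all of the JVM languages we’re considering compile down to ordinary bytecode, one cheap way to prove the modified build really works is a plain Java test that calls the new-language class. A minimal sketch, assuming a HelloWorld class with a greet() method (both names are hypothetical stand-ins for whatever the book uses):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Sanity check for the mixed-language build. If this compiles and
// passes, the new-language class was compiled and landed on the
// classpath as ordinary bytecode. HelloWorld and greet() are
// hypothetical names.
public class MixedLanguageBuildTest {

    @Test
    public void newLanguageClassIsCallableFromJava() {
        assertEquals("Hello, world!", new HelloWorld().greet());
    }
}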

As a side comment, I’ll note that I like asking interview candidates what books they’ve recently read. It can be fairly revealing about the ongoing educational habits of the candidate. It can also reveal how they react to an unexpected question, and one that they might have answered “wrong”.

Learning from your unit testing mistakes

By now one hopes that we don't have to convince developers of the need to write unit tests, and to use a tool like Cobertura to enforce some level of code coverage. And yet, there are other steps that really ought to be taken, steps that I don't see many development shops taking.

Even with fairly high levels of code coverage (e.g. 80% or greater), we still find defects in our code. The galling thing is that we find defects even in the code that Cobertura tells us we tested. How is that possible?

The first step in this process is painful and often time-consuming, but it can be very revealing. The next time a defect is reported against 'tested' code, I suggest you stop and write a unit test that finds that exact defect. (Many shops already require a unit test for every defect, so this step isn't new for them, but lots of other shops don't require it.)
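
For example (the defect, class, and ticket number below are all invented), the shape is typical: write the test so it reproduces the exact failure from the report, watch it fail, fix the code, and keep the test forever as a regression guard.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class Defect4711RegressionTest {

    // Stand-in for the production method under test; in real life it
    // lives in the main source tree. The reported bug: runs of internal
    // whitespace were not collapsed. All names here are made up.
    static String collapseWhitespace(String s) {
        return s.replaceAll("\\s+", " ");
    }

    @Test
    public void collapsesRunsOfInternalWhitespace() {
        // The exact input from the defect report.
        assertEquals("a b", collapseWhitespace("a   b"));
    }
}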

The next step is the most interesting: reflect honestly on why you did not write that unit test already. Write down the reason in your engineering notebook (you do keep an engineering notebook, right?). Over time you may detect patterns in the types of tests you do and don't tend to write.

A fairly common case is where mainMethod() ought to have a call to subMethod() but doesn't; the defect is simply that subMethod() is never called. Even if you have individual tests that execute every line of mainMethod() and subMethod(), you will not detect the missing call unless you go further. Presumably subMethod() accomplishes something useful, something that your tests of mainMethod() could detect.
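
A minimal sketch of the trap, with made-up names throughout: the two coverage-only tests execute every line of both methods, yet only the result-checking test notices the missing call.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical example: totalPrice() ought to call applyDiscount(),
    // but the call was forgotten. That omission is the defect.
    static class PriceCalculator {
        int totalPrice(int base) {
            return base; // BUG: missing call to applyDiscount(base)
        }

        int applyDiscount(int price) {
            return price - 10;
        }
    }

    private final PriceCalculator calc = new PriceCalculator();

    // These two tests yield full line coverage of both methods...
    @Test
    public void applyDiscountWorks() {
        assertEquals(90, calc.applyDiscount(100));
    }

    @Test
    public void totalPriceRuns() {
        calc.totalPrice(100); // executes the line, checks nothing
    }

    // ...but only a test of the end result exposes the missing call.
    @Test
    public void totalPriceIncludesTheDiscount() {
        assertEquals(90, calc.totalPrice(100)); // fails: returns 100
    }
}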

The question really morphs from line coverage to results analysis. As sad as it may seem, many developers do not really understand that a unit test is useless until it tests the results. The following test increases your coverage numbers but doesn't actually test anything:

@Test
public void myTest() {
    // Executes mainMethod(), so coverage goes up, but with no
    // assertion nothing about the result is ever checked.
    int results = mainMethod();
}
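
The fix is a single assertion against whatever mainMethod() is supposed to return (the expected value of 42 here is invented for illustration, and assertEquals assumes a static import of org.junit.Assert.assertEquals):

@Test
public void myTest() {
    int results = mainMethod();
    // Now the test actually fails if mainMethod() returns the wrong value.
    assertEquals(42, results);
}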

There are static code analysis tools such as PMD that will flag unit tests without asserts. Try adding one of these to your build…but don’t be surprised if many of your unit tests turn out not to be tests at all.

A question that has bothered me about all this is what to do about stub methods. We sometimes have to implement an interface (a bad interface) that has dozens and dozens of methods, only a few of which are actually necessary. Your concrete class has to provide stub implementations of the unused methods, which leads to a code-coverage decision. You can write unit tests for these no-op methods, but since they don’t do anything you have nothing to test (which may be hard to explain to your static analyzer). Or you can skip testing these methods, which may reduce your coverage numbers enough to make your coverage-enforcement scripts complain. I’d like to see an annotation of some sort, like @STUB, that told Cobertura to ignore the method or class when calculating its coverage metrics.
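
Something along these lines, purely as a sketch of the wished-for feature; Cobertura has no such support today, and every name below is hypothetical:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The imagined marker. CLASS retention would keep it visible to
// bytecode-level tools like Cobertura without any runtime cost.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface Stub {
}

// A wide interface of which only one method is actually needed.
interface WideInterface {
    void usefulMethod();
    void obscureMethodNobodyCalls();
}

class MinimalImplementation implements WideInterface {

    public void usefulMethod() {
        // real logic lives here and gets tested normally
    }

    @Stub // the coverage tool would skip this no-op when computing metrics
    public void obscureMethodNobodyCalls() {
        // deliberately empty
    }
}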