Tuesday, September 6, 2011

YAAACCC

(Yet Another Argument Against Complete Code Coverage)

It's been said many times, but still bears repeating, that 100% code coverage in unit tests is not a good goal. If you don't already believe that, I probably won't convince you. I'm not going to rehash all the reasons. Look around; you can find lots of eloquent arguments already in print.

But I will give you one that is perhaps less obvious and less frequently cited than some, along with a concrete example: code coverage software is just that, software. It is no more likely to be completely bug-free than any other software.

Case in point: Devel::Cover is a nice package. I really like the color-coded HTML output that annotates the code and shows various coverage statistics, with drill-down. But at work, I frequently encountered low Condition coverage.

I was writing a lot of code that looked something like this:

sub twiddle {
  my ($self, $c, $user, $device, $knob) = @_;
  # Decode each path parameter, bailing out with a 400 if decoding fails.
  $user = decode_param($user) || return $self->error(400);
  $device = decode_param($device) || return $self->error(400);
  $knob = decode_param($knob) || return $self->error(400);
  # Refuse requests that aren't authenticated and authorized.
  return $self->error(401) if !authenticated($user);
  return $self->error(401) if !authorized($user, $device);
  # Hand the real work off to the model, with a 500 if it can't be found.
  my $model = $c->model('Twiddle') || return $self->error(500);
  return $model->twiddle($device, $knob);
}
Under Condition Coverage, for a line like $user = decode_param($user) || return $self->error(400); I would get a truth table that looked like this:

[image: Devel::Cover condition-coverage truth table for the || expression, reporting 67% coverage]

Even though I had run tests for both success and failure of decode_param($user), I got only 67% condition coverage, evidently because Devel::Cover didn't understand that the second clause of the || could never be evaluated for truth or falsity: the return takes execution out of the current scope, so that row of the truth table is impossible to cover.
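
For what it's worth, the tests that exercised both paths looked roughly like the sketch below. It assumes a Catalyst application named MyApp whose twiddle action is reachable at /twiddle/<user>/<device>/<knob>; the app name, URL layout, and parameter values are stand-ins for illustration, not the real code.

use strict;
use warnings;
use Test::More;
use Catalyst::Test 'MyApp';    # hypothetical app name

# Failure path: a value the (hypothetical) decode_param() rejects,
# so the "|| return $self->error(400)" clause fires.
my $bad = request('/twiddle/NOT-A-VALID-USER/dev1/volume');
is( $bad->code, 400, 'undecodable user parameter is rejected' );

# Success path: every parameter decodes, so the right-hand side of the
# || is never reached at all.
my $good = request('/twiddle/alice/dev1/volume');
is( $good->code, 200, 'well-formed request twiddles the knob' );

done_testing;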

While sometimes examining coverage output can help improve code, in a case like this, modifying the code to make the coverage tool happy would just be stupid. And, as it turns out, short-lived.
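
To be concrete, making the tool happy would have meant unrolling each || return into an explicit conditional, along the lines of the sketch below (same behavior, more noise), which would presumably replace the condition truth tables with simple branch coverage:

sub twiddle {
  my ($self, $c, $user, $device, $knob) = @_;
  # Each "|| return" unrolled into a separate statement, purely to
  # avoid giving the coverage tool a || condition to tabulate.
  $user = decode_param($user);
  return $self->error(400) unless $user;
  $device = decode_param($device);
  return $self->error(400) unless $device;
  $knob = decode_param($knob);
  return $self->error(400) unless $knob;
  return $self->error(401) if !authenticated($user);
  return $self->error(401) if !authorized($user, $device);
  my $model = $c->model('Twiddle');
  return $self->error(500) unless $model;
  return $model->twiddle($device, $knob);
}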

I was writing up this example to present at the Catalyst Developer's Meetup, creating the sample code above and the tests and coverage to use in my presentation, when I discovered that the behavior of Devel::Cover had changed. The same code and tests in my home environment give this result:

[image: the same condition report generated with a newer Devel::Cover, no longer flagging the || as only partially covered]

Turns out that the behavior I described above was fixed between Devel::Cover Version 0.65 and Version 0.73. This doesn't disprove my point. Rather, it reinforces it. Don't blindly assume that your coverage tool is perfect, or code to its idiosyncrasies.

Code coverage should not be used as a gating factor, or to satisfy bean-counters. It is a useful tool for the developer to ensure that what should be tested is tested, and nothing more.