Unit Testing Principles

15 May 2016

Writing good unit tests is critical for any software you write. Over time I’ve accumulated a number of principles about unit testing. Here are some of them:

TDD

Write your tests first. It’s a hard habit to adopt if you haven’t been doing it, but it’s well worth it. Not only will you have test coverage to catch regressions, but more importantly you will drive cleaner API designs. When you write tests first you are by definition designing a clean API, because your tests are its first consumers. This tends to lead to designs with low coupling, because you’re writing the consumer code at the same time as the implementation.
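
As a rough sketch of what that looks like in RSpec (the Stack class here is hypothetical and doesn’t exist yet at the moment the test is written):

it "returns the most recently pushed item" do
  stack = Stack.new          # Stack doesn't exist yet; this test fails first by design
  stack.push(1)
  stack.push(2)
  expect(stack.pop).to eq(2) # the implementation is then written to make this pass
end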

How much is enough?

If you have doubts about whether your code will do what you intend, you haven’t written enough tests. There isn’t a hard rule about how much testing you should do; the purpose of testing is to build confidence. When you aren’t yet confident, keep going. When you feel confident, stop. Don’t test imaginary scenarios that don’t exist yet. When you fix a bug, write a test that demonstrates the bug first, so that when you fix it you can watch the test go green.
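
For example, a regression test for a hypothetical bug (the Parser class and the trailing-whitespace bug are made up for illustration) gets written first, fails, and then goes green once the fix lands:

# Written to reproduce the reported bug before touching the implementation.
it "parses input with trailing whitespace" do
  expect(Parser.new.parse("42  ")).to eq(42)
end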

Isolation

This is a contentious point. The tools at your disposal are fakes/stubs/mocks. Some believe you should isolate the unit under test completely, while others think it’s OK to let some collaborators in so that you don’t fool yourself into thinking your code works. At the very least you shouldn’t be hitting I/O in tests. Databases, filesystems, network calls, and the like should be stubbed out because they tend to be unreliable and slow. Tests that run quickly are crucial so that when you have thousands of them you still get rapid feedback. Some frameworks (like Rails) insist on having the database involved in all your tests; it’s possible to circumvent this, but it’s not conventional. Use your judgement: when you run into slowness or unreliability in unit tests, address it.
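
As a sketch of keeping I/O out of a unit test (the WeatherReport class and its HTTP collaborator are assumptions for illustration), you can stub out the collaborator that would otherwise hit the network:

it "summarizes the forecast without a real network call" do
  http = double("http", get: '{"high": 75, "low": 60}')   # stubbed network client
  report = WeatherReport.new(http)
  expect(report.summary).to include("75")
end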

Don’t worry about duplication so much

Keeping your code DRY is a good practice to have, but in tests it might not be as important. Your tests are there for three reasons:

  1. Prevent regressions
  2. Help drive designs
  3. Communicate to other developers how your code is consumed

That last point is super important. When other developers come to look at your code, they are going to look at your tests first to see how to consume your API. When peers maintain the code you wrote, they need to quickly understand how it works. If you extract a bunch of test helpers or de-duplicate setup code, it often becomes harder to get in and understand what’s happening without tracing through your helpers. So some duplication might be OK if it helps each test communicate clearly what it is doing and expecting.
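
As an illustration of that trade-off (the Order class and its pricing rules are made up here), a little repeated setup can keep each example self-contained and easy to read:

# Each example repeats its own setup instead of hiding it in a shared helper,
# so a reader sees exactly what the test depends on.
it "applies a discount to large orders" do
  order = Order.new(items: 20, unit_price: 5)
  expect(order.total).to eq(90)   # assumes a 10% bulk discount
end

it "charges full price for small orders" do
  order = Order.new(items: 2, unit_price: 5)
  expect(order.total).to eq(10)
end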

Don’t test libraries/frameworks

You should only be testing your code. If your unit has no logic and just leans on the framework to do some advertised behavior, then the tests you write are going to be of fairly low value. Sometimes you will write tests against a framework as a means to understand it, and that’s fine, but once you understand a framework you should expect that it has its own tests and treat it like a black box. It’s reasonable to keep tests of this type if you know or suspect that your dependency is unstable or untested, but if you know that, maybe it’s a bad dependency.
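
For instance, a test like the following (a hypothetical Rails-flavored example with a made-up User model) mostly re-verifies the framework’s advertised validation behavior and adds little value:

# Low value: this only exercises ActiveRecord's presence validation,
# which the framework already tests itself.
it "requires a name" do
  user = User.new(name: nil)
  expect(user).not_to be_valid
end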

Heavy mocking is POISON.

I can’t stress this point enough. It’s also a very contentious issue in the community. Some people fall into the “classicist” camp of testing and some fall into the “mockist” camp. Mockist testing styles verify the internals of code by ensuring that specific method calls took place. An example in RSpec:

it "blends" do
  foo = Foo.new(bar)
  foo.frobnicate
  expect(bar).to have_received(:wiggle)
end

A classicist would not test this. That’s not to say a classicist would necessarily use bar’s actual behavior; they might still stub it out, but they wouldn’t verify internal implementation details like which method calls took place.
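
For contrast, a classicist version of the same test might still stub the collaborator but assert on an observable outcome rather than on which calls happened. This is only a sketch; the stubbed return value and the result accessor are assumptions, not part of the original example:

it "blends" do
  bar = double("bar", wiggle: :wiggled)   # stubbed collaborator, no call verification
  foo = Foo.new(bar)
  foo.frobnicate
  expect(foo.result).to eq(:wiggled)      # assert on observable output instead
end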

I’m personally firmly in the classicist camp because I’ve been bitten by mocks too many times. Verifying the behavior of internal implementation means that refactoring the internals of your implementation is guaranteed to break your tests, which makes them much less valuable, since they can’t verify that your old and new code behave the same without themselves changing. Yes, you can update your mocks to match the new implementation, but that’s tight coupling and cascading change you shouldn’t have to accept just to refactor to a cleaner design.

There is a really good article written by Martin Fowler that I tend to send people when they want to learn more about this distinction.


Tagged: unit tests, tdd, mocking
