Testing - It's about confidence

01 Sep 2017

Let’s talk about testing. It’s fairly well accepted that you should be writing tests, that more tests are better than fewer, and that writing tests first can be very helpful. But people new to the field often aren’t really sure why they’re writing tests, how to write good tests, or why certain practices are desirable while others aren’t. Let’s dive in.

Tests are for humans and tell a story

Tests exist so that humans can gain confidence in the code they have written. Anything you can do to your tests to give the team more confidence is a win.

Being boring, obvious, simple, and clear is a great way to gain confidence. Make tests easy to read, as though they were telling a story about your feature (because that is what they are doing).

Cleanly separate test setup from test implementation

To see how we can tell a story (or not tell a story), take a look at this code:

describe('save', function() {

  it('storage', function() {
    webStorage.save.mockReturnValue({ alpha: 10 });
    const persistence = new Persistence();
    persistence.save('books', { alpha: 10 });
    expect(webStorage.save).toHaveBeenCalledWith({
      type: 'books',
      record: { alpha: 10 }
    });
  });
});


It isn’t clear without reading carefully through the code what this is testing. It’s tough for us humans to see which part of this is test setup vs. the test itself. When this test fails we will need to rely on a stack trace to identify which expectation failed (what thing being tested has failed?).

We’ll see how to clean this test up shortly.

Given, When, Then

A good test should be quite obvious about:

  • What scenario is being tested
  • The thing that is under test
  • The expected outcome

Martin Fowler calls this “Given, When, Then”.

Others call it “Arrange, Act, Assert”:

Looking back at our previous test, this means we need to do the following:

  • We need a ‘describe’ for the scenario that we are testing.
  • We need that ‘describe’ to be married to a ‘beforeEach’ that translates the plain-English scenario into code that sets it up.
  • We need an ‘it’ block that describes our expectation in plain English.
  • The ‘it’ block then needs code that asserts that expectation, and that expectation alone.

This gives us a clean separation of:

  • “this is my scenario” (describe) (Given) (Arrange)
  • “this sets it up” (beforeEach) (When) (Act)
  • “this is my expectation” (it) (Then) (Assert)

Let’s re-create our test under those guidelines:

describe('Persistence', function() {

  let persistence;

  beforeEach(function() {
    persistence = new Persistence();
  });

  describe('isEmpty()', function() {

    describe('when no records have been saved', function() {

      it('returns true', function() {
        expect(persistence.isEmpty()).toBe(true);
      });
    });

    describe('when we save a record', function() {

      beforeEach(function() {
        webStorage.save.mockReturnValue({ alpha: 10 });

        persistence.save('books', { alpha: 10 });
      });

      it('returns false', function() {
        expect(persistence.isEmpty()).toBe(false);
      });
    });
  });

  describe('save()', function() {

    describe('given a book to save', function() {

      let toSave;

      beforeEach(function() {
        toSave = { alpha: 10 };

        persistence.save('books', toSave);
      });

      it('stores the book correctly in persistence', function() {
        expect(webStorage.save).toHaveBeenCalledWith({
          type: 'books',
          record: toSave
        });
      });
    });
  });
});
We can clearly see our “Given, When, Then” showing up here.

  • Given a persistence
  • When I don’t save any records
  • It is empty


  • Given a persistence
  • When I store a record
  • It is not empty


  • Given a persistence and a book
  • When I call save() with the book record
  • It stores the book correctly in persistence

It’s clear from the outer descriptions what we’re testing. In each case we know what scenario we set up by looking at the describe/beforeEach pairing (plain English + code to make it happen). And it is very easy to see what our expectation is, because it isn’t mixed in with setup code. And where the “expect” might not be clear to everyone, we have plain-English documentation of what the expectation is checking.

Additional Benefits

Beyond making tests read clearer to humans (arguably the most important part), how does this structure help us? Adding another test to a scenario that has already been arranged is just a matter of adding another it/expect. None of the other tests in the scenario must change, and we can be confident that the existing scenario and expectations still prove the code works because we didn’t need to change them (changing them would decrease our confidence).
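To make that concrete, here is a minimal runnable sketch; the Persistence class and the hand-rolled webStorage fake are hypothetical stand-ins (the post’s real versions use jest mocks). Once the scenario is arranged, each new expectation is a single extra check:

```javascript
// Hand-rolled fake standing in for the automocked webStorage (hypothetical API).
const webStorage = {
  calls: [],
  save(payload) { this.calls.push(payload); return payload.record; }
};

// A minimal hypothetical Persistence, just enough to exercise the scenario.
class Persistence {
  constructor() { this.count = 0; }
  isEmpty() { return this.count === 0; }
  save(type, record) {
    this.count += 1;
    return webStorage.save({ type, record });
  }
}

// Given a persistence, when I store a record (the already-arranged scenario):
const persistence = new Persistence();
persistence.save('books', { alpha: 10 });

// The existing expectation still holds untouched:
const notEmpty = !persistence.isEmpty();

// Adding another expectation to the same scenario is one more check;
// nothing in the arrangement above had to change:
const storedCorrectly =
  webStorage.calls.length === 1 && webStorage.calls[0].type === 'books';
```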


The purpose of tests is to give you confidence. Tests are the wall at your back that allows you to move forward confidently when making changes. Ease of understanding breeds confidence. Tests are not just a box to tick during development; they are there to help you. And more importantly, they are there to help FUTURE YOU.

Don’t change your tests in lock-step with your implementation!

You should be free to refactor the internals of a thing under test and use the unchanging tests to validate that it still works the way it did before. If you have to change your tests when you change implementation details (private functions), your tests are not adding value and just become a box to tick.
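As a sketch of that freedom (slugify and both implementations are invented for illustration): the same outcome-based check validates either version, so refactoring the internals never forces the test to change.

```javascript
// Version 1 of a hypothetical slugify.
function slugifyV1(s) {
  return s.toLowerCase().split(' ').join('-');
}

// Version 2: refactored internals, identical observable contract.
function slugifyV2(s) {
  return s.toLowerCase().replace(/ /g, '-');
}

// One unchanging expectation validates old and new implementations alike:
const sameBehavior =
  slugifyV1('Clean Tests Win') === 'clean-tests-win' &&
  slugifyV2('Clean Tests Win') === 'clean-tests-win';
```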

Mocking/Faking/Spying can help, but beware

Mocking is both a powerful tool and a dangerously seductive one. It can be easy to mock many things to get your test green, but that can fool you into believing your implementation works when really you’ve just over-mocked it to death. So how do you know what’s enough vs. too much?

  • If you have to change your tests in response to having changed internal private methods or structure of the file under test, you’ve probably over-mocked.
  • If you’re mocking a collaborator of a collaborator, you’ve probably over-mocked.
  • If your test is hitting I/O, you’ve under-mocked.
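A rough illustration of where that line sits (all names here are hypothetical): the unit’s own logic runs for real, and only the I/O collaborator is stubbed.

```javascript
// The unit under test; its only I/O happens through the injected httpGet.
function makeUserGreeter(httpGet) {
  return {
    greet(id) {
      const user = httpGet('/users/' + id); // the I/O boundary
      return 'Hello, ' + user.name + '!';
    }
  };
}

// In the test, the network is replaced by a stub; no real I/O occurs.
const stubHttpGet = function(url) { return { name: 'Ada' }; };
const greeting = makeUserGreeter(stubHttpGet).greet(7);
```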

Additional Resources

The following are great resources to expand on how to structure your tests.

BetterSpecs - Guidance for Ruby RSpec tests, but still applicable to other frameworks and languages:

TestDouble’s wiki on how to write good tests:


Tagged: testing tests unit testing mocking mocks test driven development tdd

Game Development and Application Development Best Practices - Not As Different As You Might Think

03 Jul 2017

You might expect game development methodologies and best practices to differ from how we develop other software (I did). But John Romero (Keen, Wolf3D, Doom, Quake, Daikatana, Deus Ex, etc.) just gave a talk last month where he covered the lessons id Software learned while developing their games in the early days. A lot of them are suspiciously similar to the agile/XP/iterative development methodologies we believe in:

The principles he outlined were:

  • “Polish as you go. Don’t depend on polish happening later. Always maintain constantly shippable code.”
  • “It’s incredibly important that your game can always be run by your team. Bulletproof your engine by providing defaults upon load failure”
  • “Keep your code absolutely simple. Keep looking at your functions and figure out how you can simplify further”.
  • “Great tools help make great games. Spend as much time on tools as possible.”
  • “We are our own beta testing team and should never allow anyone else to experience bugs or see the game crash. Don’t waste others time. Test thoroughly before checking in your code.”
  • “As soon as you see a bug, you fix it. Do not continue on. If you don’t fix your bugs your new code will be built on a buggy codebase and ensure an unstable foundation.”
  • “Use a development system (platform) that is superior to your target (platform).”
  • “Write your code for this game only - not for a future game. You’re going to be writing new code later because you’ll be smarter.”
  • “Encapsulate functionality to ensure design consistency. This minimizes mistakes and saves design time.”
  • “Try to code transparently. Tell your lead and peers exactly how you are going to solve your current task and get feedback and advice. Do not treat game programming like each coder is a black box. The project could go off the rails and cause delays.”
  • “Programming is a creative art form based in logic. Every programmer is different and will code differently. It’s the output that matters.”

Remind you of the Unix Philosophy?


Tagged: software craftsmanship encapsulation software engineering yagni continuous deployment continuous integration iterate agile agility

Naming is abstraction

24 Oct 2016

The point of choosing good names in software development is that it provides abstraction for others. Good names don’t exist for compilers, because the compiler isn’t going to care about names at all. Naming is not just some process invented by language designers to make you box up your code arbitrarily. A good name exists so that when you look at code I wrote (or I later look at code I wrote), the reader can understand at a glance what responsibilities it has and what it might be doing. I covered this in a previous post: Readable is not Elementary.

Let’s say someone on your team submits a code review to you and you open it up and find this method call:

manager.process(foo);

This isn’t a good name. What does process mean in this case? What are we doing with foo? What does manager manage? What does ‘managing’ even mean in this case?

The problem with the above code is that it fails to abstract the inner concepts away from humans. The practical implication of this poorly named method is that you and every other programmer who comes across it will have to dive inside it to understand what it’s doing. That’s a failed abstraction, because nothing got abstracted for you if you had to open it up. The method name wasn’t clear, so you don’t know what you’re getting into when you call it. It’s as if there were no point in putting the contents inside a method at all; they might as well have been inlined.

What if the above code instead looked like this:
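As a purely hypothetical illustration (these names are invented, not from the original example), a call that advertises its responsibility might read:

```javascript
// Invented names for illustration: the call site now says exactly what happens.
const invoiceEmailer = {
  sent: [],
  sendInvoiceEmailFor(order) { this.sent.push(order.customerEmail); }
};

invoiceEmailer.sendInvoiceEmailFor({ customerEmail: 'reader@example.com' });
const deliveredCount = invoiceEmailer.sent.length;
```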


Do you need to dive into the method to understand it now? No, not really, because the abstraction is clear. You don’t need to know the specifics of the internals of this method because the person who wrote it wanted to communicate to you what was going on inside of it by expressing a clear name. Time is saved because you now have less surface area of the code to think about and the implementation of that method is free to be refactored so long as it upholds the responsibilities that it advertises.


Tagged: software craftsmanship encapsulation abstraction naming

Unit Testing Principles

15 May 2016

Writing good unit tests is critical for any software you write. Over time I’ve accumulated a number of principles about unit testing. Here are some of them:


Write your tests first. It’s a hard habit to adopt if you haven’t been doing it, but it’s well worth it. Not only will you have test coverage to catch regressions, but more importantly you will drive cleaner API designs. When you write tests first, you are by definition designing a clean API, since your tests are its consumers. This helps lead to designs with low coupling, because you’ll be writing the consumer code at the same time as the implementation.
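A toy sketch of that dynamic (the Stack below is hypothetical): the consumer-side checks are imagined first, and the API’s shape falls out of what those checks need.

```javascript
// Implementation written to satisfy the consumer-side checks below.
class Stack {
  constructor() { this.items = []; }
  push(x) { this.items.push(x); }
  pop() { return this.items.pop(); }
  isEmpty() { return this.items.length === 0; }
}

// The "test", doubling as the API's very first consumer:
const stack = new Stack();
stack.push('a');
stack.push('b');
const popped = stack.pop();               // expect 'b'
const emptyAfterOnePop = stack.isEmpty(); // expect false
```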

How much is enough?

If you have doubts about whether your code will do what you intend, you haven’t written enough tests. There isn’t a hard rule about how much testing you should do. The purpose of testing is to build confidence: when you are not yet confident, keep going; when you feel confident, stop. Don’t test imaginary scenarios that don’t yet exist. When you fix a bug, write a test that demonstrates the bug first, so that when you fix it you can watch the test go green.
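A toy sketch of that bug-first flow (sumTo and its off-by-one bug are invented for illustration):

```javascript
// sumTo once read "i < n" (an off-by-one bug). The regression check below was
// written first, failed against the buggy version, and went green on the fix.
function sumTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) { // the fix: "<=" where the bug had "<"
    total += i;
  }
  return total;
}

// This returned 6 before the fix; it returns 10 (1+2+3+4) afterwards.
const regressionPasses = sumTo(4) === 10;
```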


How much isolation?

This is a contentious point. The tools at your disposal are fakes/stubs/mocks. Some believe you should 100% isolate the unit under test, while others think it’s OK to let some collaborators in so that you don’t fool yourself into thinking your code works. At the very least you shouldn’t be hitting I/O in tests. Things like databases, filesystems, and network calls should be stubbed out because they tend to be unreliable and slow. Tests that run quickly are crucial so that when you have thousands of them you get rapid feedback. Some frameworks (like Rails) insist on having the database involved in all your tests; it is possible to circumvent this, but it’s not conventional. Use your judgement; when you run into slowness or unreliability in unit tests, you’ve got to address it.
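For instance (names hypothetical), a database can be stood in for by an in-memory fake, keeping the test fast and deterministic:

```javascript
// In-memory fake standing in for a database table; no I/O involved.
class InMemoryStore {
  constructor() { this.rows = []; }
  insert(row) { this.rows.push(row); }
  count() { return this.rows.length; }
}

// The unit under test only sees the store's interface, never a real database.
class BookRepo {
  constructor(store) { this.store = store; }
  add(title) { this.store.insert({ title: title }); }
  size() { return this.store.count(); }
}

const repo = new BookRepo(new InMemoryStore());
repo.add('Refactoring');
const sizeAfterAdd = repo.size();
```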

Don’t worry about duplication so much

Keeping your code DRY is a good practice to have, but in tests it might not be as important. Your tests are there for 3 reasons:

  1. Prevent regressions
  2. Help drive designs
  3. Communicate to other developers how your code is consumed

That last point is super important. When other developers come to look at your code they will look at your tests first to see how to consume your API. Peers maintaining the code you wrote need to quickly understand how it works. If you extract a bunch of test helpers or de-duplicate setup code, it can often make it harder to get in and understand what’s happening without tracing through your helpers. So some duplication might be OK if it helps communicate clearly what each test is doing and expecting.

Don’t test libraries/frameworks

You should only be testing your code. If your unit has no logic and is just leaning on the framework for some advertised behavior, then the tests you write will be of fairly low value. Sometimes you will write tests against a framework as a means to understand the framework, and that’s fine, but once you understand a framework you should expect that it has its own tests and treat it like a black box. It’s reasonable to keep tests of this type if you know or suspect that your dependency is unstable or untested, but then again, if you know that, maybe it’s a bad dependency.

Heavy mocking is POISON.

I can’t stress this point enough. It’s also a very contentious issue in the community. Some people fall into the “classicist” camp of testing and some fall into the “mockist” camp. Mockist testing styles verify the internals of code by ensuring that specific method calls took place. An example in RSpec:

it "blends" do
  foo = Foo.new(bar)
  foo.blend
  expect(bar).to have_received(:wiggle)
end
A classicist would not test this. That’s not to say a classicist would necessarily leverage bar’s actual behavior (they might stub it out), but they wouldn’t verify that internal implementation details like specific method calls took place.

I’m personally firmly in the classicist camp because I’ve been bitten by mocks way too often. Verifying internal implementation behavior means that refactoring the internals of your implementation is guaranteed to break your tests, which makes those tests’ value very low, since they cannot validate your old and new code without themselves changing. Yes, you can update your mocks to match the implementation, but that’s tight coupling and cascading change you shouldn’t have to do just to refactor to a cleaner design.
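Sketched in JavaScript with hypothetical Foo/bar collaborators, the classicist alternative asserts on the observable outcome instead of on which methods were called:

```javascript
// Hypothetical collaborator.
class Bar {
  wiggle() { return 'wiggled'; }
}

// The unit under test.
class Foo {
  constructor(bar) { this.bar = bar; }
  blend() { return this.bar.wiggle() + '!'; }
}

// Classicist check: verify the result, leaving Foo free to reach it however
// it likes. Refactoring Foo's internals cannot break this test.
const result = new Foo(new Bar()).blend();
const outcomeVerified = result === 'wiggled!';
```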

There is a really good article by Martin Fowler that I tend to send people who want to learn more about this distinction:


Tagged: unit tests tdd mocking


08 Aug 2015

Interviewing is hard for both candidates and interviewers, but some of the difficulty can be avoided if we just learn from the experience. I recently went through the process again and want to share some things.

For many people the process of interviewing for a software developer role is stressful. But it can be less stressful if you re-frame the situation a bit. Many people view an interview as being on the receiving end of an interrogation. When you view it that way, the experience is one where you’re primarily concerned with how to defend and sell yourself and your abilities, to see if you measure up. And if you feel you don’t, that’s where the stress comes in. It’s partially confidence, but it’s also perspective.

An interview is about determining fit from both the company’s perspective and yours. If you interview for a role that isn’t a good match for you from the company’s perspective, it wouldn’t have been a good experience had you landed it. The same goes the other way around; you need to determine whether the company is somewhere you should work, and ask enough questions to ensure that it is.

So if you think of the interview process as a two-way street, where you are interviewing the company just as much as the company is interviewing you, you may find the experience far less stressful.

Preparation can also help immensely with keeping your stress levels down. If you don’t prepare for an interview you may not do well, and that can harm your confidence. Interviewing is usually done in a way that’s very unlike a day-to-day software development job. This is unfortunately common in our field, but we have to be able to deal with it, since most companies still haven’t figured out the best way to do interviews. So you need to practice your interviewing ability: brush up a bit and exercise those CS muscles you haven’t used in a while.

Some things you can do are:

If you have a software developer friend, it can also help to have them run a mock interview with you. How you prepare depends a lot on where you plan on interviewing; for example, some places use whiteboarding while others have you code on an actual machine. Do some research and prepare. Who knows, maybe the way a company interviews is also a signal about what kind of place it will be to work for?

Software development interviewing is all over the board in terms of quality and style. We’re extremely bad at interviewing in this industry. This is well known, but many companies have simply accepted it as un-fixable. Some companies have learned and made improvements that seem to be yielding better results, but it’s very hit-and-miss.


Whiteboards

When people think of software development interviews they naturally think of whiteboards, since those have been the staple for decades. From an outsider’s perspective this might seem a bit odd, since writing code by hand on a whiteboard has very little in common with day-to-day software development at a job. It’s true that whiteboards are used for hallway discussions, diagrams, etc., but when it comes to code, a text editor is used. So whiteboards are very controversial.

Pair Programming

In my opinion the highest signal-to-noise ratio for an interview can be achieved by pair programming between interviewer and candidate. The benefits are numerous:

  1. The interviewer gets to see the candidate write actual for-reals code.
  2. You can do TDD exercises that actually run and show you how the interviewer/candidate tests.
  3. Sitting with the candidate can reveal to the interviewer how well they work with others, take feedback, allow for give-and-take, etc.
  4. It’s generally lower stress than a whiteboard since it’s more familiar to type than scrawl with a marker.

A few months ago I was the candidate in such a situation, and I can tell you of a fifth, and possibly more important, benefit of this style of interview, one that’s harder to quantify: I had fun. I think the reason was that I felt like I could more accurately represent myself and my abilities in this mode. Maybe that’s just me, I’m not sure.


Refactoring Exercises

Another option is to give the candidate some really bad code and see what they do to clean it up. You can gauge whether they break things down into smaller pieces, standardize the code to a style guide, make iterative step-by-step improvements guided by tests, etc. There’s a lot to learn about a candidate from this style, and it can be as open-ended or as specific as you need. When doing this or the pair-programming style, it’s a good idea to constrain the exercise to a relatively small amount of code; perhaps a class or two, or a handful of methods. You can also mix this style with pair programming for a pair-refactoring of sorts.


Code Walkthroughs

What has the candidate done? It’s pretty common to ask this question in terms of what projects the candidate has worked on at past jobs or university, but why not have them show you? If they have a GitHub account, you can give them a machine and a projector and have them point you at some code they are proud of and walk you through it. If they don’t have a GitHub account, you can have them show you a product that they either built or supported and talk about that. The point here is that you can gauge whether they are proud of work they’ve done, whether they can explain it to another human at the appropriate level of abstraction (not too in-the-weeds and not too high-level), and whether they can get excited about the product or service they work on. This can be as open-ended as it needs to be, and you can guide it however you feel is best for what you’re looking for in a candidate.

Let Them Ask

Interviewers sometimes act like a noble gas. What I mean by that is that some interviewers make the mistake of pelting a candidate with enough questions and discussion to fill all the space available. This doesn’t let the candidate gauge your company very well, because they never get a chance to ask you things. Give them some time to ask about your company. And if they don’t ask you anything, that might be a red flag, which is an indicator you need to know about. Good candidates will be very curious about how your team works, how you develop software, etc.

We’re Getting Better

I’ve seen a dramatic shift over the past 5 years which seems to indicate that interviews in this industry are getting better. We still have a long way to go but maybe you can help by taking some of these strategies to your company or your interviews with companies.


Tagged: interviews interviewing software engineering soft skills

2017 Ben Lakey

The words here do not reflect those of my employer.