Naming is abstraction

24 Oct 2016

The point of choosing good names in software development is to provide abstraction for other humans. Good names don't exist for the compiler; the compiler doesn't care about names at all. Naming is not just some process invented by language designers to make you box up your code arbitrarily. A good name exists so that when you look at code I wrote (or I later look at code I wrote), you can understand at a glance what responsibilities it has and what it might be doing. I covered this in a previous post: Readable is not Elementary.

Let’s say someone on your team submits a code review to you and you open it up and find this method call:

manager.process(foo)

This isn’t a good name. What does process mean in this case? What are we doing with foo? What does manager manage? What does ‘managing’ even mean in this case?

The problem with the above code is that it fails to abstract the inner workings away from humans. The practical implication of this poorly named method is that you and every other programmer who comes across it will have to dive inside it to understand what it's doing. That's a failed abstraction, because nothing got abstracted for you if you had to open it up. The method name wasn't clear, so you don't know what you're getting into when you call it. It's as if there were no point in putting its contents inside a method at all; they may as well have been inlined at the call site.

What if the above code instead looked like this:

investors.distribute_funds_equally(money)

Do you need to dive into the method to understand it now? No, not really, because the abstraction is clear. You don't need to know the specifics of its internals, because the person who wrote it communicated what goes on inside by choosing a clear name. Time is saved because you now have less surface area of the code to think about, and the implementation of that method is free to be refactored so long as it upholds the responsibilities its name advertises.
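
To make that concrete, here is a minimal sketch (the class and its internals are hypothetical) of what could live behind the well-named method. Because the name does the communicating, callers never need to read this:

class Investors
  def initialize(investors)
    @investors = investors
  end

  # The name carries the abstraction; the arithmetic stays hidden from callers.
  def distribute_funds_equally(money)
    share = money / @investors.size
    @investors.each { |investor| investor.deposit(share) }
  end
end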


Tagged: software craftsmanship encapsulation abstraction naming

Unit Testing Principles

15 May 2016

Writing good unit tests is critical for any software you write. Over time I’ve accumulated a number of principles about unit testing. Here are some of them:

TDD

Write your tests first. It's a hard habit to adopt if you haven't been doing it, but it's well worth it. Not only will you have test coverage to catch regressions, but more importantly you will drive cleaner API designs. When you write tests first you are, by definition, designing a clean API, since your tests are its first consumers. This helps lead to designs with low coupling because you're writing the consumer code at the same time as the implementation.
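
As a small illustration of the test-first flow (the Cart class and its API here are hypothetical), the spec is written before the implementation exists and drives its design:

# Written first; it fails until Cart exists and behaves as designed.
RSpec.describe "Cart" do
  it "totals the prices of its items" do
    cart = Cart.new
    cart.add(price: 5)
    cart.add(price: 7)
    expect(cart.total).to eq(12)
  end
end

# Just enough implementation to make the spec pass.
class Cart
  def initialize
    @items = []
  end

  def add(item)
    @items << item
  end

  def total
    @items.reduce(0) { |sum, item| sum + item[:price] }
  end
end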

How much is enough?

If you have doubts about whether your code will do what you intend, you haven’t written enough tests. There isn’t a hard rule about how much testing you should do. The purpose of testing is to build confidence. When you are not yet confident, keep going. When you feel confident, stop. Don’t test for imaginary scenarios that don’t yet exist. When you fix a bug, make sure you write a test to demonstrate the bug first, so that when you fix the bug you can watch it go green.
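
A hedged sketch of that bug-first flow (normalize_name is a made-up function): write a spec that reproduces the bug and fails, then make the fix and watch it go green:

# Fails while the bug is present; passes once the fix lands.
it "strips trailing whitespace from names" do
  expect(normalize_name("Ada ")).to eq("Ada")
end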

Isolation

This is a contentious point. The tools at your disposal are fakes/stubs/mocks. Some believe you should 100% isolate the unit under test, while others think it's OK to let some collaborators in so that you don't fool yourself into thinking your code works. At the very least you shouldn't be hitting I/O in tests. Things like databases, filesystems, and network calls should be stubbed out because they tend to be unreliable and slow. Tests that run quickly are crucial so that when you have thousands of them you still get rapid feedback. Some frameworks (like Rails) insist on having the database involved in all your tests; it is possible to circumvent this, but it's not conventional. Use your judgement; when you run into slowness or unreliability in unit tests, address it.
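
For example (the names here are hypothetical), an RSpec double can stand in for an HTTP-backed collaborator so the test never touches the network:

RSpec.describe "ReportBuilder" do
  it "summarizes sales without hitting the real API" do
    # A stand-in for the HTTP client; no network call is made.
    sales_api = double("SalesApi", fetch_totals: [100, 250, 50])

    report = ReportBuilder.new(sales_api).build

    expect(report.grand_total).to eq(400)
  end
end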

Don’t worry about duplication so much

Keeping your code DRY is a good practice, but in tests it might not be as important. Your tests are there for three reasons:

  1. Prevent regressions
  2. Help drive designs
  3. Communicate to other developers how your code is consumed

That last point is super important. When other developers come to look at your code, they are going to look at your tests first to see how to consume your API. When peers are maintaining the code you write, they need to quickly understand how it works. If you extract a bunch of test helpers or de-duplicate setup code, it can often become harder to get in and understand what's happening without tracing through your helpers. So some duplication might be OK if it helps communicate clearly what each test is doing and expecting.

Don’t test libraries/frameworks

You should only be testing your code. If your unit has no logic and just leans on the framework for some advertised behavior, then the tests you write are going to be of fairly low value. Sometimes you will write tests against a framework as a means of understanding it, and that's fine, but once you understand a framework you should expect that it has its own tests and treat it like a black box. It's reasonable to keep tests of this type if you know or suspect that your dependency is unstable or untested, but then again, if you know that, maybe it's a bad dependency.

Heavy mocking is POISON.

I can't stress this point enough. It's also a very contentious issue in the community. Some people fall into the "classicist" camp of testing and some fall into the "mockist" camp. Mockist testing styles verify the internals of code by ensuring that specific method calls took place. An example in RSpec:

it "blends" do
  foo = Foo.new(bar)
  foo.frobnicate
  expect(bar).to have_received(:wiggle)
end

A classicist would not test this way. That's not to say a classicist would necessarily use bar's real behavior; they might stub it out, but they wouldn't verify internal implementation details like which method calls took place.
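
For contrast, a classicist version of the same example (still a sketch; the return value is assumed) might stub bar but assert on the observable result rather than on which methods were called:

it "blends" do
  bar = double("bar", wiggle: :wiggled)  # stubbed collaborator, never verified
  foo = Foo.new(bar)

  # Assert on the outcome, not on the internal method calls.
  expect(foo.frobnicate).to eq(:blended)
end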

I'm personally firmly in the classicist camp because I've been bitten by mocks way too much in my day. Verifying internal implementation details means that refactoring the internals of your implementation is guaranteed to break your tests, which makes the value of those tests very low, since they cannot verify your old and new code without themselves changing. Yes, you can update your mocks to match the implementation, but that's tight coupling and cascading change that you shouldn't have to do just to refactor to a cleaner design.

There is a really good article by Martin Fowler, Mocks Aren't Stubs, that I tend to send people when they want to learn more about this distinction.


Tagged: unit tests tdd mocking

Interviewing

08 Aug 2015

Interviewing is hard for both candidates and interviewers, but some of the difficulty can be avoided if we learn from the experience. I recently went through the process again and want to share some things.

For many people the process of interviewing for a software developer role is stressful. But it can be less stressful if you re-frame the situation a bit. Many people view an interview as being on the receiving end of an interrogation. When you view it that way the experience is one where you’re primarily concerned with how to defend/sell yourself and your abilities, to see if you measure up. And if you feel you don’t that’s where the stress comes in. It’s partially confidence, but it’s also perspective.

An interview is about determining fit from both the company's perspective and yours. If a role isn't a good match for you from the company's perspective, landing it wouldn't have been a good experience anyway. The same goes the other way around; you need to determine whether the company is somewhere you should work, and ask enough questions to make sure it is.

So if you think of the interview process as a two-way street, where you are interviewing the company just as much as the company is interviewing you, you may find the experience far less stressful.

Preparation can also help immensely with keeping your stress levels down. If you don't prepare for an interview you may not do well, and that can harm your confidence. Interviewing is usually done in a way that's very unlike a day-to-day software development job. That's unfortunately common in our field, but we have to be able to deal with it, since most companies still haven't figured out the best way to do interviews. So you need to practice your interviewing ability. Brush up a bit and exercise those CS muscles that you haven't used in a while.

Some things you can do are:

If you have a software developer friend, it can also help to have them run a mock interview with you. How you prepare depends a lot on where you plan to interview; for example, some places use whiteboarding while others have you code on an actual machine. Do some research and prepare; who knows, maybe the way a company interviews is also a signal about what kind of place it will be to work for?

Software development interviewing is all over the board in terms of quality and style. We're extremely bad at interviewing in this industry. This is well known, but many companies have simply accepted it as un-fixable. Some companies have learned and made improvements that seem to be yielding better results, but it's very hit-and-miss.

Whiteboard

When people think of software development interviews they naturally think of whiteboards, since they've been the staple for decades. From an outsider's perspective this might seem a bit odd, since writing code by hand on a whiteboard has very little in common with day-to-day software development at a job. It's true that whiteboards are used for hallway discussions, diagrams, and so on, but when it comes to code, a text editor is used. So whiteboards are very controversial.

Pair Programming

In my opinion the highest signal-to-noise ratio for an interview can be achieved by pair programming between interviewer and candidate. The benefits are numerous:

  1. The interviewer gets to see the candidate write actual for-reals code.
  2. You can do TDD exercises that actually run and show you how the interviewer/candidate tests.
  3. Sitting with the candidate can reveal to the interviewer how well they work with others, take feedback, allow for give-and-take, etc.
  4. It’s generally lower stress than a whiteboard since it’s more familiar to type than scrawl with a marker.

A few months ago I was the candidate in such a situation, and I can tell you of a fifth and possibly more important benefit of this style of interview, one that's harder to quantify: I had fun. I think the reason was that I felt I could more accurately represent myself and my abilities in this mode. Maybe that's just me, I'm not sure.

Refactor

Another option is to give the candidate some really bad code and see what they do to clean it up. You can gauge whether they break things down into smaller pieces, standardize the code to a style guide, make iterative step-by-step improvements guided by tests, and so on. There's a lot to learn about a candidate from this style, and it can be as open-ended or as specific as you need. When doing this or the pair-programming style, it's a good idea to constrain the exercise to a relatively small amount of code; perhaps a class or two, or a handful of methods. You can also mix this style with pair programming for a pair-refactoring of sorts.

Show-and-Tell

What has the candidate done? It's pretty common to ask this in terms of what projects the candidate worked on at past jobs or at university, but why not have them show you? If they have a GitHub account, you can give them a machine and a projector and have them point you at some code they are proud of and walk you through it. If they don't have a GitHub account, you can have them show you a product they either built or supported and talk about that. The point here is that you can gauge whether they are proud of work they've done, whether they can explain it to another human at the appropriate level of abstraction (not too in-the-weeds and not too high-level), and whether they can get excited about the product or service they work on. This can be as open-ended as it needs to be, and you can guide it however you feel is best for what you're looking for in a candidate.

Let Them Ask

Interviewers sometimes act like a noble gas. What I mean by that is that some interviewers make the mistake of pelting a candidate with enough questions and discussion to fill the space available. This doesn’t let the candidate gauge you as a company very well because they don’t get a chance to ask you things. Let them have some time to ask you about your company. And if they don’t ask you anything that might be a red flag, which is an indicator you need to know about. Good candidates will be very curious about how your team works, how you develop software, etc.

We’re Getting Better

I’ve seen a dramatic shift over the past 5 years which seems to indicate that interviews in this industry are getting better. We still have a long way to go but maybe you can help by taking some of these strategies to your company or your interviews with companies.


Tagged: interviews interviewing software engineering soft skills

Chasing the New Shiny

20 May 2015

There have been a few conference talks and blog posts recently that have touched on this idea that using simple, well-understood technology is preferable to using unproven new shiny tech. I couldn’t agree more.

“Standard is better than better.”

John Hyland

There are lots of reasons to use well-understood tech including:

  • Stability
  • Talent pool
  • Integration options
  • Abundant Resources

With well-understood technology the rough edges and bugs have already been worked out, which saves you from having to invest time and money running up against those things first. Others have already paid the cost and smoothed the way ahead.

On the other hand if you use the New Shiny then the downsides haven’t been explored, the rough edges will be yours to figure out, and the best practices don’t really exist yet. And if you add too much of this new tech you might start to feel like Bilbo Baggins:

“I feel thin, sort of stretched, like butter scraped over too much bread.” - Bilbo Baggins

New tech is expensive and distracts you from what you really need to be doing, which is solving your business problem instead of solving technology problems (unless that is your business). The alternative is to not ship, and possibly never get your idea off the ground, while you constantly chase the new shiny.

One community that seems to struggle the most with this is JavaScript. New frameworks pop up daily, often with only marginal improvements or differences from what's already out there (if any at all). The number of JavaScript web frameworks has erupted so vigorously that there is an entire site dedicated to sifting through the different options. This explosion is creating mountains of code that ultimately must be maintained, and I wonder about the cost to the industry as a whole. Perhaps we will look back on this era of software development with amusement (and shame?) years from now.

If the idea of using simple well-understood tech resonates with you then you may be interested in John Hyland’s talk from Cascadia Ruby last year: Be Awesome By Being Boring. And more recently, Dan McKinley wrote about this topic as well in his post ‘Choose Boring Technology’.


Tagged: architecture complexity maintainability software engineering yagni

Unix Philosophy

12 Apr 2015

I believe the Unix Philosophy is one of the most overused yet least understood sets of concepts in software development. But it's also just interesting to think about and revisit periodically.

1. Rule of Modularity: Write simple parts connected by clean interfaces.

This is really all about abstraction and separation of concerns. A part of a system is easier to understand on its own than the entire system is at once. And those parts are easiest to understand and maintain when they have tight cohesion around a single concept. It's also natural and beneficial for a system appropriately broken down into small parts to have small interfaces. The abstraction should expose as few details as possible so that the consumer can focus on the broader system instead of the details of one of its parts.

2. Rule of Clarity: Clarity is better than cleverness.

I would argue this is the most important yet least practiced of all the rules on this page. Too often developers write code that is fancy rather than communicative. This is insidious as well because the pain of writing fancy code is not immediately felt since maintenance comes later. When the system needs to be extended, repaired, or otherwise modified it needs to be completely clear what it does otherwise it becomes a time/money sink. Your most important job as a software developer is to write code that other developers can maintain with ease. Write code for humans, not compilers.
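
A contrived Ruby illustration of the difference: both versions compute the same thing, but only one reads clearly at a glance:

# Clever: works, but the reader has to simulate it in their head.
def price(quantity)
  quantity > 99 ? quantity * 0.8 : quantity > 9 ? quantity * 0.9 : quantity
end

# Clear: the same rules, stated so the intent is obvious.
BULK_DISCOUNT = 0.8
VOLUME_DISCOUNT = 0.9

def discounted_price(quantity)
  return quantity * BULK_DISCOUNT if quantity > 99
  return quantity * VOLUME_DISCOUNT if quantity > 9
  quantity
end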

3. Rule of Composition: Design programs to be connected to other programs.

The general idea is that if software can be re-used, it can be re-connected to other software and therefore provide long-term dividends beyond the original project. We have to be careful with this one, though, because it can quickly lead to premature optimization or gold plating.

4. Rule of Separation: Separate policy from mechanism; separate interfaces from engines.

How a thing is accomplished should be interchangeable. A basic example of this is the strategy pattern, where logic is pluggable. Another example is that the same system might be usable via a web application, a web API, a command line interface, and so on. This again touches on separation of concerns, since you must have modularity before you can vary a system's parts.
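
Here is a minimal strategy-pattern sketch (hypothetical names): the shipping-cost mechanism is pluggable, and the policy that picks a strategy can live somewhere else entirely:

# Interchangeable mechanisms that share the same interface.
class FlatRateShipping
  def cost(_order)
    5.0
  end
end

class WeightBasedShipping
  def cost(order)
    order.weight * 0.75
  end
end

# The checkout neither knows nor cares which strategy it was handed.
class Checkout
  def initialize(shipping_strategy)
    @shipping_strategy = shipping_strategy
  end

  def shipping_cost(order)
    @shipping_strategy.cost(order)
  end
end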

5. Rule of Simplicity: Design for simplicity; add complexity only where you must.

Get it working. Don't gold plate. Don't prematurely optimize. Only make performance optimizations when you have proven that a performance problem exists and the system needs to be more performant. Don't generalize until your specific problem has been solved and a more general approach would be useful to multiple parts of the system or to other systems. Prove that complexity would be advantageous before sacrificing simplicity. Remember Rule #2 and write code for humans first, compilers second.

6. Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.

Really this is just Rule #5 (and #2) restated. Do the simplest thing that could possibly work. Have measurements that indicate adding code would be beneficial. Remember that every line of code added is a line that must be maintained, and weigh that cost against the theoretical positives of adding more code.

7. Rule of Transparency: Design for visibility to make inspection and debugging easier.

This is Rule #2 again. Make it so that when things go wrong, very little inspection is required to know what happened. Keep inheritance hierarchies shallow so that behaviors can be tracked down easily. Break up large things into smaller things.

8. Rule of Robustness: Robustness is the child of transparency and simplicity.

When things can be easily understood and composed into larger things, the result is a system that can do more without sacrificing the qualities that made it composable in the first place. Lego bricks can be built into greater things in part because each piece composes easily with the others.

9. Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.

Let's face it, programming is hard. When given a choice between making data complex and making logic complex, we should always choose the data, because complex data is far easier to inspect and reason about than complex logic, and it's the logic that future developers must maintain and understand.
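
One contrived way this plays out: put the knowledge in a lookup table so the logic is just a fetch:

# Knowledge lives in data...
SHIPPING_DAYS = {
  "standard"  => 5,
  "expedited" => 2,
  "overnight" => 1
}.freeze

# ...so the logic can stay stupid and robust.
def shipping_days(method)
  SHIPPING_DAYS.fetch(method) { raise ArgumentError, "unknown shipping method: #{method}" }
end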

10. Rule of Least Surprise: In interface design, always do the least surprising thing.

This is sort of like defensive driving for computer programming. Make it obvious what your code does by thinking carefully about your APIs. Ideally, consumers of your code should be utterly bored by how expected everything is. Your code should fade into the background and let consumers focus on the problems they are trying to solve.

11. Rule of Silence: When a program has nothing surprising to say, it should say nothing.

Are status messages or other outputs necessary for a user? Is the information you are giving to a user useful or actionable? If it isn’t then it’s just noise to them and you shouldn’t concern them with it.

12. Rule of Repair: When you must fail, fail noisily and as soon as possible.

Nothing is worse than a system that goes wrong but brushes the problem under the rug for a while. Doing this makes it very difficult to find the problem, since the root cause may be several levels removed from where the failure finally surfaces. Failing noisily makes us aware of the problem. Failing as soon as possible makes localizing it easy.
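
A small Ruby sketch of the difference (a hypothetical config loader): the quiet version hands a nil downstream, while the noisy version reports the problem at its source:

require "yaml"

# Quiet failure: the nil gets smuggled downstream and blows up far away.
def load_config(path)
  File.exist?(path) ? YAML.load_file(path) : nil
end

# Noisy, early failure: the problem is reported where it happened.
def load_config!(path)
  raise ArgumentError, "missing config file: #{path}" unless File.exist?(path)
  YAML.load_file(path)
end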

13. Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.

This is rule #2 and #5 again. Your fancy code might make a system slightly more performant or flexible, but at what cost? If it costs future developers too much time to understand, then you have a net negative. Developers tend to underestimate the maintenance costs of the software they write, and how many future developers will run into their code.

14. Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.

Be lazy. The first time you have to do something by hand might not be too bad but if you have to do it again you should automate it. Twice is evidence that the likelihood of you or someone else having to perform this manual work again is high.

15. Rule of Optimization: Prototype before polishing. Get it working before you optimize it.

If you spend time making your software whiz-bang awesome, performant, or generalized before you understand the problem and get it working at a basic level, then you're really just wasting time. Doing these things early is dangerous because the shape of the problem, or your understanding of it, may change greatly as you explore the problem space, and therefore all that extra code just adds time and complexity as you evolve the system.

16. Rule of Diversity: Distrust all claims for “one true way”.

In software change is inevitable and nothing is one-size-fits-all. What solves your problem might not solve mine, and even if it does it may only be appropriate right now. In an industry as young as software we are still trying to figure things out (languages, patterns, practices). Saying you know the best way or tool is shutting off your search for even better ways/tools. This isn’t even really software specific and has wider applicability to life or science of any kind.

17. Rule of Extensibility: Design for the future, because it will be here sooner than you think.

Don’t misunderstand this one; it isn’t about premature optimization or overgeneralizing. It’s about leaving doors open and not painting yourself into a corner. You don’t need to go all the way towards making something generalized or reusable; you just need to make it easy to do that down the road.


Tagged: unix software engineering software craftsmanship patterns

2016 Ben Lakey

The words here do not reflect those of my employer.