Naming is abstraction

29 Apr 2016

The point of choosing good names in software development is to provide abstraction for other humans. Good names don’t exist for compilers, because the compiler isn’t going to care about names at all. Naming is not just some process invented by language designers to make you box up your code arbitrarily. A good name exists so that when you look at code that I wrote (or when I later look at code I wrote) you can understand at a glance what responsibilities it has and what it might be doing. I covered this in a previous post: Readable is not Elementary.

Let’s say someone on your team submits a code review to you and you open it up and find this method call:

manager.process(foo)

This isn’t a good name. What does process mean in this case? What are we doing with foo? What does manager manage? What does ‘managing’ even mean in this case?

The problem with the above code is that it failed to abstract the inner concepts away from humans. The practical implication of this poorly named method is that you and every other programmer who comes across it will have to dive inside it to understand what it’s doing. That’s a failed abstraction, because nothing got abstracted for you if you had to open it up. The method name wasn’t clear, so you don’t know what you’re getting into when you call it. It’s as if there was no point in putting its contents inside a method at all; they might as well have been inlined.

What if the above code instead looked like this:

investors.distribute_funds_equally(money)

Do you need to dive into the method to understand it now? No, not really, because the abstraction is clear. You don’t need to know the specifics of the internals of this method because the person who wrote it wanted to communicate to you what was going on inside of it by expressing a clear name. Time is saved because you now have less surface area of the code to think about and the implementation of that method is free to be refactored so long as it upholds the responsibilities that it advertises.
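
To make that concrete, here is one hypothetical sketch of what could live behind that name (the Investors class and the credit method are invented for illustration, not taken from any real codebase):

class Investors
  def initialize(investors)
    @investors = investors
  end

  # Splits the amount evenly and credits each investor's account.
  # Rounding, ordering, and persistence details can all change later
  # without breaking callers, because the name states the contract.
  def distribute_funds_equally(money)
    share = money / @investors.size
    @investors.each { |investor| investor.credit(share) }
  end
end

Everything inside that method is free to change; the name is the only promise callers depend on.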


Tagged: software craftsmanship encapsulation abstraction naming

Interviewing

08 Aug 2015

Interviewing is hard for both candidates and interviewers, but some of the difficulty can be avoided if we learn from the experience. I recently went through the process again and I want to share some things.

For many people the process of interviewing for a software developer role is stressful. But it can be less stressful if you re-frame the situation a bit. Many candidates view an interview as being on the receiving end of an interrogation. When you view it that way, the experience becomes one where you’re primarily concerned with how to defend and sell yourself and your abilities, to see if you measure up. And if you feel you don’t measure up, that’s where the stress comes in. It’s partially confidence, but it’s also perspective.

An interview is about determining fit from both the company’s perspective as well as yours. If you interview for a role that isn’t a good match for you from the company’s perspective, then it wouldn’t have been a good experience had you landed it. The same goes the other way round: you need to determine whether the company is somewhere you should work, and ask enough questions to ensure that it is.

So if you think of the interview process as a two-way street, where you are interviewing the company just as much as the company is interviewing you, then you may find the experience far less stressful.

Preparation can also help immensely with keeping your stress levels down. If you don’t prepare for an interview you may not do well, and that can harm your confidence. Interviewing is usually done in a way that’s very unlike day-to-day software development work. This is unfortunately common in our field, but we have to be able to deal with it, since most companies still haven’t figured out the best way to do interviews. So you need to practice your interviewing ability. Brush up a bit and exercise those CS muscles that you haven’t used in a while.

There are a few things you can do to prepare.

If you have a software developer friend, it can also sometimes help to have them perform a mock interview on you. How you prepare depends a lot on where you plan on interviewing; for example, some places use whiteboarding while others have you code on an actual machine. Do some research and prepare. Who knows, maybe the way a company interviews is also a signal about what kind of place it will be to work for?

Software development interviewing is all over the board in terms of quality and style. We’re extremely bad at interviewing in this industry. This is well known, but for many companies it has simply been accepted as un-fixable. Some companies have learned and have made improvements that seem to be yielding better results, but it’s very hit-and-miss.

Whiteboard

When people think of software development interviews they naturally think of whiteboards, since whiteboards have been the staple for decades. From an outsider’s perspective this might seem a bit odd, since writing code by hand on a whiteboard has very little in common with the day-to-day software development activities at a job. It’s true that whiteboards are used for hallway discussions, diagrams, etc, but when it comes to code a text editor is used. As a result, whiteboards are controversial.

Pair Programming

In my opinion the highest signal-to-noise ratio for an interview can be achieved by pair programming between interviewer and candidate. The benefits are numerous:

  1. The interviewer gets to see the candidate write actual for-reals code.
  2. You can do TDD exercises that actually run and show you how the interviewer/candidate tests.
  3. Sitting with the candidate can reveal to the interviewer how well they work with others, take feedback, allow for give-and-take, etc.
  4. It’s generally lower stress than a whiteboard since it’s more familiar to type than scrawl with a marker.

A few months ago I was the candidate in such a situation and I can tell you about a fifth, and possibly more important, benefit of this style of interview, one that’s harder to quantify: I had fun. I think the reason was that I felt like I could more accurately represent myself and my abilities in this mode. Maybe that’s just me, I’m not sure.

Refactor

Another option is to give the candidate some really bad code and see what they do to clean it up. You can gauge whether they break things down into smaller pieces, standardize the code to a style guide, make iterative step-by-step improvements guided by tests, etc. There’s a lot to learn about a candidate from this style, and it can be as open-ended or as specific as you need. When doing this or the pair-programming style it’s a good idea to constrain the exercise to a relatively small amount of code; perhaps a class or two, or maybe a handful of methods. You can also mix this style with pair programming for a pair-refactoring of sorts.
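
For a sense of scale, here’s the kind of small, intentionally messy method you might hand a candidate (a made-up Ruby example, not from any real interview):

# Deliberately messy on purpose: a vague name, magic numbers, and
# untested tax logic. Watch what the candidate extracts and names.
def calc(items)
  t = 0
  items.each do |i|
    if i[:type] == "book"
      t += i[:price] * 0.9   # books get 10% off
    else
      t += i[:price]
    end
  end
  t + (t * 0.08)             # add tax
end

A good candidate will usually reach for a test first, then start pulling out named concepts like discounts and tax rates.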

Show-and-Tell

What has the candidate done? It’s pretty common to ask this question in terms of what projects the candidate has worked on at past jobs or university, but why not have them show you? If they have a GitHub account you can give them a machine and a projector and have them point you at some code they are proud of and walk you through it. If they don’t have a GitHub account you can have them show you a product that they either built or supported and talk about that. The point here is that you can gauge whether they are proud of work they’ve done, whether they can explain it to another human at the appropriate level of abstraction (not too in-the-weeds and not too high-level), and whether they can get excited about the product or service that they work on. This can be as open-ended as it needs to be and you can guide it however you feel is best for what you’re looking for in a candidate.

Let Them Ask

Interviewers sometimes act like a gas, expanding to fill all the available space. What I mean is that some interviewers make the mistake of pelting a candidate with enough questions and discussion to fill the entire slot. This doesn’t let the candidate gauge you as a company very well, because they never get a chance to ask you things. Let them have some time to ask you about your company. And if they don’t ask you anything, that itself might be a red flag, which is an indicator you need to know about. Good candidates will be very curious about how your team works, how you develop software, etc.

We’re Getting Better

I’ve seen a dramatic shift over the past 5 years which seems to indicate that interviews in this industry are getting better. We still have a long way to go but maybe you can help by taking some of these strategies to your company or your interviews with companies.


Tagged: interviews interviewing software engineering soft skills

Chasing the New Shiny

20 May 2015

There have been a few conference talks and blog posts recently that have touched on this idea that using simple, well-understood technology is preferable to using unproven new shiny tech. I couldn’t agree more.

“Standard is better than better.”

John Hyland

There are lots of reasons to use well-understood tech including:

  • Stability
  • Talent pool
  • Integration options
  • Abundant resources

With well-understood technology the rough edges and bugs have already been worked out, which saves you from having to invest time and money running up against those things first. Others have already paid the cost and smoothed the way ahead.

On the other hand if you use the New Shiny then the downsides haven’t been explored, the rough edges will be yours to figure out, and the best practices don’t really exist yet. And if you add too much of this new tech you might start to feel like Bilbo Baggins:

“I feel thin, sort of stretched, like butter scraped over too much bread.” - Bilbo Baggins

New tech is expensive and distracts you from what you really need to be doing, which is solving your business problem rather than solving technology problems (unless that is your business). The alternative is to never ship, and possibly never get your idea off the ground while you constantly chase the new shiny tech.

One community that seems to struggle the most with this is JavaScript. New frameworks pop up daily, often with only marginal improvements or differences over what’s already out there (if any at all). The number of web frameworks in JavaScript has erupted so vigorously that there is an entire site dedicated to sifting through the different options. This explosion is creating mountains of code that ultimately must be maintained, and I wonder about the cost to the industry as a whole. Perhaps we will look back on this era of software development with amusement (and shame?) years from now.

If the idea of using simple well-understood tech resonates with you then you may be interested in John Hyland’s talk from Cascadia Ruby last year: Be Awesome By Being Boring. And more recently, Dan McKinley wrote about this topic as well in his post ‘Choose Boring Technology’.


Tagged: architecture complexity maintainability software engineering yagni

Unix Philosophy

12 Apr 2015

I believe the Unix Philosophy is one of the most overused yet least understood sets of concepts in software development. But it’s also just interesting to think about and revisit periodically.

1. Rule of Modularity: Write simple parts connected by clean interfaces.

This is really all about abstraction and separation of concerns. A part of a system is easier to understand on its own than the entire system is all at once. And those parts are easiest to understand and maintain if they have tight cohesion around a single concept. It’s also natural and beneficial for a system appropriately broken down into small parts to have small interfaces. The abstraction should expose as few details as possible so that the consumer can focus on the broader system instead of the details of one of its parts.
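
As a rough Ruby sketch of the idea (the names here are invented), a small part with a deliberately narrow interface might look like this:

# A small part with one public method; the details stay hidden.
class WeatherClient
  def initialize(http)
    @http = http   # any object that responds to #get
  end

  # The only thing consumers need to know about.
  def temperature_for(city)
    response = @http.get("/weather", city: city)
    parse_temperature(response)
  end

  private

  # Parsing details are free to change without affecting callers.
  def parse_temperature(response)
    response[:temp_c]
  end
end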

2. Rule of Clarity: Clarity is better than cleverness.

I would argue this is the most important yet least practiced of all the rules on this page. Too often developers write code that is fancy rather than communicative. This is insidious as well because the pain of writing fancy code is not immediately felt since maintenance comes later. When the system needs to be extended, repaired, or otherwise modified it needs to be completely clear what it does otherwise it becomes a time/money sink. Your most important job as a software developer is to write code that other developers can maintain with ease. Write code for humans, not compilers.
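
A contrived Ruby example of the difference, with both versions doing the same thing:

# "Clever": compact, but the reader has to decode the intent.
def f(u)
  u.select { |x| x[:age] >= 18 }.map { |x| x[:name] }
end

# Clear: the same behavior, with the intent named.
ADULT_AGE = 18

def adult_names(users)
  users.select { |user| user[:age] >= ADULT_AGE }
       .map    { |user| user[:name] }
end

adult_names([{ name: "Ada", age: 36 }, { name: "Sam", age: 12 }])  # => ["Ada"]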

3. Rule of Composition: Design programs to be connected to other programs.

The general idea is that if software can be re-used then it can be re-connected to other software and therefore provide long-term dividends beyond the original project. We have to be careful with this one though, because it can quickly lead to premature optimization or gold plating.

4. Rule of Separation: Separate policy from mechanism; separate interfaces from engines.

How a thing is accomplished should be interchangeable. A basic example of this is the strategy pattern, where logic can be pluggable. Another example is that the same system might be accessible via a web application, a web API, a command line interface, etc. This again touches on separation of concerns, since you must have modularity before being able to vary a system’s parts.
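
A bare-bones version of that strategy idea in Ruby (the notifier names are invented, just to show the shape):

# The policy (announce a shipment) is separated from the mechanism
# (email vs SMS). Any object that responds to #deliver will work.
class EmailNotifier
  def deliver(message)
    puts "emailing: #{message}"
  end
end

class SmsNotifier
  def deliver(message)
    puts "texting: #{message}"
  end
end

class OrderShipped
  def initialize(notifier)
    @notifier = notifier
  end

  def announce(order_id)
    @notifier.deliver("Order #{order_id} has shipped")
  end
end

OrderShipped.new(EmailNotifier.new).announce(42)
OrderShipped.new(SmsNotifier.new).announce(42)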

5. Rule of Simplicity: Design for simplicity; add complexity only where you must.

Get it working. Don’t gold plate. Don’t prematurely optimize. Only make performance optimizations when you have proven that a performance problem exists and the system needs to be more performant. Don’t generalize until your specific problem has been solved and a more general approach would be useful for multiple parts of the system or other systems. Prove that complexity would be advantageous before sacrificing simplicity. Remember Rule #2 and write code for humans first, compilers second.

6. Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.

Really this is just Rule #5 (and #2) restated. Do the simplest thing that could possibly work. Have measurements that indicate adding code would be beneficial. Remember that every line of code added is a line that must be maintained and calculate that negative against the theoretical positives of adding more code.

7. Rule of Transparency: Design for visibility to make inspection and debugging easier.

This is Rule #2. Make it so that when things go wrong it requires very little inspection to know what happened. Keep inheritance hierarchies shallow so that behaviors can be tracked down easily. Break up large things into smaller things.

8. Rule of Robustness: Robustness is the child of transparency and simplicity.

When things can be easily understood and composed into larger things, the result is a system that can do more without sacrificing the qualities that made it composable in the first place. Lego bricks can be built into greater things in part because each piece composes easily with the others.

9. Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.

Let’s face it, programming is hard. When given a choice between making the data complex or making the logic complex, we should choose the data, because code must be maintained and understood by future developers.
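
A tiny made-up Ruby illustration of folding knowledge into data rather than logic:

# Shipping rules live in a data table instead of nested conditionals.
SHIPPING_RATES = {
  "US" => 5.00,
  "CA" => 8.00,
  "GB" => 12.00
}.freeze

DEFAULT_RATE = 20.00

# The logic stays "stupid": look it up, fall back to the default.
def shipping_cost(country_code)
  SHIPPING_RATES.fetch(country_code, DEFAULT_RATE)
end

shipping_cost("CA")  # => 8.0
shipping_cost("FR")  # => 20.0

Adding a country becomes a data change rather than a logic change, which is much harder to get wrong.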

10. Rule of Least Surprise: In interface design, always do the least surprising thing.

This is sort of like defensive driving for computer programming. Make it obvious what your code does by thinking carefully about your APIs. Ideally consumers of your code should be utterly bored with how expected everything is. Your code should fade into the background and allow consumers to focus on the problems they are trying to solve.

11. Rule of Silence: When a program has nothing surprising to say, it should say nothing.

Are status messages or other outputs necessary for a user? Is the information you are giving to a user useful or actionable? If it isn’t then it’s just noise to them and you shouldn’t concern them with it.
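
In code this often comes down to saying nothing on the happy path, as in this small sketch (the record objects are assumed to respond to save):

# Say nothing when everything went as expected.
def sync_records(records, verbose: false)
  failures = records.reject { |record| record.save }

  puts "synced #{records.size} records" if verbose
  warn "#{failures.size} records failed to sync" unless failures.empty?

  failures
end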

12. Rule of Repair: When you must fail, fail noisily and as soon as possible.

Nothing is worse than a system that goes wrong but brushes the problem under the rug for a while. Doing this makes it very difficult to know where the problem is, since the root cause may be several levels removed from where the failure finally surfaced. Failing noisily makes us aware of the problem. Failing as soon as possible makes localizing the failure easy.
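
A small Ruby example of failing noisily at the boundary instead of limping along (the account object is invented for illustration):

# Validate input where it enters the system and fail immediately,
# rather than letting a bad value surface three layers later.
def withdraw(account, amount)
  unless amount.is_a?(Numeric) && amount.positive?
    raise ArgumentError, "amount must be a positive number, got #{amount.inspect}"
  end
  raise ArgumentError, "insufficient funds" if amount > account.balance

  account.balance -= amount
end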

13. Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.

This is Rules #2 and #5 again. Your fancy code might make a system slightly more performant or flexible, but at what cost? If it costs future developers too much time to understand it then you have a net negative. Developers tend to underestimate the maintenance costs of the software they write, and how many future developers will run into their code.

14. Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.

Be lazy. The first time you have to do something by hand might not be too bad but if you have to do it again you should automate it. Twice is evidence that the likelihood of you or someone else having to perform this manual work again is high.
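
The spirit of it, in a small made-up Ruby example that generates test boilerplate instead of hand-writing it each time:

# Generate a skeleton test file for a class instead of copying the
# same boilerplate by hand every time.
def generate_test_skeleton(class_name)
  <<~RUBY
    require "minitest/autorun"
    require_relative "#{class_name.downcase}"

    class #{class_name}Test < Minitest::Test
      def test_something
        skip "write me"
      end
    end
  RUBY
end

File.write("widget_test.rb", generate_test_skeleton("Widget"))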

15. Rule of Optimization: Prototype before polishing. Get it working before you optimize it.

If you spend time making your software whiz-bang awesome, performant, or generalized before you understand the problem and get it working at a basic level, then you’re really just wasting time. Doing these things early is dangerous because the shape of the problem, or your understanding of it, may change greatly as you explore the problem space, and therefore all that extra code you’ve written just adds time and complexity as you evolve the system.

16. Rule of Diversity: Distrust all claims for “one true way”.

In software change is inevitable and nothing is one-size-fits-all. What solves your problem might not solve mine, and even if it does it may only be appropriate right now. In an industry as young as software we are still trying to figure things out (languages, patterns, practices). Saying you know the best way or tool is shutting off your search for even better ways/tools. This isn’t even really software specific and has wider applicability to life or science of any kind.

17. Rule of Extensibility: Design for the future, because it will be here sooner than you think.

Don’t misunderstand this one; it isn’t about premature optimization or overgeneralizing. It’s about leaving doors open and not painting yourself into a corner. You don’t need to go all the way towards making something generalized or reusable; you just need to make it easy to do that down the road.


Tagged: unix software engineering software craftsmanship patterns

Less Estimation, More Trust

22 Mar 2015

A good deal of this article is based on thoughts from Ernie Miller’s talk at Ruby on Ales 2015.

We’re in the business of producing software. If you produce software for products or services then you know that these businesses tend to value predictability, measurability, and control. The way most businesses try to obtain this predictability and measurability is by abstracting software developers into ‘resources’ and then applying math derived from software estimation. This fails utterly, every time.

Software developers are not ‘resources’, just as software managers are not ‘overhead’, and using those terms is offensive. Software developers and those who manage them are humans. And it turns out humans are messy. Software is messy too, because it’s abstract and built by humans.

There is a long-standing tradition in this field of comparing software development to construction, like constructing a building. This was a reasonable place to start when software development was a new field, since humans operate best with analogy and comparison to things they already know. But we quickly found out that building software is nothing like building buildings. Buildings are static and we’ve built a lot of them over hundreds of years. Software, on the other hand, is an organic, creative, moving target. Software is malleable. There are very few bits of software that are identical, because the people and businesses for whom the software is being built are rarely similar, much less identical.

And the time you know the absolute least about the software you are going to be building is at the start.

“Writing software involves figuring out something in such incredibly precise detail that you can tell a computer how to do it.”

“If you were to write a specification in such detail that it would capture [all] those issues, you’d be writing the software itself.”

Dan Milstein, Coding Fast and Slow


Given this organic nature of software, I believe long-term estimates are lies. Giving these estimates means that you are saying:

  1. I understand everything about the problem.
  2. I know that my customer will absolutely love the software I give them and will not want any changes.
  3. I know that the software constructed according to the plan will be the exact software we end up with at the end.
  4. I have built this exact thing before and therefore know the time it takes.

All of those are ludicrous and we shouldn’t kid ourselves by thinking otherwise. At best these estimates are useless; at worst they are malicious and destructive to the business that is planning based on them.

Software also never lives on its own. Every piece of software has dependencies on other software. That other software is just as organic, malleable, and creative as your software. The effect then is multiplicative. The uncertainty isn’t even contained to just your software.

The issues can be wide ranging:

  • The service you are integrating with doesn’t actually meet your needs but this isn’t discovered until down the road.
  • The library you are building on top of has a big bug. That bug is going to be addressed, but you don’t control the timeline of that fix.
  • Unforeseen network issues arise between you and a service partway through development.
  • The service you depend on decides to close shop.
  • The service you depend on decides to start charging you money instead of being free.
  • The customer realizes something about the software that couldn’t be realized until part of the software was written and a demo was given, revealing the issue. New dependencies are needed.
  • Etc, etc, etc.

So be a professional. Providing these estimates is just lying to ourselves and our stakeholders. Estimates also have a funny way of turning into deadlines. And because of the organic nature of software development, those deadlines quickly become out of touch and just as useless as the estimates they were produced from. Deadlines emphasize the dates as being the important part and de-emphasize building the right software that will please the business and its customers.

So are all estimates hopeless? Maybe. Large software tasks have proven to be uncertain, organic, and malleable, with the uncertainty multiplied by every dependency you integrate with. But what about small development tasks? Tasks that fit into less than a couple of days are usually estimated reasonably well, or at least well enough that it doesn’t matter if they’re off by a bit. Tasks of this size also tend to be much more ‘regular’ than large-scale development. And they have the advantage of being small enough to have a tight feedback loop that allows you to course-correct if they fall prey to the same issues outlined above.

But even if you estimate in this way you’ve really just brushed the real problem under the rug by making the failures of estimation less costly. What if we ignored estimation in service of some distant finish line and instead just worried about the next feature we need to deliver? Why not focus on delivering the Next Most Important Thing continuously at a sustainable pace? Maybe you do it weekly, demoing working software to your customer at the end of every week. Working at these time scales allows the organic nature of software to be dealt with. Customer changes can be incorporated, dependency issues can be dealt with, course corrections can be made.

A big disconnect exists between developers and managers on these points. I think it stems from:

  • Software developers continuing to provide estimates.
  • Business continuing to pressure developers to provide those estimates.
  • Both believing that those estimates are meaningful.

It’s no wonder there is a divide between software management and software developers. Software developers regularly provide estimates and those estimates are very often off by an order of magnitude. This leads to a lack of trust. Why would you trust software developers when their estimates are so bad? The only way out is for us to stop doing this estimating dance. Stop committing to months-long deadlines. Start talking openly about the true nature of software development and commit to delivering value weekly.

The uncertainty about software development timelines needs to be put right out in the open. The factors that cause the inability to estimate with any kind of accuracy need to be put out in the open. These are issues that need to be on the table for the entire business to grapple with, not just software developers. Businesses need information to be successful and if that information is tied up in the Mysterious Land of Software Development then the business won’t know about the impending failure until it arrives and it’s too late. Developers who all-too-frequently crunch to meet these estimates/deadlines are doing a disservice to the business.

A true professional doesn’t change how he/she operates under pressure. Be honest about estimation and why it fails. Be honest about all the uncertainty factors that make up estimation woes. Work together to deliver valuable, working software on a more frequent, less estimation-focused basis.


Tagged: estimation agile software engineering software craftsmanship
