Chasing the New Shiny

20 May 2015

A few things in internet-land recently have touched on the idea that simple, well-understood technology is preferable to unproven new shiny tech, and I agree with that viewpoint enthusiastically.

“Standard is better than better.” - John Hyland

There are a lot of reasons to use well-understood tech including:

  • Stability
  • Talent pool
  • Integration options
  • Abundant resources

With well-understood technology, the rough edges and bugs have already been worked out, which saves you from investing time and money running up against them yourself. Others have already paid that cost and smoothed the way ahead.

On the other hand, if you use the New Shiny, the downsides haven’t been explored, the rough edges will be yours to figure out, and the best practices don’t exist yet. And if you add too much of this new tech you might start to feel like Bilbo Baggins:

“I feel thin, sort of stretched, like butter scraped over too much bread.” - Bilbo Baggins

New tech is expensive and distracts you from what you really need to be doing: solving your business problem, not solving the technology problems that stand between you and it. You need to innovate in your products and services, not in the technology you’re building with. The alternative is to not ship, and possibly never get your idea off the ground, while you constantly chase the new shiny.

JavaScript

One community that seems to struggle the most with this is JavaScript. New frameworks pop up daily, often with only marginal improvements over what’s already out there (if any at all). The number of MVC frameworks in JavaScript erupted so vigorously that there is an entire site dedicated to sifting through the different options. This explosion is creating mountains of code that ultimately must be maintained, and I wonder about the cost to the industry as a whole. Perhaps we will look back on this era of software development with amusement (and shame?) years from now.

This isn’t to say that all the attention on JavaScript hasn’t yielded benefits; with ES6 the language’s holes have largely been plugged, and we’re starting to see some of the standard problems solved by the language itself instead of papered over with an explosion of add-ons.

Additional Reading

If you feel the same way I do you may be interested in John Hyland’s talk from Cascadia Ruby last year: Be Awesome By Being Boring

More recently, Dan McKinley wrote about this topic: Choose Boring Technology


Tagged: architecture complexity maintainability software engineering yagni

Inquiring Minds Want to Know

02 May 2015

If you ask a software developer what their job is, many will say something like “I write code”. But is it? It’s very easy to slather more code onto a project when a project manager asks you to make WidgetX do ThingY, but it’s a lot harder to dig in and find out why they want that. And the process of digging can yield some interesting insights that allow you to solve the customer’s problem without adding more code.

Five Whys

When I was a kid, a common game was to continually ask “why” to annoy and exasperate peers, parents, and so on.

“Why are we having broccoli?”

[Image: the classic “tree swing” project cartoon]

“Because it’s healthy”

“Why do we have to be healthy?”

“So we can grow strong and live long”

“Why do we need to grow strong and live long?”

In software development this practice can actually yield very positive outcomes because you may discover that instead of adding a ton of new code, all the customer wanted was a tree swing.

It might go down something like this:

“What are you doing?”

“I’m building an authentication system.”

“Why are you building that?”

“Because people need to sign in on the website.”

“Why do they need to sign in?”

“Because we need to know who someone is to save their preferences.”

“Why do we need to save their preferences?”

“Because we want people to filter just the information they want.”

“For what reason?”

“So that customers who only want to see widgets can find them faster.”

In this example you now know that, at the very least, the problem you are trying to solve is that customers who want widgets need to be able to find them quickly; that’s a great deal more context than “we need an authentication system”. Armed with this knowledge you might be able to solve the problem in a different way, with far less code than blowing out an entire authentication system. And even if you do ultimately discover that you need to build an authentication system, you now know why you’re building it and can tailor the solution toward that goal.

Given-When-Then

A really great way to both ask these questions and capture the goals is to write tests based on conversations with your customer. In Behavior-Driven Development circles this is known as Given-When-Then syntax.

Given (some context)
When (an action is taken)
Then (an observable outcome)

There’s even software (Cucumber) that will automate these tests for you by letting you connect them up to executable code. It sounds great in theory, but in practice I haven’t found it to be as successful as it promises to be.
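
Even without Cucumber, the Given-When-Then shape can live in an ordinary unit test. Here is a minimal sketch in Python; the filter_catalog function and the widget scenario are invented for illustration, riffing on the conversation above:

    # A Given-When-Then style test written as plain Python (runnable with pytest).
    # filter_catalog and the widget scenario are hypothetical, not from a real system.

    def filter_catalog(catalog, preference):
        """Return only the items matching the customer's saved preference."""
        return [item for item in catalog if item == preference]

    def test_widget_customers_find_widgets_quickly():
        # Given a customer whose saved preference is widgets
        catalog = ["widget", "gadget", "gizmo"]
        preference = "widget"

        # When the catalog is filtered by that preference
        results = filter_catalog(catalog, preference)

        # Then only widgets are shown
        assert results == ["widget"]

The test doubles as a record of the conversation: the Given captures the context you learned by asking “why”, and the Then captures the outcome the customer actually cares about.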

You Are a Problem Solver

Code is just a tool to solve problems. By itself code is not useful, and every line you write is a line you must maintain going forward. So the act of writing code is actually a net cost unless it’s solving a customer problem and providing value. Good developers are out to reduce code and increase customer satisfaction via frequent communication and questioning.


Tagged: software engineering career software craftsmanship

Unix Philosophy and What It Means To Me

12 Apr 2015

I believe the Unix Philosophy is one of the most overused yet least understood sets of concepts in software development. It is blogged about often, so I won’t be offended if you skip this post.

In his book The Art of Unix Programming, Eric S. Raymond wrote 17 rules summarizing the core of the Unix Philosophy. I’m going to pick each one apart and provide my own commentary based on my experience developing software.

1. Rule of Modularity: Write simple parts connected by clean interfaces.

This is really all about abstraction and separation of concerns. A part of a system is easier to understand on its own than the entire system is at once, and those parts are easiest to understand and maintain if they have tight cohesion around a single concept. It’s also natural and beneficial for a system appropriately broken down into small parts to have small interfaces. The abstraction should expose as few details as possible so that the consumer can focus on the broader system instead of the details of one of its parts.
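
As a rough sketch of what a small part with a clean interface might look like (the ReportStore name and its SQLite backing are my own invented example, not anything from Raymond’s book):

    # A small part with a deliberately tiny interface: callers can save and fetch
    # reports without knowing how they are stored. ReportStore is an invented example.
    import sqlite3

    class ReportStore:
        def __init__(self, path=":memory:"):
            self._db = sqlite3.connect(path)
            self._db.execute(
                "CREATE TABLE IF NOT EXISTS reports (name TEXT, body TEXT)")

        def save(self, name, body):
            self._db.execute("INSERT INTO reports VALUES (?, ?)", (name, body))
            self._db.commit()

        def fetch(self, name):
            row = self._db.execute(
                "SELECT body FROM reports WHERE name = ?", (name,)).fetchone()
            return row[0] if row else None

The consumer sees save and fetch; whether the data lives in SQLite, flat files, or memory is a detail the interface hides.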

2. Rule of Clarity: Clarity is better than cleverness.

I would argue this is the most important yet least practiced of all the rules on this page. Too often developers write code that is fancy rather than communicative. This is insidious as well because the pain of writing fancy code is not immediately felt since maintenance comes later. When the system needs to be extended, repaired, or otherwise modified it needs to be completely clear what it does otherwise it becomes a time/money sink. Your most important job as a software developer is to write code that other developers can maintain with ease. Write code for humans, not compilers.

3. Rule of Composition: Design programs to be connected to other programs.

The general idea is that if software can be re-used then it can be re-connected to other software and therefore provide long-term dividends beyond the original project. We have to be careful with this one though because it can quickly lead to premature optimization or gold plating.

4. Rule of Separation: Separate policy from mechanism; separate interfaces from engines.

How a thing is accomplished should be interchangeable. A basic example of this is the strategy pattern, where logic can be pluggable. Another example is that the same system might be driven via a web application, a web API, a command line interface, etc. This again touches on separation of concerns, since you must have modularity before you can vary a system’s parts.
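
For a concrete sketch of the strategy pattern in Python (the shipping example is invented, not from the book):

    # Strategy pattern: the mechanism (compute a checkout total) is separated from
    # the policy (how shipping is priced). The pricing rules are made-up examples.

    def flat_rate_shipping(order_total):
        return 5.00

    def free_shipping_over_fifty(order_total):
        return 0.00 if order_total >= 50 else 5.00

    def checkout_total(order_total, shipping_policy):
        # The mechanism doesn't know or care which policy it was handed.
        return order_total + shipping_policy(order_total)

    print(checkout_total(60.00, flat_rate_shipping))        # 65.0
    print(checkout_total(60.00, free_shipping_over_fifty))  # 60.0

Swapping the policy requires no change to the mechanism, which is the point.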

5. Rule of Simplicity: Design for simplicity; add complexity only where you must.

Get it working. Don’t gold plate. Don’t prematurely optimize. Only make performance optimizations when you have proven that a performance problem exists and the system needs to be faster. Don’t generalize until your specific problem has been solved and a more general approach would be useful for multiple parts of the system or for other systems. Prove that complexity would be advantageous before sacrificing simplicity. Remember Rule #2 and write code for humans first, compilers second.

6. Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.

Really this is just Rule #5 (and #2) restated. Do the simplest thing that could possibly work. Have measurements that indicate adding code would be beneficial. Remember that every line of code added is a line that must be maintained, and weigh that cost against the theoretical benefits of adding more code.

7. Rule of Transparency: Design for visibility to make inspection and debugging easier.

This is Rule #2. Make it so that when things go wrong it requires very little inspection to know what happened. Keep inheritance hierarchies shallow so that behaviors can be tracked down easily. Break up large things into smaller things.

8. Rule of Robustness: Robustness is the child of transparency and simplicity.

When things can be easily understood and composed into larger things, the result is a system that can do more without sacrificing the qualities that made it composable in the first place. Legos can be made into greater things in part because each piece composes easily with the others.

9. Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.

Let’s face it, programming is hard. Given a choice between making the data complex and making the logic complex, we should always choose the data, because code must be maintained and understood by future developers.
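
One everyday form of this is table-driven code: push the special cases into a data structure so the logic collapses to a lookup. A small hypothetical sketch in Python:

    # Fold knowledge into data: the mapping from file extension to parser lives in
    # a plain dictionary, so dispatch is a one-line lookup instead of a growing
    # if/elif chain. The parsers here are hypothetical stand-ins.
    import csv
    import io
    import json

    def parse_json(text):
        return json.loads(text)

    def parse_csv(text):
        return list(csv.reader(io.StringIO(text)))

    PARSERS = {
        ".json": parse_json,
        ".csv": parse_csv,
    }

    def parse(extension, text):
        parser = PARSERS.get(extension)
        if parser is None:
            raise ValueError("no parser registered for " + extension)
        return parser(text)

    print(parse(".json", '{"widgets": 3}'))  # {'widgets': 3}
    print(parse(".csv", "a,b\n1,2"))         # [['a', 'b'], ['1', '2']]

Supporting a new format means adding a row to the table, not another branch to the logic.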

10. Rule of Least Surprise: In interface design, always do the least surprising thing.

This is sort of like defensive driving for computer programming. Make it obvious what your code does by thinking carefully about your APIs. Ideally consumers of your code should be utterly bored with how expected everything is. Your code should fade into the background and allow consumers to focus on the problems they are trying to solve.

11. Rule of Silence: When a program has nothing surprising to say, it should say nothing.

Are status messages or other outputs necessary for a user? Is the information you are giving to a user useful or actionable? If it isn’t then it’s just noise to them and you shouldn’t concern them with it.

12. Rule of Repair: When you must fail, fail noisily and as soon as possible.

Nothing is worse than a system that goes wrong but brushes the problem under the rug for a while. Doing this makes it very difficult to know where the problem is, since the root cause may be several levels removed from where the failure finally surfaced. Failing noisily makes us aware of the problem. Failing as soon as possible makes localizing the failure easy.
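
In everyday code this usually means validating at the boundary and raising immediately, so the failure points at the root cause. A hypothetical sketch in Python:

    # Fail noisily and as soon as possible: reject bad input where it enters the
    # system instead of letting it surface later, far from the root cause.
    # create_user and its rules are invented for illustration.

    def create_user(email, age):
        if "@" not in email:
            raise ValueError("invalid email: %r" % email)
        if age < 0:
            raise ValueError("age must be non-negative, got %r" % age)
        return {"email": email, "age": age}

    try:
        create_user("not-an-email", 30)
    except ValueError as error:
        print("caught at the boundary:", error)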

13. Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.

This is Rules #2 and #5 again. Your fancy code might make a system slightly more performant or flexible, but at what cost? If it costs future developers too much time to understand it then you have a net negative. Developers tend to underestimate the maintenance costs of the software they write, and how many future developers will run into their code.

14. Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.

Be lazy. The first time you have to do something by hand might not be too bad, but if you have to do it again you should automate it. Twice is evidence that the likelihood of you or someone else having to perform this manual work again is high.
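
A tiny sketch of the spirit of this rule: describe the repetitive thing as data, then let a script write the boring code. The endpoint table and the generated wrappers below are invented, and session is assumed to be something like a requests.Session.

    # Write programs to write programs: generate repetitive API wrappers from a
    # small table instead of hand-writing (and hand-maintaining) each one.
    ENDPOINTS = [
        ("list_widgets", "GET", "/widgets"),
        ("get_widget", "GET", "/widgets/{id}"),
        ("delete_widget", "DELETE", "/widgets/{id}"),
    ]

    TEMPLATE = (
        "def {name}(session, **params):\n"
        "    url = '{path}'.format(**params)\n"
        "    return session.request('{method}', url)\n"
    )

    generated = "\n".join(
        TEMPLATE.format(name=name, method=method, path=path)
        for name, method, path in ENDPOINTS
    )
    print(generated)

Add a new endpoint to the table and rerun the generator; the repetitive code never has to be edited by hand.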

15. Rule of Optimization: Prototype before polishing. Get it working before you optimize it.

If you spend time making your software whiz-bang awesome, performant, or generalized before you understand the problem and get it working at a basic level, then you’re really just wasting time. Doing these things early is dangerous because the shape of the problem, or your understanding of it, may change greatly as you explore the problem space, and therefore all that extra code you’ve written just adds time and complexity as you evolve the system.

16. Rule of Diversity: Distrust all claims for “one true way”.

In software change is inevitable and nothing is one-size-fits-all. What solves your problem might not solve mine, and even if it does it may only be appropriate right now. In an industry as young as software we are still trying to figure things out (languages, patterns, practices). Saying you know the best way or tool is shutting off your search for even better ways/tools. This isn’t even really software specific and has wider applicability to life or science of any kind.

17. Rule of Extensibility: Design for the future, because it will be here sooner than you think.

Don’t misunderstand this one; it isn’t about premature optimization or overgeneralizing. It’s about leaving doors open and not painting yourself into a corner. You don’t need to go all the way towards making something generalized or reusable; you just need to make it easy to do that down the road.

Finally

These rules mean different things to different people, and there are other variations of them worth seeking out. Comment if you have some thoughts of your own.


Tagged: unix software engineering software craftsmanship patterns

Less Estimation, More Trust

22 Mar 2015

A good deal of this article is based on thoughts from Ernie Miller’s talk at Ruby on Ales 2015.

We’re in the business of producing software. If you produce software for products or services then you know that these businesses tend to value predictability, measurability, and control. The way most businesses try to obtain this predictability and measurability is by abstracting software developers into ‘resources’ and then using math derived from software estimation. This fails utterly, every time.

Software developers are not ‘resources’, in the same way that software managers are not ‘overhead’, and using those terms is offensive. Software developers and those who manage them are humans. And it turns out humans are messy. Software is messy too, because it’s abstract and built by humans.

Software != Construction

There is a long-standing tradition in this field of comparing software development to construction, like constructing a building. This was a reasonable place to start when software development was a new field, since humans operate best with analogy and comparison to things they already know. But we quickly found out that building software is nothing like building buildings. Buildings are static, and we’ve built a lot of them over hundreds of years. Software, on the other hand, is an organic, creative, moving target. Software is malleable. Very few bits of software are identical, because the people and businesses for whom the software is being built are rarely identical in nature.

The time you know the least about the software you are building is at the start.

Dan Milstein writes:

“Writing software involves figuring out something in such incredibly precise detail that you can tell a computer how to do it.”

“If you were to write a specification in such detail that it would capture [all] those issues, you’d be writing the software itself.”

Estimates are Lies


Given this organic nature of software, I believe long-term estimates are lies. Giving these estimates means that you are saying:

  1. I understand everything about the problem.
  2. I know that my customer will absolutely love the software I give them and will not want any changes.
  3. I know that the software constructed according to the plan will be the exact software we end up with at the end.
  4. I have built this exact thing before and therefore know the time it takes.

All of those are ludicrous, and we shouldn’t kid ourselves by thinking otherwise. At best these estimates are useless; at worst they are malicious and destructive to the business that is planning based on them.

Dependencies!

Did I mention that software also never lives on its own? Every piece of software has dependencies on other software. That other software is just as organic, malleable, and creative as your software. The effect is multiplicative. The uncertainty isn’t even confined to just your software.

The issues can be wide ranging:

  • The service you are integrating with doesn’t actually meet your needs but this isn’t discovered until down the road.
  • The library you are building on top of has a big bug. That bug is going to be addressed, but you don’t control the timeline of that fix.
  • Unforeseen network issues between you and a service arise during development.
  • The service you depend on decides to close shop.
  • The service you depend on decides to start charging you money instead of being free.
  • The customer realizes something about the software that couldn’t have been seen until part of it was built and demoed, and new dependencies are needed.
  • Etc, etc, etc.

Be a Professional

So providing these estimates is lying. Estimates have a funny way of turning into deadlines. And because of the organic nature of software development, those deadlines quickly become out of touch and just as useless as the estimates they were produced from. Deadlines emphasize the dates as being the important part and de-emphasize building the right software that will please the business and its customers.

So are all estimates hopeless? Maybe. Large software tasks have proven to be uncertain, organic, and malleable, with the uncertainty multiplied by every dependency you integrate with. But what about small development tasks? Tasks that fit into less than four hours (a half day) are usually estimated reasonably well, or at least well enough that it doesn’t matter if they are off by a bit. Tasks of this size also tend to be much more ‘regular’ than large-scale development. They also have the advantage of a tight feedback loop that allows you to course-correct if they fall prey to the same issues outlined above.

But even if you estimate this way, you’ve really just brushed the real problem under the rug by making the failures of estimation less costly. What if we ignored estimation in service of some finish line and instead just worried about the next feature we need to deliver? Why not focus on delivering the Next Most Important Thing continuously at a sustainable pace? Maybe you do it weekly, demoing working software to your customer at the end of every week. Working at these time scales allows the organic nature of software to be dealt with: customer changes can be incorporated, dependency issues can be handled, course corrections can be made.

If you think this is unrealistic then you might have stumbled across The Divide…

The Divide Between Business and Developers is a Lie.

A lot of the disconnect between business and software development stems from:

  • Software developers continuing to provide estimates.
  • Business continuing to pressure developers to provide those estimates.
  • Both believing that those estimates are meaningful.

It’s no wonder there is a divide between business and software development. Software developers regularly provide estimates and those estimates are very often off by an order of magnitude. This leads to a lack of trust. Why would you trust software developers when their estimates are so bad? We need to stop estimating at this scale. Stop committing to months-long deadlines. Start talking openly about the true nature of software development and commit to delivering value weekly.

The uncertainty about software development timelines needs to be put right out in the open. The factors that cause the inability to estimate with any kind of accuracy need to be put out in the open. These are issues that need to be on the table for the entire business to grapple with, not just software developers. Businesses need information to be successful and if that information is tied up in the Mysterious Land of Software Development then the business won’t know about the impending failure until it arrives and it’s too late. Developers who all-too-frequently crunch to meet these estimates/deadlines are doing a disservice to the business.

A true professional doesn’t change how he/she operates under pressure. Be honest about estimation and why it fails. Be honest about all the uncertainty factors that make up estimation woes. Work together to deliver valuable, working software on a more frequent, less estimation-focused basis.

Resources:

  • Ernie Miller’s talk from Ruby on Ales 2015
  • Dan Milstein’s writing on software estimation, quoted above


Tagged: estimation agile software engineering software craftsmanship

There's a Bug In Your Blind Spot

25 Jan 2015

Recently Robert ‘Uncle Bob’ Martin took to Twitter with a bold observation, questioning whether we should have bug trackers at all.

The initial gut reaction for some is “Yeah, we should just write fewer bugs!”, which is of course what he’s implying here. But is it that simple?

When does a bug become a bug?


What’s implied when questioning the value of a bug tracking system is that if we would only just not create the bugs in the first place, we’d be able to dump the tracking software and have happier users. That in turn implies that all these bugs are programmer slip-ups that could have been prevented at the time the software was written.

That line of thinking ignores the reality of the many bugs whose lives did not begin at code construction time but instead grew out of humans discussing the software with each other. Was Feature X supposed to behave in that way? Was Feature Y supposed to only be available to users A and B? Do product and design agree on what Feature Z should do, and is that how it was implemented?

There’s no way around it, you need to have human conversations.

Questions like these are not solvable by well-tested software or software engineering rigor. So long as it is possible for humans to misunderstand each other, there will be bugs filed because expected behavior doesn’t match what the engineers implemented. The only way to resolve questions like these is by having conversations with your users, your team, and your business. These conversations must also happen often and continuously, because many times a feature is not well understood until it’s actually prototyped or built, and then these ‘bugs’ are filed. The sooner you have these conversations the shorter the lifetime of these sorts of bugs; but they will have a lifetime.

Maybe we’re splitting hairs here about the definition of a ‘bug’, but debating what constitutes a ‘bug’ with your non-technical colleagues isn’t really the point. The software isn’t doing what they expected and it’s your job to make the necessary adjustments.

Subtle arrogance

Making a bold claim like ‘we shouldn’t have a bug tracker’ is also a bit arrogant if you think about it. Let’s assume for a moment that you are the best engineer in the world and use the best software development practices. You practice TDD, you work closely with your customer, you release early-and-often, the whole nine yards.

I hate to break it to you, but you’re not perfect. You’re still going to introduce bugs, because you’re a human and humans make mistakes. Humans misunderstand each other.

And if you agree that you’re not perfect and that it’s possible you won’t have 100% understanding with other humans, isn’t it responsible to track your mistakes? If for no other reason than so that you and the reporter don’t let bugs slip through the cracks?

I understand the higher level sentiment

Uncle Bob is no dummy. He’s been around a lot longer than most of us in software. I fully expect that he’d agree with much of what I’ve written here, because there’s more nuance than 140 characters on Twitter will allow for. We do write too many bugs and we could use more rigor, more testing, more professionalism. But it isn’t an all-or-nothing game, and we can’t “win” the battle against bugs outright. Your job isn’t to write zero bugs but rather to minimize the ones that are mistakes and manage the ones that stem from misunderstanding.


Tagged: bugs software engineering soft skills software craftsmanship

2015 Ben Lakey

The words here do not reflect those of my employer.