Sometimes when you’re working on an MVC codebase like a Rails app you might be tempted to put conditional logic in your views to control the display of certain things based on the model. This is pretty widely accepted as a bad practice, and it’s preferable to push as much logic into the model layer as possible. This is the reason for the rise of ‘logic-less’ templating languages like Mustache. As with anything in software development it is a trade-off, and in small doses a single ‘if’ statement might be ok in a view depending on the circumstances.
If we accept the premise that logic-less view templates are desirable then we should try to push any view-based logic we have right now down the stack and into the model layer. But what if you don’t control the model? What if the logic is purely display-oriented and has no business being in the domain model? Usually these two circumstances lead us to the ‘presenter’ pattern (also known as a ViewModel in some circles).
A presenter is essentially an object that wraps an underlying domain model and exposes methods that interact with it to interface with the view in some way. An example before/after (contrived, and arguable whether it should be in the domain model proper, but let’s run with it):
```erb
<p>
  Full name:
  <% if model.last_name.present? %>
    <%= model.last_name %>,
  <% end %>
  <%= model.first_name %>
</p>
```
```ruby
class Person
  attr_accessor :first_name, :last_name
end
```
```erb
<p><%= presenter.full_name %></p>
```
```ruby
class PersonPresenter
  def initialize(person)
    @person = person
  end

  def full_name
    [@person.last_name, @person.first_name].compact.join(", ")
  end
end
```
This seems pretty good. We don’t have logic littering the view so it’s easier to deal with. And if we decide to change how full names are displayed (first, last) we don’t need to make any changes to the view. The view in this case isn’t even passed the model directly anymore; it need not know about ‘first’ and ‘last’ names because it doesn’t care how a name is composed. And the best benefit of all is that we can easily unit test the presenter object to assert the name is being displayed as we want.
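To sketch that last point, here is what a unit test for the presenter might look like (using Minitest from Ruby’s standard distribution; the classes are repeated so the example stands alone):

```ruby
require "minitest/autorun"

# Repeated here so the snippet is self-contained; these mirror
# the Person and PersonPresenter classes above.
class Person
  attr_accessor :first_name, :last_name
end

class PersonPresenter
  def initialize(person)
    @person = person
  end

  def full_name
    [@person.last_name, @person.first_name].compact.join(", ")
  end
end

class PersonPresenterTest < Minitest::Test
  def test_full_name_is_last_comma_first
    person = Person.new
    person.first_name = "Ben"
    person.last_name = "Lakey"
    assert_equal "Lakey, Ben", PersonPresenter.new(person).full_name
  end

  def test_full_name_omits_a_missing_last_name
    person = Person.new
    person.first_name = "Ben"
    assert_equal "Ben", PersonPresenter.new(person).full_name
  end
end
```

No controllers, no view rendering, no database: the display logic is exercised in isolation.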
Perhaps I’ve convinced you of the value of presenters. But if you have an application of any significant size you have many views and models, and you can’t just start passing presenters from all of your controllers at once. This is especially true if your views ferry your model along to a branching tree of partials, or even to other subsystems. A shift to presenters is likely a slow migration that you need to undertake with care. Ideally you’ll take one action at a time, touching just one model/view pairing at a time.
Let’s say you don’t want to disrupt your views wholesale during a migration to presenters. Maybe you want to focus on eliminating one conditional at a time instead of the entire set. Ruby can help here, by allowing both the existing methods (first_name, last_name) and any new conditional-removing methods you choose to expose to be called on the same object:
```ruby
require "delegate"

class PersonPresenter < SimpleDelegator
  def full_name
    # SimpleDelegator forwards first_name and last_name to the
    # wrapped person, so we can call them directly here. (Note that
    # @person is never set when subclassing SimpleDelegator; the
    # wrapped object is reachable via the delegated methods or
    # __getobj__.)
    [last_name, first_name].compact.join(", ")
  end
end
```
It may not be immediately obvious if you’re unfamiliar with SimpleDelegator, but the above object will respond to both the presentation methods and the underlying domain model’s methods:
```ruby
presenter = PersonPresenter.new(person)
presenter.first_name # => "Ben"
presenter.last_name  # => "Lakey"
presenter.full_name  # => "Lakey, Ben"
```
This provides a great migration path towards presenters without boiling the ocean by transitioning everything to the new way all at once.
I spend a lot of time talking about the craft of software development. What makes a good software developer?
Do you just go pick up a CS degree and you’re done? I don’t think so. At that point you know mechanical technique but you’re missing the deeper understanding of what it takes to make truly good software.
The truth of the matter is that undergrad degrees are really just the primer. They teach you how to make sounds come out of the musical instrument but they don’t teach you how to compose and play really amazing songs. In order to create amazing software you have to constantly be keeping up to date, experimenting with new things, and have (and never lose) the passion for writing reusable and highly-maintainable code.
So this is my list of the specific things that I think make a really great software developer, in no particular order.
What do you think I should add to this list?
Surely by now you’ve heard about the TDD fiasco that was kicked off at RailsConf 2014. In his keynote at the conference David Heinemeier Hansson proclaimed that TDD is dead. I know you want to hear more opinions on the subject (ha), so here are some of mine:
“Maybe it was necessary to use test-first as the counterintuitive ram for breaking down the industry’s sorry lack of automated, regression testing.” – David Heinemeier Hansson
TDD is absolutely not about automated regression testing. A nice side effect of TDD is indeed automated regression testing, but that’s only a side effect. TDD’s primary purpose is to ask the question “How do I want to consume the code that I’m about to write?”. TDD is about forcing future-you to follow through on promises that past-you made before diving into the implementation, thus driving designs that are pleasant to use and easy to understand.
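As a small, hypothetical illustration of writing the consumption first (the Cart class and its API are mine, not from the keynote): the test pins down the interface we wish existed before any implementation does.

```ruby
require "minitest/autorun"

# Step 1: describe how we *want* to consume the code. Cart doesn't
# exist yet; this test is the design of its interface.
class CartTest < Minitest::Test
  def test_total_sums_line_item_prices
    cart = Cart.new
    cart.add("apple", 150)
    cart.add("bread", 300)
    assert_equal 450, cart.total
  end
end

# Step 2: only now write the minimal implementation that satisfies
# the consumption we designed above.
class Cart
  def initialize
    @items = []
  end

  def add(name, price_cents)
    @items << [name, price_cents]
  end

  def total
    @items.sum { |_, price| price }
  end
end
```

The implementation is shaped by how we wanted to call it, not the other way around.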
Gold Plating is defined by Jeff Atwood (based upon information from Steve McConnell) as follows:
“Developers are fascinated by new technology and are sometimes anxious to try out new features of their language or environment or to create their own implementation of a slick feature they saw in another product–whether or not it’s required in their product. The effort required to design, implement, test, document, and support features that are not required lengthens the schedule.” – Jeff Atwood
TDD is also about reminding future-you that you don’t need to write that other bit of code just “because I might need it”. TDD does not permit you to write code unless it is the minimal amount required to pass a test, and therefore you, by definition, can’t add extraneous gold plating.
TDD pushes your design to be pleasant and easy for consumers because you wrote the consumption of it before you wrote the implementation. Opponents of TDD often trot out the argument of “sometimes I just know what I need and don’t need a test”. This way of thinking ignores two critical pieces of information:
Let’s address each one individually.
When you develop code under the “sometimes I just know what it should look like” mentality you are actively proclaiming that your knowledge, and yours alone, is the One True Way, and that it will be completely clear and magical to everyone else who encounters it. I hate to break it to you, but it won’t be. You by definition cannot see your own blind spots, and to ignore them and dive into an implementation without taking a hard look at the consumption (a test) is irresponsible and arrogant.
I like to relate TDD to the scientific method. The scientific method is one of the greatest inventions the human race has ever created: form a hypothesis, design an experiment that can disprove it, observe the results, and refine your understanding.
What a sorry state we would be in if we didn’t leverage the impartiality of having a test to prove or disprove a hypothesis. Perhaps we’d build buildings and bridges out of inadequate materials based on what ‘we think might be good’ or ‘I just know what will work’. It is completely irresponsible to operate this way. Unless you have a test that goes from red to green you have no idea if your solution will address the needs of the system.
This exchange on Twitter highlights it well:
@unclebobmartin @bdruth @pragdave "If it isn't easy to test, or the tests can't be fast, it's bad design" is a prevalent fallacy.— DHH (@dhh) April 26, 2014
@dhh @unclebobmartin @bdruth @pragdave isn't this generally accepted in other engineering disciplines where they build in testing shims?— Phil Haack (@haacked) April 26, 2014
@haacked Agreed. TDD is the software equivalent of 'test the bridge at 1/8 scale before building at scale' @dhh @unclebobmartin @pragdave— Brice Ruth (@bdruth) April 26, 2014
Aaron Patterson said this best in his closing keynote at RailsConf:
“Science is important. I can’t believe I actually had to say this.” – Aaron Patterson
TDD leaves behind a trail of documentation for the various ways in which the system is consumed. This is what I as a developer care about when I want to know how a system works. English documentation is nice but it’s also full of flaws, misinterpretations, and other inadequacies. Code is the only truth when it comes to how the system actually works; it cannot lie or be misinterpreted because it’s executable. When I don’t understand documentation I always look for the tests to show me examples of how to consume the system.
“Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that’s ‘slow’. Like hitting the database.” – David Heinemeier Hansson
Not hitting IO in a unit test will improve the speed of your tests, for sure. But once again the author has completely missed the point. The purpose of avoiding IO in unit testing is about isolation. The speed gains are a nice side effect that allow your feedback cycle to be fast enough to support agility, but speed is not the primary motivator. Being able to isolate code allows you to quickly respond to failures and breakages because the only thing that can cause a test failure is the unit being exercised.
As much as I hate making analogies to construction (they are always flawed in some way): ignoring unit test isolation and allowing your tests to hit external systems is a lot like blindly building an entire building without blueprints, having it collapse, and then asking “gee, why did it fall?”. There are too many candidate causes for the failure, and you’ll have to work through each one before finding the flaw.
If you want to know more about this topic I’d recommend the following 2 books in series:
On Friday, Tom Crockett took to Twitter to declare:
The demand that all code be readable by a beginner is the demand that all proofs be elementary. It limits our ability to use powerful tools— Tom Crockett (@pelotom) April 4, 2014
I think that’s a shame. It’s an unfortunate and flawed way of thinking about readable code.
Writing code that is readable does not in any way imply that the code must therefore be elementary, and writing complex code is not somehow an indicator of its power. On the contrary: the more complex the concepts in the code, the greater the need to provide an abstraction so that we can reason about it and therefore leverage its power to its greatest potential. The reason we can use the “powerful tools” that Tom mentions in the first place is that their complexity has been masked behind a simpler abstraction, thus exposing their power as a tool.
Let’s look at the math analogy a little deeper because it’s an interesting one.
Quick! Solve for ‘x’: (I won’t blame you if you skip ahead)
Don’t have the time? Ok what about this one?
It turns out those two equations are exactly the same; the latter is simplified. Did the simplification somehow limit the power of the more complex version? Absurd. It simply made it easier to chew on.
Ok that’s factoring out complexity; what about abstraction?
That’s pretty awful. Can we make an abstraction? Yes.
Fine, you say, but that could be considered ‘elementary’. Not important. Ok what about this abstraction?
That one is pretty important, with arguably complex math behind it. Did the fact that we abstracted that knowledge behind a symbol somehow make it not a powerful tool?
Code is simplified for exactly the same reason and purpose that math is simplified. To allow us to reason about concepts at a high level by reducing the amount of knowledge one must have to understand something at a glance. It is a waste of developer time to sit and figure out what complex code is doing just because the author couldn’t be bothered to factor out the complexity into manageable parts.
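The same factoring can be shown in code (a hypothetical example of mine, not from the original post): the intent buried in a dense expression, then pulled out behind names we can reason about at a glance.

```ruby
require "date"

# Hypothetical data for the example.
User = Struct.new(:name, :born_on)

users = [
  User.new("Ben", Date.new(1990, 6, 1)),
  User.new("Kid", Date.new(2020, 1, 1)),
]

# Before: the reader must decode the arithmetic at every call site
# to discover this is an age check.
adults = users.select { |u| ((Date.today - u.born_on) / 365.25).floor >= 18 }

# After: the complexity is factored behind a name. Nothing was lost;
# it just became easier to reason about.
def age_in_years(user)
  ((Date.today - user.born_on) / 365.25).floor
end

def adult?(user)
  age_in_years(user) >= 18
end

adults = users.select { |u| adult?(u) }
```

The second version is every bit as “powerful” as the first; it simply stopped demanding that the reader re-derive its meaning.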
Recently I wrote code for a sorting problem. At the time I didn’t realize it, but it’s called the Dutch national flag problem, and Dijkstra described it back in 1976 in his book “A Discipline of Programming”.
The problem is as follows:
Given an array known to contain only the numbers 0, 1 and 2 sort it in O(n).
So for example if you had the input:
Then the output would be:
The time complexity requirement rules out using an untuned general-purpose sort from most languages’ standard libraries (most of those are comparison sorts running in O(n log n) or thereabouts).
There are actually two ways of solving the problem.
The first method is sort of cheating, but simpler: just iterate the array and maintain a hash of (digit => count). This gets you the information you’re after, but you didn’t really sort the data. (In a real-world scenario this is probably sufficient, though, since you probably aren’t going to want to iterate the same digit N times if you know there are N of them in the array.)
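A sketch of that counting approach (the method name is mine):

```ruby
# Tally each digit in one O(n) pass, then rebuild the array from
# the counts. No elements are ever actually moved or compared.
def count_and_rebuild(data)
  counts = Hash.new(0)
  data.each { |digit| counts[digit] += 1 }
  [0, 1, 2].flat_map { |digit| [digit] * counts[digit] }
end
```

Two passes total, but the output is constructed fresh rather than sorted in place.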
The second method is the more interesting one: given that we have known constraints on the incoming data, we can develop an algorithm specifically tuned for the situation. The problem becomes one of partitioning:
```ruby
def sort(data)
  return nil if !data
  answer = data.dup
  return answer if answer.length <= 1

  left = 0                    # boundary: everything before left is 0
  mid = 0                     # current element under inspection
  right = answer.length - 1   # boundary: everything after right is 2

  while mid <= right
    suspect = answer[mid]
    if suspect == 0
      answer[mid], answer[left] = answer[left], answer[mid]
      left += 1
      mid += 1
    elsif suspect == 2
      answer[mid], answer[right] = answer[right], answer[mid]
      right -= 1
    else
      mid += 1
    end
  end

  answer
end
```
The general idea here is that we iterate through the array once and during that iteration if the current element under inspection is a 0 then we kick it to the left just beyond where a 0 was last placed. Otherwise if it’s a 2 then kick it to the right just before where a 2 was last placed. In the end you get the partitioned data.
(c) 2014 Ben Lakey
The words here do not reflect those of my employer.