a little madness

A man needs a little madness, or else he never dares cut the rope and be free -Nikos Kazantzakis


Archive for February, 2009

Overcoming Coder’s Block

Sometimes I completely run out of answers. Other times I have too many answers. Either way, I end up staring stupidly at the screen achieving nothing (well, OK, reading blogs). Here are a few techniques I use to help get unstuck:

  • Pros and Cons: when I have too many answers, I try to make it objective by drawing up the pros and cons and seeing where that takes me.
  • Simplify: like a lot of developers, I can be prone to over-analyse when I am stuck on an issue for a while. So I remind myself to try simplifying the problem. Often, dropping a layer of flexibility makes the problem a lot easier to solve. If I really need the flexibility, I can add it later when I have greater understanding.
  • Take a Break: sometimes I’m just trying too hard, and need to step back. I work from home, so a short walk outside is a welcome break. Actually getting away from the computer relaxes the grey matter. The vitamin D doesn’t hurt either 🙂.
  • Explain the Problem: very often I find that while I’m explaining the problem, I see it in a different way. If not, the input of another person usually throws a different perspective on the issue. If there’s nobody to bother immediately, just writing down an explanation can help.
  • Switch Gears: this works when I’m getting frustrated by a lack of progress. By switching to a small, unrelated task, I can Get Something Done and develop some new momentum. This also serves as a break from the original problem.
  • Write Some Tests: I don’t practice TDD all the time, but when I’m stuck trying to understand how things work, writing some tests first can be very illuminating. Having tests in place also gives gratifying feedback as I finally start to crack the underlying problem. I find this works best for really tough technical issues, where good test coverage is even more important than normal.
  • Write Some Code: if I have a partial solution, even if I know it is ugly or inadequate, sometimes I’ll just plow ahead anyway. Actually working through a throwaway implementation is better than standing still, as it turns up all the little details. I’m happy to throw that code away since I know the alternative was not getting anywhere.

So, should you be reading this, or taking a walk..? 🙂

Languages, Complexity and Shifting Work

One of the things that bothers me about the rise of dynamic languages is the nagging thought that work is being shifted the wrong way. Generally speaking, we programmers all become more productive as work is shifted down to lower levels. The smarter the toolchain, the less we need to bother with incidental details, leaving us to focus on Real Problems. Sure, someone needs to program that toolchain, but once that is done by the few the benefits can be reaped by the many.

What does this have to do with dynamic languages? Well, a lot of the productivity benefits of these languages come from their support for higher levels of abstraction. Think first class functions, closures, completely dynamic dispatch and so on. Why is it that dynamic languages all seem to have these features, while many statically-typed languages don’t? I believe a primary reason is that these features are easier to implement in dynamic languages. It’s not that it is impossible to support such features in static languages — in fact many static languages do support them — but rather that creating a flexible enough type system is a challenge.
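To make those features concrete, here is a minimal sketch in Python of first class functions and closures; make_counter is a hypothetical example of my own, not something from a particular language implementation:

```python
def make_counter():
    count = 0

    def increment():
        # A closure: increment captures and mutates the enclosing count.
        nonlocal count
        count += 1
        return count

    # Functions are first class values: we can simply return one.
    return increment

counter = make_counter()
counter()  # 1
counter()  # 2
```

Nothing here required the language to type-check what increment captures or returns; a static language offering the same feature has to fold closures and function values into its type system.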

Probably the most obvious example is duck typing. This is a feature of dynamic languages that removes complexity from the language implementation. With duck typing, the implementation need not inspect or infer anything; it just tries its best to dispatch at runtime and throws an error if it can’t. Compare this with the complexity of implementing a static type system capable of full inference and/or a reasonable alternative like Scala’s Structural Types.
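A quick illustration of that "just dispatch and hope" behaviour, again as a hypothetical Python sketch of my own:

```python
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def greet(thing):
    # No inspection, no inference: the runtime simply looks up
    # speak on whatever it is handed.
    return thing.speak()

greet(Duck())   # "quack"
greet(Robot())  # "beep"
# greet(42)     # AttributeError, but only when this line actually runs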
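A quick illustration of that "just dispatch and hope" behaviour, again as a hypothetical Python sketch of my own:

```python
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def greet(thing):
    # No inspection, no inference: the runtime simply looks up
    # speak on whatever it is handed.
    return thing.speak()

greet(Duck())   # "quack"
greet(Robot())  # "beep"
# greet(42)     # AttributeError, but only when this line actually runs
```

The implementation stays trivially simple precisely because the failure case is deferred to runtime, which is where the costs below come from.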

So where is the problem? Well, having these features in dynamic languages is a wonderful thing, but it comes at a cost:

  • No static verification. Without this verification, we need to do more work to test our code. And because testing isn’t perfect, we risk more bugs.
  • Less precisely specified APIs. Since APIs contain no type information, they can be harder to read and learn. Documentation helps, but this kind of documentation comes for free (with verification by a compiler) in a statically-typed language.

Essentially, the dynamic languages are giving us more flexibility, but at the same time are shifting other work our way. Shouldn’t we aim for more? From what we know about shifting work, shouldn’t we be demanding a language which tackles the complex issues head-on to deliver us this flexibility without the costs?

Are there languages that achieve this already? Classics like ML and Haskell, along with newcomers like Scala, at least take aim at these harder issues. The biggest problem has always been pushing the complexity down into the language implementation without having it leak too much into the language itself. This is one of the greater challenges in software, but the closer we get to nailing it the more productive we’ll all become.