Android Functional Testing vs Dependency Injection

I commonly use Dependency Injection (DI) to create testable Java code. Dependency injection is simple: instead of having your objects find their own dependencies, you pass them in via the constructor or a setter. One key advantage of this is the ability to easily substitute in stub or mock dependencies during testing.

Naturally, as I started working on an Android application, I tried to apply the same technique. Problems arose when I tried to combine DI with the Android SDK’s Testing and Instrumentation support. In particular, I have yet to find a suitable way to combine DI with functional testing of Android activities via ActivityInstrumentationTestCase2. When testing an activity using the instrumentation support, injection of dependencies is foiled by two factors:

  1. Constructor injection is impossible, as activities are constructed by the framework. I experimented with various ways of creating the Activity myself, but was unable to maintain a connection with the Android system for true functional testing.
  2. Setter injection is fragile, as activities are started by the framework as soon as they are created. There is no time to set stub dependencies between the instantiation of the Activity and its activation.

Not ready to give up on DI, I scoured the web for existing solutions to this problem. Although I did find some DI libraries with Android support (notably the no-AOP build of Guice, and roboguice, which builds upon it), the only testing support I found was restricted to unit tests. Although roboguice has support for Activities, it relies on being able to obtain a Guice Injector from somewhere — which just shifts the problem by one level of indirection.

Given how complex any DI solution was going to be (if indeed one is possible at all), I decided to step back and consider alternatives. A classic alternative to DI is the Service Locator pattern, where objects ask a central registry for their dependencies. Martin Fowler’s article Inversion of Control Containers and the Dependency Injection pattern compares and contrasts the two patterns in some detail. Most importantly, a Service Locator still allows you to substitute different implementations of dependencies at test time. The main downside is that each class depends on the central registry, which can make classes harder to reuse. As I’m working with Activities that are unlikely ever to be reused outside their current application, this is no big deal.

Implementation-wise, I went with the simplest registry that works for me. I found it convenient to use my project’s Application implementation as the registry. In production, the Application onCreate callback is used to create all of the standard dependency implementations. These dependencies are accessed via simple static getters. Static setters are exposed to allow tests to drop in whatever alternative dependencies they desire. A contrived example:

import android.app.Application;
import android.content.SharedPreferences;
import android.preference.PreferenceManager;

public class MyApplication extends Application
{
    private static IService service;
    private static ISettings settings;

    @Override
    public void onCreate()
    {
        super.onCreate();
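        // Only create the default implementations if a test has not already
        // injected alternatives via the static setters below.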
        if (service == null)
        {
            service = new ServiceImpl();
        }
        
        if (settings == null)
        {
            SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(getApplicationContext());
            settings = new PreferencesSettings(preferences);
        }
    }
    
    public static IService getService()
    {
        return service;
    }

    public static void setService(IService s)
    {
        service = s;
    }
    
    public static ISettings getSettings()
    {
        return settings;
    }
    
    public static void setSettings(ISettings s)
    {
        settings = s;
    }
}

I access the dependencies via the registry in my Activity’s onCreate callback:

import android.app.Activity;
import android.os.Bundle;

public class MyActivity extends Activity
{
    private IService service;
    private ISettings settings;

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);

        service = MyApplication.getService();
        settings = MyApplication.getSettings();

        setContentView(R.layout.main);
        // ...
    }

    // ...
}

And I wire in my fake implementations in my functional test setUp:

import android.test.ActivityInstrumentationTestCase2;

public class MyActivityTest extends ActivityInstrumentationTestCase2<MyActivity>
{
    private MyActivity activity;

    public MyActivityTest()
    {
        super("com.zutubi.android.example", MyActivity.class);
    }

    @Override
    protected void setUp() throws Exception
    {
        super.setUp();        
        MyApplication.setService(new FakeService());
        MyApplication.setSettings(new FakeSettings());
        activity = getActivity();
    }
    
    public void testSomething() throws Throwable
    {
        // ...
    }
}

After all of the angst over DI, this solution is delightful in its simplicity. It also illustrates that static is not always a dirty word when it comes to testing!

Are Temp Files Slowing Your Builds Down?

Lately one of our Pulse agents has been bogged down, to the extent that some of our heavier acceptance tests started to genuinely time out. Tests failing due to environmental factors can lead to homicidal mania, so I’ve been trying to diagnose what is going on before someone gets hurt!

The box in question runs Windows Vista, and I noticed while poking around that some disk operations were very slow. In fact, deleting even a handful of files via Explorer took so long that I gave up (we’re talking hours here). About this time I fired up the Reliability and Performance Manager that comes with Vista (Control Panel > Administrative Tools). I noticed that there was constant disk activity, and a lot of it centered around C:\$MFT — the NTFS Master File Table.

I had already pared back the background tasks on this machine: the Recycle Bin was disabled, Search Indexing was turned off and Defrag ran on a regular schedule. So why was my file system so dog slow? The answer came when I looked into the AppData\Local\Temp directory for the user running the Pulse agent. The directory was filled with tens of thousands of entries, many of which were directories that themselves contained many files.

The junk that had built up in this directory was quite astounding. Although some of it can be explained by tests that don’t clean up after themselves, I believe a lot of the junk came from tests that had to be killed forcefully without time to clean up. It was also evident that every second component we were using was part of the problem – Selenium, JFreechart, JNA, Ant and Ivy all joined the party.

So, how to resolve this? Of course any tests that don’t clean up after themselves should be fixed. But in reality this won’t always work — especially given the fact that Windows will not allow open files to be deleted. So the practical solution is to regularly clean out the temporary directory. In fact, it’s quite easy to set up a little Pulse project that will do just that, and let Pulse do the work of scheduling it via a cron trigger. With Pulse in control of the scheduling there is no risk the cleanup will overlap with another build.
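
For what it’s worth, the cleanup step itself need not be fancy. A minimal sketch in Java of the kind of job such a project could run (the one-day age threshold is an arbitrary choice; tune it to your builds):

import java.io.File;

public class TempJanitor
{
    private static final long MAX_AGE_MILLIS = 24 * 60 * 60 * 1000L;

    public static void main(String[] args)
    {
        File tempDir = new File(System.getProperty("java.io.tmpdir"));
        deleteStaleChildren(tempDir, System.currentTimeMillis() - MAX_AGE_MILLIS);
    }

    private static void deleteStaleChildren(File dir, long cutoff)
    {
        File[] children = dir.listFiles();
        if (children == null)
        {
            return;
        }

        for (File child : children)
        {
            if (child.isDirectory())
            {
                deleteStaleChildren(child, cutoff);
            }

            if (child.lastModified() < cutoff)
            {
                // delete() fails quietly on open files and non-empty
                // directories, which are simply left for the next run
                child.delete();
            }
        }
    }
}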

A more general solution is to start with a guaranteed-clean environment in the first place. After all, acceptance tests have a habit of messing with a machine in other ways too. Re-imaging the machine after each build, or using a virtual machine that can be restored to a clean state, is a more reliable way to avoid the junk. Pulse is actually designed to allow reimaging/rebooting of agents to be done in a post-stage hook — the agent management code on the master allows for agents to go offline at this point, and not try to reuse them until their status can be confirmed by a later ping.

Functional vs “Java” Style

Stephan Schmidt started it with his third point:

Do not use loops for list operations. Learning from functional languages, looping isn’t the best way to work on collections. Suppose we want to filter a list of persons to those who can drink beer. The loop version looks like:

List<Person> beerDrinkers = new ArrayList<Person>();
for (Person p : persons) {
    if (p.getAge() > 16) {
        beerDrinkers.add(p);
    }
}

This can – even in Java – be rewritten in a more functional programming style. For example, using Google Collections filter:

Predicate<HasAge> canDrinkBeer = new Predicate<HasAge>() {
    public boolean apply(HasAge hasAge) {
        return hasAge.getAge() > 16;
    }
};
List<Person> beerDrinkers = filter(persons, canDrinkBeer);

Then Cedric Beust countered with:

No loops

Java is not very well suited to a functional style, so I think the first example is more readable than the second one that uses Predicates. I’m guessing most Java programmers would agree, even those that are comfortable with comprehensions and closures and especially those that aren’t.

I guess this means I’m not “most Java programmers”. Although I find the verbosity of the anonymous Predicate in Stephan’s code lamentable, aside from that I think the functional style is far superior. And I don’t believe it should be seen as in any way at odds with “normal” Java style. There’s no magic here: it’s just a straightforward anonymous class and filtering is a very simple concept to understand. Conceptually it is no harder than the explicit loop: in fact it is easier as you know the higher purpose as soon as you see “filter”.

The higher level of abstraction is the key to its superiority. Every non-trivial Java project will include many instances of searching, filtering, folding and so on of collections. Repeating this most basic of logic time after time fills the code with mechanical details that hide its true purpose. Why not get those details right once (in a library) and then never have to see them again?

In fact, once you embrace this style, things get even better. Not only can you reuse basic operations like filtering, you can also:

  1. Compose these operations to form even higher-level manipulations.
  2. Build up a reusable set of functors (Predicates, Transforms, etc) instead of using anonymous classes.

The latter point is important, as it allows you to remove an even more damaging sort of repetition — that of your project’s domain-specific logic. Reduce Stephan’s example to:

List beerDrinkers = filter(persons, new CanDrinkBeerPredicate());

and it is clearly superior.
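
To make the possibilities concrete, here is a sketch using Google Collections’ Predicates.and to compose two reusable functors (the Person type and its getters are invented for the example):

import com.google.common.base.Predicate;
import com.google.common.base.Predicates;
import com.google.common.collect.Iterables;
import java.util.List;

class Person
{
    private final int age;
    private final String country;

    Person(int age, String country)
    {
        this.age = age;
        this.country = country;
    }

    int getAge() { return age; }
    String getCountry() { return country; }
}

// reusable functors, written once and named for their domain meaning
class CanDrinkBeerPredicate implements Predicate<Person>
{
    public boolean apply(Person person)
    {
        return person.getAge() > 16;
    }
}

class IsBelgianPredicate implements Predicate<Person>
{
    public boolean apply(Person person)
    {
        return "BE".equals(person.getCountry());
    }
}

class DrinkersReport
{
    // composition: a higher-level manipulation built from the basic parts
    Iterable<Person> findBelgianBeerDrinkers(List<Person> persons)
    {
        return Iterables.filter(persons,
                Predicates.and(new CanDrinkBeerPredicate(), new IsBelgianPredicate()));
    }
}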

Really, I think it is a shame to see Java programmers hesitant to use this style. The ideas may come from the functional world, but there is nothing but basic Java code at work here. The abstractions are both simple and powerful, which to me is the very definition of elegance.

Ready to Test: Maven + TestNG + Hamcrest + Mockito

I’m no Maven fanboy, but for a new, small Java project the ultra-fast setup time is compelling. So, for my latest little application, I decided to give it a go. Sadly, the default Maven archetype lumped my project with JUnit 3.8.1. Boo. And although the TestNG website mentions an alternative archetype, it appears to have disappeared off the face of the internet.

Luckily, dragging my project into the present wasn’t difficult. Along the way I also added my essential testing libraries: Hamcrest for matchers and Mockito for mocking (well, stubbing, but that’s another story). For posterity’s sake, and for others that share my testing tastes, here’s how it’s done.

Requirements

I’m assuming that you have Maven 2 installed already. If not, it’s trivial to:

  1. Download the latest (2.1.0 at time of writing); and
  2. Install it according to the instructions provided.

You can check if you have Maven ready to go by running:

$ mvn --version
Apache Maven 2.1.0 (r755702; 2009-03-18 19:10:27+0000)
Java version: 1.6.0_12
Java home: /usr/local/java/jdk1.6.0_12/jre
Default locale: en_AU, platform encoding: UTF-8
OS name: "linux" version: "2.6.28-11-generic" arch: "amd64" Family: "unix"

Bootstrap

With the lack of an available alternative, I found it easiest to start with the default archetype. To create a new project, run something like:

$ mvn archetype:create -DgroupId=com.mycompany.app -DartifactId=my-app

Remember that if you’ve just installed Maven it will take this opportunity to download the internet. Be patient. If you’re new to Maven, you might also want to check out the 5 minute guide which walks through this in more detail.

Check that your bare application has been created:

$ cd my-app
$ find . -type f
./src/main/java/com/mycompany/app/App.java
./src/test/java/com/mycompany/app/AppTest.java
./pom.xml
$ mvn package
...
$ java -cp target/my-app-1.0-SNAPSHOT.jar com.mycompany.app.App
Hello World!

Add the Dependencies

Next you need to update the project POM to add the testing libraries as dependencies to your build. This involves three changes:

  1. Removing the default dependency on JUnit.
  2. Adding new dependencies for TestNG, Hamcrest and Mockito to the “test” scope.
  3. Configuring the compiler to accept Java 5 source.

The last step is necessary as the Maven default setting assumes Java 1.3 source, which apart from being ancient doesn’t support goodies such as annotations that are required for the new testing libraries. Your updated pom.xml file should look something like:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>my-app</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>org.testng</groupId>
      <artifactId>testng</artifactId>
      <version>5.8</version>
      <scope>test</scope>
      <classifier>jdk15</classifier>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-all</artifactId>
      <version>1.8.0-rc2</version> 
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.hamcrest</groupId>
      <artifactId>hamcrest-all</artifactId>
      <version>1.1</version> 
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>  
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.5</source>
          <target>1.5</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

I’ve used the latest available versions of each of the libraries in this example — tweak them to suit your current reality.

Update the Sample Test

Now you’re ready to try updating the sample test case to use the trinity of TestNG, Hamcrest and Mockito. The easiest way to do this is to get Maven to generate a project for your IDE, e.g.

$ mvn eclipse:eclipse

or:

$ mvn idea:idea

Fire up your chosen IDE, open the AppTest class, and edit it to exercise all of the dependencies:

package com.mycompany.app;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.testng.annotations.Test;

import java.util.Random;

public class AppTest 
{
    @Test
    public void testApp()
    {
        Random mockRandom = mock(Random.class);
        when(mockRandom.nextInt()).thenReturn(42);
        assertThat(mockRandom.nextInt(), equalTo(42));
    }
}

What are you waiting for? Try it out:

$ mvn test
...
Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18 seconds
[INFO] Finished at: Wed Jun 24 16:11:48 BST 2009
[INFO] Final Memory: 20M/100M
[INFO] ------------------------------------------------------------------------

If you got this far, then everything you need is in place. Now you just have to … implement your project!

Extra Credit

If you poke about a bit, you will also find that the Maven Surefire plugin, which runs the tests, generates some reports by default. Along with HTML output, it also produces a JUnit-like XML report at:

target/surefire-reports/TEST-TestSuite.xml

This report is ideal for integration with a continuous integration server (in my case Pulse, naturally, but many will support it).

Happy testing!

Devoxx: Future of the JVM

Today’s theme at Devoxx, at least from the talks I chose, was very much the future of the JVM, including the future of Java. Two big things were highlighted: modularity and dynamism.

Modularity

The focus of JDK 7 will be a new module system for Java which will in turn be used to modularise the JDK itself. The plan is to develop the system in the open (no more JSR-277-style project management) as part of the recently-announced Project Jigsaw. I encourage you to read Mark Reinhold’s blog entry for details.

One neat point that stood out for me was the desire to make this fit well with OS packaging systems (think RPMs, .debs, etc) — so it will be trivial to create an OS package for a Java module. This should couple nicely with the easy availability of the Sun JDK on Linux these days (now it is open source), making Java software more integrated with the native platform.

A much more critical point, however, is the idea of versioning as an enabler to make incompatible changes. Even as the news of Java’s death continues to roll in, it appears Sun is looking to put a mechanism in place that will allow them to finally make incompatible changes. Once you have a modular JDK and the ability to express dependencies on particular versions of modules, it follows that it should be possible to make incompatible changes to those modules in new versions. Perhaps in time this will allow a much-needed cleanup of the dark corners of the platform.

This also raises big questions for both OSGi and Maven. I don’t expect either of these entrenched projects to disappear, but there is certainly overlap and a need to adapt. On the OSGi front Sun is aiming for interoperability, but they claim that the Jigsaw module system will go down to a lower level than OSGi is capable of (Update: it appears that this claim is controversial; see the comments below).

Dynamism

Stepping away from Java the language and looking at the JVM platform, Brian Goetz and Alex Buckley spoke about the move “Towards a dynamic VM”. The talk focused on the efforts in JSR-292 (aka the invokedynamic JSR) to make hosting dynamic languages on the JVM easier and more efficient. The key component of this JSR is a new invokedynamic bytecode which can be used for dynamic method dispatch (when the actual method to call depends on more than just the type of the instance it is being invoked on).

The cool thing about invokedynamic is it will allow the various dynamic language runtimes to effectively plug method lookup functionality into the JVM. This will allow each dynamic language implementation, many of which have very flexible method binding, to take full control of method selection. This is key to efficient implementation of these languages on the JVM.
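
To give a flavour of the machinery, here is a sketch against the java.lang.invoke API as it eventually shipped in JDK 7 (my illustration, not code from the talk). A language runtime can look a method up by name and signature at run time, then invoke the resulting handle directly:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class DynamicLookup
{
    public static void main(String[] args) throws Throwable
    {
        // what a dynamic runtime might do when binding a call site:
        // resolve "length" on String to a directly-invokable handle
        MethodHandle length = MethodHandles.lookup().findVirtual(
                String.class, "length", MethodType.methodType(int.class));

        // the handle can then be installed and reused without further
        // lookup; here we just call it once
        System.out.println((int) length.invokeExact("devoxx"));
    }
}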

So hey, even if Java has lost its mojo, that isn’t the point anymore. It’s the platform, stupid!

Devoxx Conference Day 1

Conference Day 1 (Devoxx Day 3) is done, and on balance it was decent, but hit-and-miss. “Special guest” RoxorLoops — a Belgian beatboxer — brought some variety to the opening. He is a seriously talented guy, and I think most of the audience enjoyed it as much as I did.

Of course the meat of the conference is the talks. Here’s a rundown of those that I attended:

  • Keynote: JavaFX: I have not taken much interest in JavaFX so far, so it was interesting to see a bit of it in action. The most interesting part was the demo of dragging a JavaFX application out of a browser and having it run standalone. It’s high on cool factor, and could be quite useful (though I doubt any user would think to try it for themselves, so…).
  • Keynote: Java and RFID: IBM and partners managed to take an interesting idea, namely running a live project at the conference tracking attendees with RFID, and turn it into a yawn-worthy presentation. I kept wanting to see the actual software details behind it (not which RFID printers I can buy…). When they did get to some software stuff, it felt like seriously over-engineered ADD (Acronym Driven Development).
  • From Concurrent to Parallel: Brian Goetz gave a polished overview of the fork-join framework. I’m into java.util.concurrent, so it’s good to get a look at the next logical step.
  • Effective Pairing: The Good, the Bad and the Ugly: Dave Nicolette had previously done this talk with rehearsed players for various pairing scenarios. As the players weren’t available, Dave took the ambitious route of getting audience volunteers to ad-lib instead. It didn’t go too badly considering the obstacles (e.g. audio not set up for the job), but on balance it was probably too ambitious. At least the audience did get involved, though.
  • The Feel of Scala: this was my favourite of the day. Bill Venners is a very clear speaker, and this talk was polished (perhaps done before). I enjoyed the focus on real code samples, which were presented in an easily-followed fashion. Bill’s own motivations for using Scala reminded me of my own desire to check it out, which I now plan to do more seriously than before.
  • Filthy Rich Android Clients: I’m planning to get a G1 so thought I’d check out some eye candy. Sadly, this talk was much too heavy on details and not enough on illustration. Learning how things work is great, but packing slides full of points won’t do it – there needs to be more sample code/hands on work. And this topic was just crying out for some eye-catching demos, of which there were too few in my opinion.
  • Jython: Jythonistas Jim Baker and Tobias Ivarsson made a sometimes-awkward pairing for this talk. They focused too much on Python and Django and not enough on Jython itself for my taste. These are fine topics and we even use Django for zutubi.com, but I was expecting more content about Jython and cool ways to leverage Python on the Java platform. They did cover some of this territory, and get full marks for live coding and real demos.

Oh, and free beer and frites is always a good way to end a day…

Zutubi @ Devoxx 2008

Next week I’ll be heading to Antwerp, Belgium for the three conference days at Devoxx (formerly Javoxx, formerly Javapolis). I don’t often go for larger conferences (too many marketroids), but have heard that Devoxx is a bit different — I guess I’ll find out!

I’m keen to hook up with any Zutubi customers, competitors or plain old build/continuous integration geeks while I’m there, so drop me an email if you’ll be there…

Maven – Pain = Gradle?

Being in the continuous integration game, it’s part of my job to keep an eye on build tools and technologies. Occasionally I hear of something interesting enough to try out, so when I next start a small project I’ll give it a go.

This time it is the turn of Gradle. From the Gradle website:

Gradle is a build system which provides:

  • A very flexible general purpose build tool like Ant.
  • Switchable, build-by-convention frameworks a la Maven (for Java and Groovy projects). But we never lock you in!
  • Powerful support for multi-project builds.
  • Powerful dependency management (based on Apache Ivy).
  • Full support for your existing Maven or Ivy repository infrastructure.
  • Support for transitive dependency management without the need for remote repositories and pom.xml or ivy.xml files (optional).
  • Ant tasks as first class citizens.
  • Groovy build scripts.

Build tools that leverage scripting languages such as Groovy, Ruby and Python are all the rage. This is undoubtedly a useful feature, but so common these days that it is not a differentiating factor. After all, just adding a more concise way to write procedural build scripts is not a big win. The focus needs to be on making builds as declarative as possible.

The current king of declarative builds in the Java world is undoubtedly Maven. However, as I have said in a previous post, the current implementation of Maven leaves a lot to be desired. Still, the Maven idea of build-by-convention is a good one if it can be achieved in a flexible way. This, then, is what attracts me to Gradle — its specific goal of providing build-by-convention without the lock-in.

Installation

To begin, I set myself the lofty goal of writing a “Hello, World” build script. This gets me to the point where I have a working gradle installation. As I already had a JDK installed (the only external dependency), installation was as simple as:

$ wget http://dist.codehaus.org/gradle/gradle-0.4-all.zip
$ unzip gradle-0.4-all.zip
...
$ export GRADLE_HOME="$(pwd)/gradle-0.4"
$ export PATH="$PATH:$GRADLE_HOME/bin"
$ gradle -v
Gradle 0.4
Gradle buildtime: Tuesday, September 16, 2008 9:20:38 AM CEST
Groovy 1.5.5
Java 1.6.0_06
JVM 10.0-b22
JVM Vendor: Sun Microsystems Inc.
OS Name: Linux

Gradle build files are named “build.gradle”, so next I created a trivial example as follows:

createTask('hello') {
    println 'Hello, world!'
}

The above is normal Groovy source code, executed in an environment provided by gradle. It defines a single task which is gradle’s equivalent of a target (almost like the combination of an Ant task and target). Executing this task gives:

$ gradle -q hello
Hello, world!
$

Note that the -q flag suppresses some gradle output.

A Simple Java Project

To test gradle’s claims of build-by-convention, I next put it to work on a simple Java project. Gradle’s build-by-convention support is implemented as “plugins” for different languages, with Java and Groovy plugins provided out of the box. The project to be built is a JavaDoc doclet, so the build just needs to compile Java source files into a single jar. To make life a little interesting, the project does not fit gradle’s default conventions in two ways:

  1. It should not be named after the containing directory.
  2. The source is located under “src/java”, not “src/main/java”.

These are truly simple customisations — so you would expect them to be easily configured. And indeed they are, as shown in the build.gradle file:

name = archivesBaseName = 'com.zutubi.xmlrpc.doclet'
version = '0.1'

usePlugin('java')
sourceCompatibility = 1.5
targetCompatibility = 1.5
srcDirNames = ['java']

The first line customises the project and jar file names, and the last line is used to override the default location for Java source files. With this build file in place, I can build the jar as follows:

$ gradle -q libs
$ ls build
classes com.zutubi.xmlrpc.doclet-0.1.jar reports test-classes test-results

That was pleasantly simple! The Java plugin also gives me a bunch of other tasks for free:

$ gradle -qt
**************************************************
Project :
Task :archive_jar [:test]
Task :clean []
Task :compile [:resources]
Task :dists [:libs]
Task :eclipse [:eclipseCp, :eclipseProject]
Task :eclipseClean []
Task :eclipseCp []
Task :eclipseProject []
Task :eclipseWtpModule []
Task :init []
Task :javadoc []
Task :libs [:archive_jar, :test]
Task :resources [:init]
Task :test [:testCompile]
Task :testCompile [:testResources]
Task :testResources [:compile]
Task :upload [:uploadDists, :uploadLibs]
Task :uploadDists [:dists]
Task :uploadInternalLibs [:libs]
Task :uploadLibs [:libs]

Conclusion

So far, gradle looks very promising. The example above is too simple to judge how it would fare on a larger, more challenging build, but it shows the basics are right: at least a simple case is simple. I hope to give it a try on a larger code base soon.

At the moment the gradle project is still in its infancy, so I’ll be keeping a keen eye on its development. Indeed, if it can achieve its stated goals it will become a build tool to be reckoned with, and a tough competitor for Maven.



Checked Exceptions: Failed Experiment?

Checked exceptions are one of those Java features that tend to stir up a lively discussion. So forgive me for rehashing an old debate, but I am dismayed to see continued rejection of them as a failed feature, and even more so to see proposals to remove them from the language (not that I think this will ever happen).

In the Beginning…

Most would agree that mistakes were made in the way checked exceptions were used in the early Java years. Some libraries even go so far as to do something about it. However, an argument against the feature itself this does not make. Like anything new, people just needed time in the trenches to learn how to get the most out of checked exceptions. A key point here is that checked exceptions are available but not compulsory. This leads to a reasonable conclusion, as made in Effective Java:

Use checked exceptions for recoverable conditions and runtime exceptions for programming errors

The key word here is recoverable. If you can and should handle the exception, usually close to the source, then a checked exception is exactly what you need. This will encourage proper handling of the error case, and make it impossible not to realise the case exists. For unrecoverable problems, handling is unlikely to be close to the source, and any handling is likely to be more generic. In this case an unchecked exception is used, to avoid maintenance problems and the leakage of implementation details.
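
A small sketch of the guideline in action (the types are invented for illustration):

import java.util.HashMap;
import java.util.Map;

// recoverable condition: callers can sensibly prompt for, or default, a
// missing setting, so the compiler should force them to consider it
class MissingSettingException extends Exception
{
    public MissingSettingException(String name)
    {
        super("No value configured for setting '" + name + "'");
    }
}

public class Settings
{
    private final Map<String, String> values = new HashMap<String, String>();

    public String require(String name) throws MissingSettingException
    {
        String value = values.get(name);
        if (value == null)
        {
            throw new MissingSettingException(name);
        }
        return value;
    }

    public void set(String name, String value)
    {
        // programming error: a null name is a bug in the caller, not a
        // condition to recover from, so an unchecked exception is thrown
        if (name == null)
        {
            throw new IllegalArgumentException("name is required");
        }
        values.put(name, value);
    }
}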

Continued Criticism

None of the above is new, and to me it is not controversial. But the continued attacks on checked exceptions indicate that some still disagree. Let’s take a look at some common arguments against checked exceptions that still get airtime:

Exception Swallowing/Rethrowing

Some argue that the proliferation of code that either swallows checked exceptions (catches them without handling them) or just rethrows them all (“throws Exception”) is evidence that the feature is flawed. To me this argument carries no weight. If we look at why exception handling is abused in this way, it boils down to two possibilities:

  1. The programmer is too lazy to consider error handling properly, so just swallows or rethrows everything. A bad programmer doesn’t imply a bad feature.
  2. The programmer is frustrated by a poorly-designed API that throws checked exceptions when it shouldn’t. In this case the feature is an innocent tool in the hands of a bad API designer. Granted it took some time to discover when checked exceptions were appropriate, so many older APIs got it wrong. However, this is no reason to throw out the feature now.

You need well-designed APIs and good programmers to get quality software, whether you are using checked exceptions or not.

Maintenance Nightmare

A common complaint is how changes to the exception specification for a method bubble outwards, creating a maintenance nightmare. There are two responses to this:

  1. The exceptions thrown by your method are part of the public API. Just because this is inconvenient (and when is error handling ever convenient?) doesn’t make it false! If you want to change an API, then there will be maintenance issues – full stop.
  2. If checked exceptions are kept to cases that are recoverable, usually close to the point where they are raised, the specification will not bubble far.

Switching every last exception to unchecked won’t make maintenance problems go away; it will just make it easier to ignore them – to your own detriment.

Try/Catch Clutter

Some argue that all the try/catch blocks that checked exceptions force on us clutter up the code. Is the alternative, however, to elide the error handling? If you need to handle an error, the code needs to go somewhere. In an exception-based language, that is a try/catch block, whether your exception is checked or not. At least the exception mechanism lets you take the error handling code out of line from your actual functionality. You could argue that Java’s exception handling syntax and abstraction capabilities make error handling more repetitive than it should be – and I would agree with you. This is orthogonal to the checked vs unchecked issue, though.

Testing Is Better

Some would argue that instead of the compiler enforcing exception handling in a rigid way, it would be better to rely on tests catching problems at runtime. This is analogous to the static vs dynamic typing debate, and is admittedly not a clear cut issue. My response is that earlier and more reliable feedback (i.e. from the compiler) is clearly beneficial. Measuring the relative cost of maintaining exception specifications versus maintaining tests is more difficult. In cases where an exception almost certainly needs to be recovered from (i.e. the only case where a checked exception should be used), however, I would argue that the testing would be at least as expensive, and less reliable.

Conclusion

I have yet to hear an argument against checked exceptions that I find convincing. To me, they are a valuable language feature that, when used correctly, makes programs more robust. As such, it is nonsense to suggest throwing them out. Instead, the focus should be on encouraging their appropriate use.

Q: What Sucks More Than Java Generics?

A: Java without generics. Yes, there are many problems with the Java generics implementation. And I seriously hope that the implementation is improved in Java 7. However, it occurs to me that the problems can generally be worked around by either:

  1. Not using generics where the limitations make it too difficult; or
  2. Using unsafe casts.

In both cases, you are no worse off than you were pre-generics, where you had no choice! Despite their limitations, generics have done a lot to reduce painfully repetitive casting and mysterious interfaces (List of what now?). They also enable abstraction of many extremely common functions without losing type information. In these ways I benefit from generics every day, enough to outweigh the frustrations of erasure.
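
As a reminder of what we left behind, compare the raw and generic versions side by side:

import java.util.ArrayList;
import java.util.List;

public class GenericsExample
{
    public static void main(String[] args)
    {
        // pre-generics: a "List of what now?", with the element type held
        // only in the programmer's head and an unsafe cast at every use
        List rawNames = new ArrayList();
        rawNames.add("duke");
        String fromRaw = (String) rawNames.get(0);

        // with generics: the intent is part of the interface, and the
        // repetitive casting disappears
        List<String> names = new ArrayList<String>();
        names.add("duke");
        String fromTyped = names.get(0);

        System.out.println(fromRaw + " " + fromTyped);
    }
}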