a little madness

A man needs a little madness, or else he never dares cut the rope and be free. – Nikos Kazantzakis

Archive for the ‘Java’ Category

Open Source at Zutubi

At Zutubi we’ve reaped massive benefits over the years from open source. We’ve contributed back bits and pieces in that time — feedback and patches — but always felt we could do more. You may have noticed recent posts here about some of our projects, but we thought they deserved a bit more attention. So we’ve recently launched a new open source section on our website. This is a home for a few small projects of our own that we’ve opened up to the community. We hope you find them useful!

So far we’ve listed three projects:

  1. android-junit-report: a custom test runner for Android projects that produces XML test reports in a format similar to the Ant JUnit task.
  2. zutubi-android-ant: a collection of Ant tasks that supplement the standard Android build tooling.
  3. zutubi-diff: a Java library for reading, inspecting and applying patch files in unified diff and other formats.

Over time we hope to publish more of our work in this way. You can keep an eye out for new projects on our website and/or follow our organisation on GitHub (or indeed my own GitHub account for Android-specific projects).

And as always: feedback is welcome, so fork away!

Android JUnit Report Updates

Recently I’ve made a few updates to my custom Android test runner, android-junit-report. This runner makes it easy to integrate your Android test results into a continuous integration process by producing de facto standard JUnit-style XML reports. The latest changes bring the runner up to speed with the latest version of Android and include better documentation. A summary of the updates follows:

  • A new home page for the project.
  • Within the home page, full documentation.
  • Updates to where the reports are written by default. The runner no longer attempts to use the internal storage of the test application; instead it always defaults to the internal storage of the main application (i.e. the one under test).
  • Support for a new __external__ token that allows you to place the reports in the external storage area of the application under test (given availability and permission to do so); see the sketch after this list.
  • Changes to the token syntax to avoid the need for escaping when calling the runner via a shell.
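As a sketch of the new token in action (this assumes the runner’s reportDir argument, documented on the project page, and reuses the Ant properties from the multiple-file-support post further down), an instrumentation step that writes reports to external storage might look like:

<exec executable="${adb}" failonerror="true">
    <arg line="${adb.device.arg}"/>
    <arg value="shell"/>
    <arg value="am"/>
    <arg value="instrument"/>
    <arg value="-w"/>
    <!-- reportDir is an assumption based on the project documentation;
         __external__ resolves to the external storage of the tested app -->
    <arg value="-e"/>
    <arg value="reportDir"/>
    <arg value="__external__"/>
    <arg value="${manifest.package}/${test.runner}"/>
</exec>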

I’d like to acknowledge the help of Sebastian Schuberth and Christopher Orr in fine-tuning some aspects of the runner for this release. The latest runner version, 1.5.8, is available for download now.

Android: Easier Source Linking in Eclipse

In my prior post Android: Attaching Sources to Libraries in Eclipse I noted that ADT version r20 added the ability to link sources to libraries in the Android Dependencies classpath container in Eclipse. Although this was a welcome addition, the method used to link sources is a bit cumbersome: you must create a .properties file alongside each library, and add a property that points to the source jar.

I felt this could be a lot simpler if you are willing to adopt a convention for the location of source jars. Once the convention is established, adding linked sources is as simple as placing the source jar in the expected location. This makes the process as seamless as the Android Dependency container is for adding libraries in the first place.

So I’ve made this possible via a new libproperties task in my zutubi-android-ant project. This task inspects the jars in your libs/ directory, looks for corresponding source jars by naming convention, and adds appropriate .properties files when sources are found. If you’re willing to accept the default convention, where the source jar for library libs/foo.jar is placed at libs/src/foo-source.jar, then this simple Ant target can be used to generate the required .properties files for you:

<target name="link-sources">
    <zaa:libproperties/>
</target>

Of course you’ll need to import the zutubi-android-ant tasks into your build file first. If you want to ensure sources are present for all libraries, set the attribute failOnMissingSrc to true and your build will fail if any source jar is missing. You can even establish your own conventions, using a regular expression to match the libraries and a replacement string to generate paths to source jars:

<target name="link-sources">
    <!-- libs/alib-1.3.jar maps to libs/src/alib-src-1.3.jar -->
    <zaa:libproperties failOnMissingSrc="true"
                       jarPattern="(.+)-([0-9.]+)\.jar"
                       srcReplacement="src/$1-src-$2.jar"/>
</target>
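As noted above, the tasks must be imported into your build file before any of this will work. The canonical steps are in the project’s quick-start instructions; as a rough sketch (the antlib resource path and jar location here are assumptions), an import looks something like:

<project name="my-app" default="debug" xmlns:zaa="antlib:com.zutubi.android.ant">
    <!-- Resource path and jar location are assumptions; see the
         quick-start instructions for the canonical setup. -->
    <taskdef uri="antlib:com.zutubi.android.ant"
             resource="com/zutubi/android/ant/antlib.xml"
             classpath="tools/zutubi-android-ant.jar"/>
</project>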

Head over to the new zutubi-android-ant home page for more details, including quick-start instructions to get the tasks installed in your build in minutes.

New Android Ant Tasks: lint and ndkbuild

I’m back into Android development these days, and expect plenty more to come (especially with a Nexus 7 due to arrive today!). So my Android-related projects on GitHub should be getting some much-needed attention. In fact I’ve already added a couple of new tasks to zutubi-android-ant (my extensions to the Android SDK Ant tooling): zaa:lint and zaa:ndkbuild.

The names of the tasks are pretty self-explanatory. The zaa:lint task runs Android Lint, which checks your source for common bugs and problems. If you don’t use lint, do yourself a favour and add it to your builds; it will save you a lot of grief! The zaa:ndkbuild task runs the Android NDK build command, which builds your native code.

Both tasks are simple extensions of the venerable exec task which make it simpler to run the underlying tools regardless of their install location and host operating system. This makes it easier to write machine-independent targets in your build files, which is great for teams working in diverse environments (and, of course, your continuous integration server). The tasks rely on the standard Android Ant properties (i.e. sdk.dir and ndk.dir) to locate the tools.

Because the tasks extend the built-in exec, they support all of the standard configuration. Adding arguments, tweaking the environment, and so on should all be familiar to Ant users. In future I may expand the tasks further to support some more tailored configuration, specific to the tools, but they are already fully-capable and very useful.
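As a minimal sketch of usage (assuming the tasks are imported under the zaa prefix and relying on their default behaviour; the target names are my own):

<target name="lint" description="Runs Android Lint over the project sources">
    <zaa:lint/>
</target>

<target name="ndk-build" description="Builds native code with the Android NDK">
    <zaa:ndkbuild/>
</target>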

You’ll find more documentation in the README over at the zutubi-android-ant repository. Expect more tasks to follow, too!

Android: Attaching Sources to Libraries in Eclipse

Since ADT r17, Android projects in Eclipse have had a nice property: all jars in your libs directory have been automatically picked up by Eclipse projects under a magic “Android Dependencies” entry in the build path. Ant builds use the same conventions, so adding a library is as simple as dropping it into the right directory. Unfortunately, though, this magic came with a major limitation: there was no way to attach sources to these libraries (see issue 27940). This led me to duplicate build path entries manually just to get source attachments.

Good news: in the just-released r20 version of ADT, Google have provided a solution to this problem. It’s not well advertised, but if you check out comment 21 on the aforementioned issue, you see that by creating a properties file for each library, you can tell ADT where to find the sources. So, for example, if you have a jar named:

some-library-1.0.jar

you can create a properties file alongside it named:

some-library-1.0.jar.properties

Within the properties file you can add a src property set to the relative (or absolute, but that’s version-control-unfriendly) path of the source jar, zip or directory. Say, like me, you put it in the libs/src subdirectory, your properties file might look like:

src=src/some-library-1.0-sources.jar

Once you’ve added the properties file, refresh the project in Eclipse and voilà: the sources will be attached! A similar property named doc is supported for javadoc attachments (for example, doc=docs/some-library-1.0-javadoc.jar, an illustrative path).

Although this new feature is very welcome, I feel like it could use a bit more fine-tuning. The main issue is the need to create a separate properties file for each jar. I keep my source jars in predictable locations, with conventional names, so I should be able to configure the convention once and have everything Just Work from then on. Heck, I’m not sure why there isn’t a default convention, which I could just follow with no further configuration!

Android JUnit XML Reports: Multiple File Support

Due to popular demand, I’ve added support for multiple output files (one per test suite) to android-junit-report.

For simplicity and efficiency, android-junit-report does not produce files in the exact format used by the Ant JUnit task. In the 1.1 release there are two main differences:

  1. A single report file is produced containing all test suites.
  2. Redundant information, such as the number of cases, failures and so on, is not added as attributes on the testsuite tag.

It turns out the first of these restrictions caused issues for multiple users, whose tools were accustomed to handling a single report file per suite. So in the latest 1.2 release I have added a new multiFile option. When this option is enabled, android-junit-report produces a separate output file for each test suite. This does mean that to retrieve the results from the device you will need to pull a whole directory.

To enable this option from an Ant build, you can override the default run-tests target as follows:

<target name="run-tests" depends="-install-tested-project, install"
       description="Runs tests from the package defined in test.package property">
    <property name="reports.dir" value="${out.dir}/reports"/>
    <property name="files.dir" value="/data/data/${tested.manifest.package}/files"/>
    <echo>Cleaning up previous test reports…</echo>
    <delete dir="${reports.dir}"/>
    <exec executable="${adb}" failonerror="true">
        <arg line="${adb.device.arg}" />
        <arg value="shell" />
        <arg value="rm" />
        <arg value="${files.dir}/*" />
    </exec>
    <echo>Running tests…</echo>
    <exec executable="${adb}" failonerror="true">
        <arg line="${adb.device.arg}"/>
        <arg value="shell" />
        <arg value="am" />
        <arg value="instrument" />
        <arg value="-w" />
        <arg value="-e" />
        <arg value="coverage" />
        <arg value="@{emma.enabled}" />
        <arg value="-e" />
        <arg value="multiFile" />
        <arg value="true" />
        <arg value="${manifest.package}/${test.runner}" />
    </exec>      
    <echo>Downloading XML test reports…</echo>
    <mkdir dir="${reports.dir}"/>
    <exec executable="${adb}" failonerror="true">
        <arg line="${adb.device.arg}"/>
        <arg value="pull" />
        <arg value="${files.dir}" />
        <arg value="${reports.dir}" />
    </exec>
</target>

You can learn more about android-junit-report, and download the new release, from the project’s GitHub page.

Simpler Ant Builds With the Ant Script Library

Introduction

Ant may be unfashionable these days, but it still has its advantages. Key among these are familiarity and simplicity: most Java developers have worked with Ant, and with an Ant build what you get is what you see. A major disadvantage, though, is that Ant provides very little out of the box. When you start a new project, you’ve got a lot of grunt work to endure just to get your code compiled, packaged, and tested. An all-too-common solution, in the grand tradition of make, is to copy a build file from an existing project as an easy starting point.

Over the years, though, Ant has gradually expanded support for creating reusable build file snippets. On top of this a few projects have emerged which aim to simplify and standardise your Ant builds.

Today I’ve taken my first proper look at one of these, the Ant Script Library, and so far I like what I see.

The Ant Script Library

In the author Joe Schmetzer’s own words:

The Ant Script Library (ASL) is a collection of re-usable Ant scripts that can be imported into your own projects. The ASL provides a number of pre-defined targets that simplify setting up build scripts for a new project, bringing re-use and consistency to your own Ant scripts.

ASL consists of several Ant XML files, each of which provides a group of related functionality via predefined targets. For example, the asl-java-build.xml file defines targets for compiling and packaging Java code. The asl-java-test.xml file extends this with the ability to run JUnit tests, and so on. Essentially, ASL packages up all the grunt work, allowing you to concentrate on the small tweaks and extra targets unique to your project. The modular structure of ASL, combined with the fact that it is just Ant properties and targets, makes it easy to take what you like and leave the rest.

An Example

Allow me to illustrate with a simple project I have been playing with. This project has a straightforward directory structure:

  • <project root>
    • asl/ – the Ant Script Library
    • build.xml – Ant build file
    • lib/ – Jar file dependencies
    • src/ – Java source files
    • test/ – JUnit-based test source files

To add ASL to my project, I simply downloaded it from the project download page and unpacked it in the asl/ subdirectory of my project [1]. Then I could start with a very simple build file that supports building my code and running the tests:

<?xml version="1.0" encoding="utf-8"?>
<project name="zutubi-android-ant" default="dist">
    <property name="java-build.src-dir" location="src"/>
    <property name="java-test.src-dir" location="test"/>
    <property name="java-build.lib-dir" location="libs"/>
	
    <property name="asl.dir" value="asl"/>

    <import file="${asl.dir}/asl-java-build.xml"/>
    <import file="${asl.dir}/asl-java-test.xml"/>
</project>

Notice that I am using non-standard source locations, but that is easily tweaked using properties which are fully documented. With this tiny build file, let’s see what targets ASL provides for me:

$ ant -p
Buildfile: build.xml

Main targets:

 clean                 Deletes files generated by the build
 compile               Compiles the java source
 copy-resources        Copies resources in preparation to be packaged in jar
 dist                  Create a distributable for this java project
 generate              Generates source code
 jar                   Create a jar for this java project
 test-all              Runs all tests
 test-integration      Runs integration tests
 test-run-integration  Runs the integration tests
 test-run-unit         Runs the unit tests
 test-unit             Runs unit tests

Default target: dist

It’s delightfully simple!

Adding Reports

It gets better: ASL also provides reporting with tools like Cobertura for coverage, FindBugs for static analysis and so on via its asl-java-report.xml module. The full range of supported reports can be seen in the report-all target:

<target name="report-all"
        depends="report-javadoc, report-tests, report-cobertura, report-jdepend, report-pmd, report-cpd, report-checkstyle, report-findbugs" 
        description="Runs all reports"/>

Having support for several tools out-of-the-box is great. For my project, however, I’d like to keep my dependencies down and I don’t feel that I need all of the reporting. Although the choice of reports is not something that is parameterised by a property, it is still trivial to override by providing your own report-all target. This shows the advantage of everything being plain Ant targets:

<?xml version="1.0" encoding="utf-8"?>
<project name="zutubi-android-ant" default="dist">
    <property name="java-build.src-dir" location="src"/>
    <property name="java-test.src-dir" location="test"/>
    <property name="java-build.lib-dir" location="libs"/>
	
    <property name="asl.dir" value="asl"/>

    <import file="${asl.dir}/asl-java-build.xml"/>
    <import file="${asl.dir}/asl-java-test.xml"/>
    <import file="${asl.dir}/asl-java-report.xml"/>
    
    <target name="report-all"
            depends="report-javadoc, report-tests, report-cobertura, report-pmd, report-checkstyle" 
            description="Runs all reports"/>
</project>

Here I’ve included the java-report module, but defined my own report-all target that depends on just the reports I want. This keeps things simple, and allows me to trim out a bunch of ASL dependencies I don’t need.

Conclusion

I’ve known of ASL and such projects for a while, but this is the first time I’ve actually given one a go. Getting started was pleasantly simple, as was applying the small tweaks I needed. So next time you’re tempted to copy an Ant build file, give ASL a shot: you won’t regret it!



[1] In this case I downloaded the full tarball including dependencies, which seemed on the large side (21MB!) but in fact can be easily trimmed by removing the pieces you don’t need. Alternatively, you can start with the basic ASL install (sans dependencies) and it can pull down libraries for you. Sweet :).

Android Testing: XML Reports for Continuous Integration

Summary

This post introduces the Android JUnit Report Test Runner, a custom instrumentation test runner for Android that produces XML test reports. Using this runner you can integrate your Android test results with tools that understand the Ant JUnit task XML format, e.g. the Pulse Continuous Integration Server.

The motivation and details of the runner are discussed below. For the impatient: simply head on over to the project home page on GitHub and check out the README.

Introduction

If you’ve been following my recent posts you’ll know that I’ve been figuring out the practical aspects of testing Android applications. And if you’ve been following for longer, you might know that my day job is development of the Pulse Continuous Integration Server. So it should come as no surprise that in my latest foray into the world of Android testing I sought to bring the two together :).

Status Quo

Out of the box, the Android SDK supports running functional tests on a device or emulator via instrumentation. Running within Eclipse, you get nice integrated feedback. Unfortunately, though, there are no real options for integrating with other tools such as continuous integration servers. Test output from the standard Ant builds is designed for human consumption, and lacks the level of detail I’d like to see in my build reports.

The Solution

On the upside, having access to the Android source makes it possible to examine how the current instrumentation works, and therefore how it can be customised. I found that the default InstrumentationTestRunner may be fairly easily extended to hook in extra test listeners. So I’ve implemented a custom JUnitReportTestRunner that does just that, with a listener that generates a test report in XML format. The format is designed to be largely compatible with the output of the Ant JUnit task’s XML formatter — the most widely supported format in the Java world. Tools like Pulse can read in this format to give rich test reporting.

How It Works

As mentioned, the JUnitReportTestRunner extends the default InstrumentationTestRunner, so it can act as a drop-in replacement. The custom runner acts identically to the default, with the added side-effect of producing an XML report.

For consistency with the SDK’s support for generating coverage reports, the XML report is generated in the file storage area of the target application. The default report location is something like:

/data/data/<tested application package>/files/junit-report.xml

on the device. To retrieve the report, you can use adb pull, typically as part of your scripted build.

Using the Runner

Full details on using the runner are provided in the README on the project home page. Briefly:

  • Add the android-junit-report-<version>.jar to the libraries for your test application.
  • Replace all occurrences of android.test.InstrumentationTestRunner with com.zutubi.android.junitreport.JUnitReportTestRunner:
    • In the android:name attribute of the instrumentation tag in your test application’s AndroidManifest.xml (see the sketch after this list).
    • In the test.runner property in the Ant build for your test application (before calling the Android setup task).
    • In the Instrumentation runner field of all Android JUnit Run Configurations in your Eclipse project.
  • Add logic to your Ant build to run adb pull to retrieve the report after the tests are run.
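For example, the manifest entry ends up looking something like this sketch (the target package is illustrative):

<instrumentation android:name="com.zutubi.android.junitreport.JUnitReportTestRunner"
                 android:targetPackage="com.zutubi.android.example"
                 android:label="Tests"/>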

As an example for retrieving the report in your Ant build:

<target name="fetch-test-report">
    <echo>Downloading XML test report...</echo>
    <mkdir dir="${reports.dir}"/>
    <exec executable="${adb}" failonerror="true">
        <arg line="${adb.device.arg}"/>
        <arg value="pull" />
        <arg value="/data/data/${tested.manifest.package}/files/junit-report.xml" />
        <arg value="${reports.dir}/junit-report.xml" />
    </exec>
</target>

In the Wild

You can see a complete example of this in action in my simple DroidScope Android application. The custom runner is applied in the droidscope-test application in the test/ subdirectory. You can even see the test results being picked up by Pulse on our demo server. Note that some of the tests are pure unit tests, which are run on a regular JVM, whereas others are run with the custom runner on an emulator. It’s nice for all the results to be collected together!

Android Functional Testing vs Dependency Injection

I commonly use Dependency Injection (DI) to create testable Java code. Dependency injection is simple: instead of having your objects find their own dependencies, you pass them in via the constructor or a setter. One key advantage of this is the ability to easily substitute in stub or mock dependencies during testing.

Naturally, as I started working on an Android application, I tried to apply the same technique. Problems arose when I tried to combine DI with the Android SDK’s Testing and Instrumentation support. In particular, I have yet to find a suitable way to combine DI with functional testing of Android activities via ActivityInstrumentationTestCase2. When testing an activity using the instrumentation support, injection of dependencies is foiled by a couple of factors:

  1. Constructor injection is impossible, as activities are constructed by the framework. I experimented with various ways of creating the Activity myself, but was unable to maintain a connection with the Android system for true functional testing.
  2. Setter injection is fragile, as activities are started by the framework as soon as they are created. There is no time to set stub dependencies between the instantiation of the Activity and its activation.

Not ready to give up on DI, I scoured the web for existing solutions to this problem. Although I did find some DI libraries with Android support (notably the no-AOP variant of Guice, and roboguice, which builds upon it), the only testing support I found was restricted to unit tests. Although roboguice has support for Activities, it relies on being able to obtain a Guice Injector from somewhere — which just shifts the problem by one level of indirection.

Given how complex any DI solution was going to become (if indeed it is possible at all), I decided to step back and consider alternatives. A classic alternative to DI is the Service Locator pattern, where objects ask a central registry for their dependencies. Martin Fowler’s article Inversion of Control Containers and the Dependency Injection pattern compares and contrasts the two patterns in some detail. Most importantly: a Service Locator still allows you to substitute in different implementations of dependencies at test time. The main downside is that each class is dependent on the central registry — which can make them harder to reuse. As I’m working with Activities that are unlikely to ever be reused outside of their current application, this is no big deal.

Implementation-wise, I went with the simplest registry that works for me. I found it convenient to use my project’s Application implementation as the registry. In production, the Application onCreate callback is used to create all of the standard dependency implementations. These dependencies are accessed via simple static getters. Static setters are exposed to allow tests to drop in whatever alternative dependencies they desire. A contrived example:

public class MyApplication extends Application
{
    private static IService service;
    private static ISettings settings;

    @Override
    public void onCreate()
    {
        super.onCreate();
        if (service == null)
        {
            service = new ServiceImpl();
        }
        
        if (settings == null)
        {
            SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(getApplicationContext());
            settings = new PreferencesSettings(preferences);
        }
    }
    
    public static IService getService()
    {
        return service;
    }

    public static void setService(IService s)
    {
        service = s;
    }
    
    public static ISettings getSettings()
    {
        return settings;
    }
    
    public static void setSettings(ISettings s)
    {
        settings = s;
    }
}

I access the dependencies via the registry in my Activity’s onCreate callback:

public class MyActivity extends Activity
{
    private IService service;
    private ISettings settings;

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);

        service = MyApplication.getService();
        settings = MyApplication.getSettings();

        setContentView(R.layout.main);
        // ...
    }

    // ...
}

And I wire in my fake implementations in my functional test setUp:

public class MyActivityTest extends ActivityInstrumentationTestCase2<MyActivity>
{
    private MyActivity activity;

    public MyActivityTest()
    {
        super("com.zutubi.android.example", MyActivity.class);
    }

    @Override
    protected void setUp() throws Exception
    {
        super.setUp();        
        MyApplication.setService(new FakeService());
        MyApplication.setSettings(new FakeSettings());
        activity = getActivity();
    }
    
    public void testSomething() throws Throwable
    {
        // ...
    }
}

After all of the angst over DI, this solution is delightful in its simplicity. It also illustrates that static is not always a dirty word when it comes to testing!

Are Temp Files Slowing Your Builds Down?

Lately one of our Pulse agents has been bogged down, to the extent that some of our heavier acceptance tests started to genuinely time out. Tests failing due to environmental factors can lead to homicidal mania, so I’ve been trying to diagnose what is going on before someone gets hurt!

The box in question runs Windows Vista, and I noticed while poking around that some disk operations were very slow. In fact, deleting even a handful of files via Explorer took so long that I gave up (we’re talking hours here). About this time I fired up the Reliability and Performance Manager that comes with Vista (Control Panel > Administrative Tools). I noticed that there was constant disk activity, and a lot of it centered around C:\$MFT — the NTFS Master File Table.

I had already pared back the background tasks on this machine: the Recycle Bin was disabled, Search Indexing was turned off and Defrag ran on a regular schedule. So why was my file system so dog slow? The answer came when I looked into the AppData\Local\Temp directory for the user running the Pulse agent. The directory was filled with tens of thousands of entries, many of which were directories that themselves contained many files.

The junk that had built up in this directory was quite astounding. Although some of it can be explained by tests that don’t clean up after themselves, I believe a lot of the junk came from tests that had to be killed forcefully without time to clean up. It was also evident that every second component we were using was part of the problem – Selenium, JFreeChart, JNA, Ant and Ivy all joined the party.

So, how to resolve this? Of course any tests that don’t clean up after themselves should be fixed. But in reality this won’t always work — especially given the fact that Windows will not allow open files to be deleted. So the practical solution is to regularly clean out the temporary directory. In fact, it’s quite easy to set up a little Pulse project that will do just that, and let Pulse do the work of scheduling it via a cron trigger. With Pulse in control of the scheduling there is no risk the cleanup will overlap with another build.
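The build for such a cleanup project can boil down to a single delete task. A minimal sketch (the temp directory location is an assumption for a Vista agent, and failonerror is disabled so files locked by running processes are simply skipped):

<project name="clean-temp" default="clean-temp">
    <!-- Assumed location of the temp directory for the user running the agent. -->
    <property name="temp.dir" location="C:/Users/pulse/AppData/Local/Temp"/>

    <target name="clean-temp" description="Removes accumulated temporary files">
        <delete includeemptydirs="true" failonerror="false">
            <fileset dir="${temp.dir}" defaultexcludes="no"/>
        </delete>
    </target>
</project>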

A more general solution is to start with a guaranteed-clean environment in the first place. After all, acceptance tests have a habit of messing with a machine in other ways too. Re-imaging the machine after each build, or using a virtual machine that can be restored to a clean state, is a more reliable way to avoid the junk. Pulse is actually designed to allow reimaging/rebooting of agents to be done in a post-stage hook — the agent management code on the master allows for agents to go offline at this point, and not try to reuse them until their status can be confirmed by a later ping.