
Pulse Continuous Integration Server 2.4 Released!

Happy days: we’ve now released Pulse 2.4! Thanks to all those that provided feedback during the Pulse 2.4 beta period. Here’s a recap of the major updates in this release:

  • Mercurial support: in the form of a new plugin.
  • Maven 3 support: including a command, post-processor and resource discovery.
  • Agents page updates: with graphical status and more convenient navigation.
  • Reworked agent status tab: with more build links and efficient live updates.
  • New agent history tab: quickly browse all builds that involved an agent.
  • Reworked server activity tab: showing build stages nested under active builds.
  • Pause server: admins can pause the build queue, so all triggers are ignored.
  • New server history tab: showing build history across all projects.
  • Restyled info and messages tabs: for both the agents and server sections.
  • Improved process termination: to make terminating builds more reliable.
  • Kill build action: for quicker build termination forgoing full clean up.
  • Improved changelist views: these views have been reworked in the new style.
  • Pinned builds: mark builds that should never be deleted or cleaned.
  • Templated field actions: easily find or revert to an inherited value.
  • Introduce parent refactoring: adjust your template hierarchy over time.
  • Pluggable resource discovery: automatically locate build tools and libraries.
  • Subversion changelist support: easily submit a changelist as a personal build.
  • … and more: extra UI touches, improved performance, more plugin support implementations and more.

The new in 2.4 page on our website has more details and a few screenshots. Or you can simply download and try Pulse 2.4 for free.

Pulse 2.4 Release Candidate

After a few iterations of Pulse 2.4 beta builds, we’ve finally reached a stable enough state to declare our first release candidate. Since the original 2.4 beta post, we haven’t just been squishing bugs, but have come up with several more improvements:

  • New agent history tab: quickly browse all builds that involved an agent.
  • Pause server: admins can pause the build queue, so all triggers are ignored.
  • Improved changelist views: these views have been reworked in the new style.
  • Pinned builds: mark builds that should never be deleted or cleaned.
  • Subversion changelist support: easily submit a changelist as a personal build.
  • Faster browse view: most of these optimisations also appear in Pulse 2.3.
  • Improved process termination: to make terminating builds more reliable.

We know from feedback that these changes will be popular! Check them out yourself: release candidate builds are available from the Beta Program page.

Android JUnit XML Reports: Multiple File Support

Due to popular demand, I’ve added support for multiple output files (one per test suite) to android-junit-report.

For simplicity and efficiency, android-junit-report does not produce files in the exact format used by the Ant JUnit task. In the 1.1 release there are two main differences:

  1. A single report file is produced containing all test suites.
  2. Redundant information, such as the numbers of cases and failures, is not added as attributes on the testsuite tag.

It turns out the first of these restrictions caused issues for multiple users, whose tools are accustomed to handling a single report file per suite. So in the latest 1.2 release I have added a new multiFile option. When this option is enabled, android-junit-report produces a separate output file for each test suite. This does mean that to retrieve the results from the device you will need to pull a whole directory.

To enable this option from an Ant build, you can override the default run-tests target along the following lines. This is a sketch: properties like ${adb}, ${adb.device.arg}, ${manifest.package} and ${test.runner} come from the SDK’s Ant rules, while ${reports.dir} and ${tested.manifest.package} are this build’s own:

<target name="run-tests">
    <echo>Cleaning up previous test reports...</echo>
    <delete dir="${reports.dir}"/>
    <mkdir dir="${reports.dir}"/>

    <echo>Running tests...</echo>
    <exec executable="${adb}" failonerror="true">
        <arg line="${adb.device.arg}"/>
        <arg value="shell"/>
        <arg value="am"/>
        <arg value="instrument"/>
        <arg value="-w"/>
        <!-- Turn on one-report-file-per-suite mode. -->
        <arg value="-e"/>
        <arg value="multiFile"/>
        <arg value="true"/>
        <arg value="${manifest.package}/${test.runner}"/>
    </exec>

    <echo>Downloading XML test reports...</echo>
    <exec executable="${adb}" failonerror="true">
        <arg line="${adb.device.arg}"/>
        <arg value="pull"/>
        <!-- Pull the whole files directory, as there are now multiple reports. -->
        <arg value="/data/data/${tested.manifest.package}/files"/>
        <arg value="${reports.dir}"/>
    </exec>
</target>
You can learn more about android-junit-report, and download the new release, from the project home page on GitHub.

Android JUnit XML Reports: Now With Test Durations

I’ve been planning to add test case durations to the XML reports generated by android-junit-report for some time. This morning, however, the magic of open source caught up with me. I received a pull request from another GitHub user who had implemented durations already!

So, with thanks to Tim from todoroo, there is a new release of android-junit-report which outputs the duration (in seconds) for each test case. This matches the output produced by the regular Ant JUnit report task, and thus should be compatible with all tools that read the format. Durations are not added to test suites, for the same reason the test case counts are not: it would require buffering. For my own use this is no big deal, because Pulse will sum the times of all cases to give an idea of the total suite time when it is not provided directly.
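
To make the change concrete, a single-file report now looks something like the following sketch (package and test names invented for illustration). Note the time attribute on each testcase, and that the testsuite tag carries no aggregate attributes:

<testsuites>
    <testsuite name="com.example.CalculatorTest">
        <testcase classname="com.example.CalculatorTest" name="testAdd" time="0.013"/>
        <testcase classname="com.example.CalculatorTest" name="testDivideByZero" time="0.002"/>
    </testsuite>
</testsuites>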

You can grab this new version (1.1, build 4) from the GitHub downloads page. Or, as always, you can access the latest release build directly from the build server (click on “jar” in the “featured artifacts” table on the right of the page).

Pulse Continuous Integration Server 2.2!

Big news today: Pulse 2.2 has graduated to stable! This release includes a stack of new features and improvements, including:

  • Build UI overhaul: all tabs improved and restyled.
  • New logs tab: making it easier to access stage logs.
  • Featured artifacts: choose which artifacts should appear prominently.
  • Build navigator: easily move forward and backward through history.
  • Working copy browser: view working copies for in progress builds.
  • Move refactoring: move projects and agents in the template hierarchy.
  • Template navigation: navigate directly up and down a hierarchy.
  • Subscription by label: subscribe to notifications by project groups.
  • Agent executing stages: see what all agents are building at a glance.
  • Subversion exports: for faster and smaller builds.
  • Performance improvements: key for larger installations.

See the new in 2.2 page for full details.

In conjunction with this release, we’ve also given our website a complete overhaul. The new site has a fresher look, and communicates the key features of Pulse more directly. The updates also include some new features:

  • RSS feeds for news items.
  • Links to our latest blog posts on the front page.
  • An improved buying process, allowing multiple licenses to be purchased in one transaction.
  • Self-service renewal payments – just enter your current license key and go!
  • A more user-friendly downloads page.

We hope you enjoy the new release, and the new site. And please, let us know what you think!

Simpler Ant Builds With the Ant Script Library

Introduction

Ant may be unfashionable these days, but it still has its advantages. Key among these are familiarity and simplicity: most Java developers have worked with Ant, and with an Ant build what you get is what you see. A major disadvantage, though, is that Ant provides very little out-of-the-box. When you start a new project, you’ve got a lot of grunt work to endure just to get your code compiled, packaged, and tested. An all-too-common solution, in the grand tradition of make, is to copy a build file from an existing project as an easy starting point.

Over the years, though, Ant has gradually expanded its support for creating reusable build file snippets. On top of this a few projects have emerged which aim to simplify and standardise your Ant builds, among them the Ant Script Library.

Today I’ve taken my first proper look at it, and so far I like what I see.

The Ant Script Library

In the author Joe Schmetzer’s own words:

The Ant Script Library (ASL) is a collection of re-usable Ant scripts that can be imported into your own projects. The ASL provides a number of pre-defined targets that simplify setting up build scripts for a new project, bringing re-use and consistency to your own Ant scripts.

ASL consists of several Ant XML files, each of which provides a group of related functionality via predefined targets. For example, the asl-java-build.xml file defines targets for compiling and packaging Java code. The asl-java-test.xml file extends this with the ability to run JUnit tests, and so on. Essentially, ASL packages up all the grunt work, allowing you to concentrate on the small tweaks and extra targets unique to your project. The modular structure of ASL, combined with the fact that it is just Ant properties and targets, makes it easy to take what you like and leave the rest.

An Example

Allow me to illustrate with a simple project I have been playing with. This project has a straightforward directory structure:

  • <project root>
    • asl/ – the Ant Script Library
    • build.xml – Ant build file
    • libs/ – Jar file dependencies
    • src/ – Java source files
    • test/ – JUnit-based test source files

To add ASL to my project, I simply downloaded it from the project download page and unpacked it in the asl/ subdirectory of my project [1]. I could then start with a very simple build file that supports building my code and running the tests:

<?xml version="1.0" encoding="utf-8"?>
<project name="zutubi-android-ant" default="dist">
    <property name="java-build.src-dir" location="src"/>
    <property name="java-test.src-dir" location="test"/>
    <property name="java-build.lib-dir" location="libs"/>
	
    <property name="asl.dir" value="asl"/>

    <import file="${asl.dir}/asl-java-build.xml"/>
    <import file="${asl.dir}/asl-java-test.xml"/>
</project>

Notice that I am using non-standard source locations, but that is easily tweaked using properties which are fully documented. With this tiny build file, let’s see what targets ASL provides for me:

$ ant -p
Buildfile: build.xml

Main targets:

 clean                 Deletes files generated by the build
 compile               Compiles the java source
 copy-resources        Copies resources in preparation to be packaged in jar
 dist                  Create a distributable for this java project
 generate              Generates source code
 jar                   Create a jar for this java project
 test-all              Runs all tests
 test-integration      Runs integration tests
 test-run-integration  Runs the integration tests
 test-run-unit         Runs the unit tests
 test-unit             Runs unit tests
Default target: dist

It’s delightfully simple!

Adding Reports

It gets better: ASL also provides reporting with tools like Cobertura for coverage, FindBugs for static analysis and so on via its asl-java-report.xml module. The full range of supported reports can be seen in the report-all target:

<target name="report-all"
        depends="report-javadoc, report-tests, report-cobertura, report-jdepend, report-pmd, report-cpd, report-checkstyle, report-findbugs" 
        description="Runs all reports"/>

Having support for several tools out-of-the-box is great. For my project, however, I’d like to keep my dependencies down and I don’t feel that I need all of the reporting. Although the choice of reports is not something that is parameterised by a property, it is still trivial to override by providing your own report-all target. This shows the advantage of everything being plain Ant targets:

<?xml version="1.0" encoding="utf-8"?>
<project name="zutubi-android-ant" default="dist">
    <property name="java-build.src-dir" location="src"/>
    <property name="java-test.src-dir" location="test"/>
    <property name="java-build.lib-dir" location="libs"/>
	
    <property name="asl.dir" value="asl"/>

    <import file="${asl.dir}/asl-java-build.xml"/>
    <import file="${asl.dir}/asl-java-test.xml"/>
    <import file="${asl.dir}/asl-java-report.xml"/>
    
    <target name="report-all"
            depends="report-javadoc, report-tests, report-cobertura, report-pmd, report-checkstyle" 
            description="Runs all reports"/>
</project>

Here I’ve included the java-report module, but defined my own report-all target that depends on just the reports I want. This keeps things simple, and allows me to trim out a bunch of ASL dependencies I don’t need.

Conclusion

I’ve known of ASL and such projects for a while, but this is the first time I’ve actually given one a go. Getting started was pleasantly simple, as was applying the small tweaks I needed. So next time you’re tempted to copy an Ant build file, give ASL a shot: you won’t regret it!


[1] In this case I downloaded the full tarball including dependencies, which seemed on the large side (21MB!) but in fact can be easily trimmed by removing the pieces you don’t need. Alternatively, you can start with the basic ASL install (sans dependencies) and it can pull down libraries for you. Sweet :).

Android Testing: XML Reports for Continuous Integration

Summary

This post introduces the Android JUnit Report Test Runner, a custom instrumentation test runner for Android that produces XML test reports. Using this runner you can integrate your Android test results with tools that understand the Ant JUnit task XML format, e.g. the Pulse Continuous Integration Server.

The motivation and details of the runner are discussed below. For the impatient: simply head on over to the project home page on GitHub and check out the README.

Introduction

If you’ve been following my recent posts you’ll know that I’ve been figuring out the practical aspects of testing Android applications. And if you’ve been following for longer, you might know that my day job is development of the Pulse Continuous Integration Server. So it should come as no surprise that in my latest foray into the world of Android testing I sought to bring the two together :).

Status Quo

Out of the box, the Android SDK supports running functional tests on a device or emulator via instrumentation. Running within Eclipse, you get nice integrated feedback. Unfortunately, though, there are no real options for integrating with other tools such as continuous integration servers. Test output from the standard Ant builds is designed for human consumption, and lacks the level of detail I’d like to see in my build reports.

The Solution

On the upside, having access to the Android source makes it possible to examine how the current instrumentation works, and therefore how it can be customised. I found that the default InstrumentationTestRunner may be fairly easily extended to hook in extra test listeners. So I’ve implemented a custom JUnitReportTestRunner that does just that, with a listener that generates a test report in XML format. The format is designed to be largely compatible with the output of the Ant JUnit task’s XML formatter — the most widely supported format in the Java world. Tools like Pulse can read in this format to give rich test reporting.

How It Works

As mentioned, the JUnitReportTestRunner extends the default InstrumentationTestRunner, so it can act as a drop-in replacement. The custom runner acts identically to the default, with the added side-effect of producing an XML report.
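
To sketch the mechanics (this is an illustration of the approach, not the actual android-junit-report source), a runner can override getAndroidTestRunner to hook an extra listener into the standard run:

package com.example.test; // hypothetical package

import android.test.AndroidTestRunner;
import android.test.InstrumentationTestRunner;
import junit.framework.AssertionFailedError;
import junit.framework.Test;
import junit.framework.TestListener;

public class XmlReportTestRunner extends InstrumentationTestRunner
{
    @Override
    protected AndroidTestRunner getAndroidTestRunner()
    {
        AndroidTestRunner runner = super.getAndroidTestRunner();
        // The listener is notified as each test runs, so a report can be
        // written incrementally without changing the run itself.
        runner.addTestListener(new TestListener()
        {
            public void startTest(Test test)
            {
                // Open a <testcase> element for this test.
            }

            public void endTest(Test test)
            {
                // Close the current <testcase> element.
            }

            public void addError(Test test, Throwable t)
            {
                // Record a nested <error> element with the stack trace.
            }

            public void addFailure(Test test, AssertionFailedError e)
            {
                // Record a nested <failure> element with the stack trace.
            }
        });
        return runner;
    }
}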

For consistency with the SDK’s support for generating coverage reports, the XML report is generated in the file storage area of the target application. The default report location is something like:

/data/data/<tested application package>/files/junit-report.xml

on the device. To retrieve the report, you can use adb pull, typically as part of your scripted build.

Using the Runner

Full details on using the runner are provided in the README on the project home page. Briefly:

  • Add the android-junit-report-<version>.jar to the libraries for your test application.
  • Replace all occurrences of android.test.InstrumentationTestRunner with com.zutubi.android.junitreport.JUnitReportTestRunner:
    • In the android:name attribute of the instrumentation tag in your test application’s AndroidManifest.xml.
    • In the test.runner property in the Ant build for your test application (before calling the Android setup task).
    • In the Instrumentation runner field of all Android JUnit Run Configurations in your Eclipse project.
  • Add logic to your Ant build to run adb pull to retrieve the report after the tests are run.

As an example for retrieving the report in your Ant build:

<target name="fetch-test-report">
    <echo>Downloading XML test report...</echo>
    <mkdir dir="${reports.dir}"/>
    <exec executable="${adb}" failonerror="true">
        <arg line="${adb.device.arg}"/>
        <arg value="pull" />
        <arg value="/data/data/${tested.manifest.package}/files/junit-report.xml" />
        <arg value="${reports.dir}/junit-report.xml" />
    </exec>
</target>

In the Wild

You can see a complete example of this in action in my simple DroidScope Android application. The custom runner is applied in the droidscope-test application in the test/ subdirectory. You can even see the test results being picked up by Pulse on our demo server. Note that some of the tests are pure unit tests, which are run on a regular JVM, whereas others are run with the custom runner on an emulator. It’s nice for all the results to be collected together!

Android Testing: Using Pure Unit Tests

Introduction

The Android SDK comes with support for testing, allowing tests to be run on an Android device (or emulator) via instrumentation. This is useful for functional tests that require a realistic environment, but for the majority of tests it is overkill. The instrumentation and emulation layers add complexity to the process, making tests much slower to run and harder to debug.

The good news is that there is no need to run most of your tests via instrumentation. Because Android applications consist of regular Java code, it is possible to isolate much of the implementation from the Android environment. In fact, if you’ve separated concerns in your application already, it’s likely that large parts of it are already independent of the Android APIs. Those sections of your code can be tested on a regular JVM, using the rich ecosystem of tools available for unit testing.

Unit Testing Requirements

To put this idea into practice, I set out the following requirements for unit testing my Android application:

  1. The unit tests should run on a regular JVM, with no dependency on the Android APIs or tools.
  2. It should be possible to run the tests within Eclipse.
  3. It should be possible to run tests using Ant.
  4. Running tests via Ant should produce reports suitable for use with a Continuous Integration server.

These requirements allow the tests to be run quickly within the development environment, and on every commit on a build server.

Adding a Unit Testing Project

In keeping with my existing Android project setup, I decided to use an additional project specifically for unit testing. To recap, in the original setup I had two projects:

  1. The main project: containing the application itself.
  2. The test project: containing an Android test project for instrumentation testing, in a test/ subdirectory of the root.

Both projects had Ant build files and Eclipse projects. Similar to the use of a test/ subdirectory for instrumentation tests, I added my new unit test project in a unit/ subdirectory of the root. As with the other projects, the source code for the unit tests lives in a src/ subdirectory, giving the following overall layout:

my-app/
    src/        - main application source
    test/
        src/    - functional tests
    unit/
        src/    - unit tests

Creating the Eclipse project for unit testing was trivial: I just added a new Java Project named my-app-unit. I then edited the build path of this project to depend on my main my-app project, so that I could build against the code under test.

Testing Libraries

The main tool required for this setup is a unit testing framework. I decided to go with JUnit 4 as it is well supported in Eclipse, Ant and CI servers. (JUnit is also used by the instrumentation testing support in the Android SDK.) In addition, for mocking I am a fan of Mockito. Note, though, that the beauty of using pure Java tests is that you can use any of the myriad mocking (and other) libraries out there.

For consistency with the existing projects, I added the JUnit and Mockito jars to a libs/ subdirectory of the unit project. I then added those jars to the build path of my Eclipse project, and I was ready to implement some tests!

A Trivial Test

To make sure the setup works, you can try adding a trivial JUnit 4 test case:

package com.zutubi.android.myapp;

import static org.junit.Assert.*;

import org.junit.Test;

public class MyAppTest
{
    @Test
    public void testWorld()
    {
        assertEquals(2, 1 + 1);
    }
}

If all is well you should be able to run this in Eclipse as a JUnit test case. Once you have this sanity test passing, you can proceed to some Real Tests.
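
As a taste of the mocking side, here is a small Mockito-based test. The NameSource interface and greet method are invented for the example, and declared inline to keep it self-contained:

package com.zutubi.android.myapp;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class GreetingTest
{
    // Hypothetical collaborator: in real code this would be a dependency
    // of the class under test.
    interface NameSource
    {
        String getName();
    }

    private String greet(NameSource source)
    {
        return "Hello, " + source.getName() + "!";
    }

    @Test
    public void testGreetingUsesNameSource()
    {
        // Stub the dependency rather than standing up a real implementation.
        NameSource source = mock(NameSource.class);
        when(source.getName()).thenReturn("Android");

        assertEquals("Hello, Android!", greet(source));
        verify(source).getName();
    }
}

Because this runs on a regular JVM, there is nothing Android-specific about it: any JUnit-aware tool can run it as-is.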

Adding an Ant Build

Setting up an Ant build took a little more effort than for the original projects, as their build files import Android rules from the SDK. For the unit tests, I wrote a simple build file from scratch, trying to keep within the conventions established by the Android rules:

<?xml version="1.0" encoding="UTF-8"?>
<project name="my-app-unit" default="test">
    <property name="source.dir" value="src"/>
    <property name="libs.dir" value="libs"/>

    <property name="out.dir" value="build"/>
    <property name="classes.dir" value="${out.dir}/classes"/>
    <property name="reports.dir" value="${out.dir}/reports"/>
    <property name="tested.dir" value=".."/>
    <property name="tested.classes.dir" value="${tested.dir}/build/classes"/>
    <property name="tested.libs.dir" value="${tested.dir}/libs"/>
    
    <path id="compile.classpath">
        <fileset dir="${libs.dir}" includes="*.jar"/>
        <fileset dir="${tested.libs.dir}" includes="*.jar"/>
        <pathelement location="${tested.classes.dir}"/>
    </path>

    <path id="run.classpath">
        <path refid="compile.classpath"/>
        <pathelement location="${classes.dir}"/>
    </path>
    
    <target name="clean">
        <delete dir="${out.dir}"/>
    </target>
    
    <target name="-init">
    	<mkdir dir="${out.dir}"/>
    	<mkdir dir="${classes.dir}"/>
    	<mkdir dir="${reports.dir}"/>
    </target>
    
    <target name="-compile-tested">
        <subant target="compile" buildpath="${tested.dir}"/>
    </target>
    
    <target name="compile" depends="-init,-compile-tested">
        <javac target="1.5" debug="true" destdir="${classes.dir}">
            <src path="${source.dir}"/>
            <classpath refid="compile.classpath"/>
        </javac>
    </target>
    
    <target name="run-tests" depends="compile">
        <junit printsummary="yes" failureproperty="test.failure">
            <classpath refid="run.classpath"/>
            
            <formatter type="xml"/>
            
            <batchtest todir="${reports.dir}">
                <fileset dir="${source.dir}" includes="**/*Test.java"/>
            </batchtest>
        </junit>
        
        <fail message="One or more test cases failed" if="test.failure"/>
    </target>
</project>

The run-tests target in this build file compiles all of the unit test code against the libraries in the unit test project, plus the classes and libraries from the project under test. It then runs all JUnit tests in classes that have names ending with Test, printing summarised results and producing full XML reports in build/reports/. These XML reports are ideal for integrating your results with a CI server (Pulse in my case, of course!).

Wrap Up

The Android SDK support for testing is useful for functional tests, but too slow and cumbersome for rapid-feedback unit testing. However, there is nothing to stop you from isolating the pure Java parts of your application and testing them separately. In fact this is one of those rare win-wins: by clean design of your code you also get access to all the speed and tool support of testing on a regular JVM!

Android Functional Testing vs Dependency Injection

I commonly use Dependency Injection (DI) to create testable Java code. Dependency injection is simple: instead of having your objects find their own dependencies, you pass them in via the constructor or a setter. One key advantage of this is the ability to easily substitute in stub or mock dependencies during testing.
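
For a concrete (if contrived) picture, constructor injection looks like this; the IService interface and Catalogue class are invented for illustration:

// A dependency interface, and a class that is handed an implementation
// rather than constructing or locating one itself.
interface IService
{
    String lookup(String key);
}

public class Catalogue
{
    private final IService service;

    public Catalogue(IService service)
    {
        this.service = service;
    }

    public String describe(String key)
    {
        return key + " = " + service.lookup(key);
    }
}

A test can construct a Catalogue around a stub IService directly, with no framework involved.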

Naturally, as I started working on an Android application, I tried to apply the same technique. Problems arose when I tried to combine DI with the Android SDK’s Testing and Instrumentation support. In particular, I am yet to find a suitable way to combine DI with functional testing of Android activities via ActivityInstrumentationTestCase2. When testing an activity using the instrumentation support, injection of dependencies is foiled by a couple of factors:

  1. Constructor injection is impossible, as activities are constructed by the framework. I experimented with various ways of creating the Activity myself, but was unable to maintain a connection with the Android system for true functional testing.
  2. Setter injection is fragile, as activities are started by the framework as soon as they are created. There is no time to set stub dependencies between the instantiation of the Activity and its activation.

Not ready to give DI away, I scoured the web for existing solutions to this problem. Although I did find some DI libraries with Android support (notably the no-AOP build of Guice, and roboguice, which builds upon it), the only testing support I found was restricted to unit tests. And although roboguice has support for Activities, it relies on being able to obtain a Guice Injector from somewhere – which just shifts the problem by one level of indirection.

Given how complex any DI solution was going to become (if indeed it is possible at all) I decided to step back and consider alternatives. A classic alternative to DI is the Service Locator pattern, where objects ask a central registry for their dependencies. Martin Fowler’s article Inversion of Control Containers and the Dependency Injection pattern compares and contrasts the two patterns in some detail. Most importantly, a Service Locator still allows you to substitute in different implementations of dependencies at test time. The main downside is that each class becomes dependent on the central registry, which can make it harder to reuse. As I’m working with Activities that are unlikely to ever be reused outside of their current application, this is no big deal.

Implementation-wise, I went with the simplest registry that works for me. I found it convenient to use my project’s Application implementation as the registry. In production, the Application onCreate callback is used to create all of the standard dependency implementations. These dependencies are accessed via simple static getters. Static setters are exposed to allow tests to drop in whatever alternative dependencies they desire. A contrived example:

public class MyApplication extends Application
{
    private static IService service;
    private static ISettings settings;

    @Override
    public void onCreate()
    {
        super.onCreate();
        if (service == null)
        {
            service = new ServiceImpl();
        }
        
        if (settings == null)
        {
            SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(getApplicationContext());
            settings = new PreferencesSettings(preferences);
        }
    }
    
    public static IService getService()
    {
        return service;
    }

    public static void setService(IService s)
    {
        service = s;
    }
    
    public static ISettings getSettings()
    {
        return settings;
    }
    
    public static void setSettings(ISettings s)
    {
        settings = s;
    }
}

I access the dependencies via the registry in my Activity’s onCreate callback:

public class MyActivity extends Activity
{
    private IService service;
    private ISettings settings;

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);

        service = MyApplication.getService();
        settings = MyApplication.getSettings();

        setContentView(R.layout.main);
        // ...
    }

    // ...
}

And I wire in my fake implementations in my functional test setUp:

public class MyActivityTest extends ActivityInstrumentationTestCase2<MyActivity>
{
    private MyActivity activity;

    public MyActivityTest()
    {
        super("com.zutubi.android.example", MyActivity.class);
    }

    @Override
    protected void setUp() throws Exception
    {
        super.setUp();        
        MyApplication.setService(new FakeService());
        MyApplication.setSettings(new FakeSettings());
        activity = getActivity();
    }
    
    public void testSomething() throws Throwable
    {
        // ...
    }
}

After all of the angst over DI, this solution is delightful in its simplicity. It also illustrates that static is not always a dirty word when it comes to testing!

Pulse Continuous Integration Server 2.2 Beta!

Great news: today the latest incarnation of Pulse, version 2.2, went beta! In this release we’ve focused primarily on usability, largely in the build reporting UI. A new build navigation widget allows you to easily step forwards and backwards in your build history – while sticking to the same build tab. All of the build tabs themselves have been overhauled with new styling and layout. Here’s a sneak peek at the artifacts tab, for example:

[Screenshot: the new artifacts tab]

It not only shows additional information, with greater clarity, but also allows you to sort and filter artifacts so you can find the file you are after. Other UI changes go beyond style too – for example the new build summary tab shows related links and featured artifacts for the build. More information, and screenshots, are available on the new in 2.2 page.

We’ve also squeezed in some less obvious updates, such as:

  • The much-requested ability to move projects and agents in the template hierarchy.
  • Convenient navigation up and down the template hierarchy.
  • The ability to subscribe to projects by label.
  • An option to use subversion exports for smaller and faster builds.
  • Improved cleanup of persistent working directories (when requesting a clean build).
  • Performance improvements for large configuration sets.

The first beta build, Pulse 2.2.0, is available for download now. We’d love you to give it a spin and let us know what you think!