a little madness

A man needs a little madness, or else he never dares cut the rope and be free – Nikos Kazantzakis


Archive for the ‘Testing’ Category

Android Testing: Using Pure Unit Tests

Introduction

The Android SDK comes with support for testing, allowing tests to be run on an Android device (or emulator) via instrumentation. This is useful for functional tests that require a realistic environment, but for the majority of tests it is overkill. The instrumentation and emulation layers add complexity to the process, making tests much slower to run and harder to debug.

The good news is that there is no need to run most of your tests via instrumentation. Because Android applications consist of regular Java code, it is possible to isolate much of the implementation from the Android environment. In fact, if you’ve separated concerns in your application already, it’s likely that large parts of it are already independent of the Android APIs. Those sections of your code can be tested on a regular JVM, using the rich ecosystem of tools available for unit testing.
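For example, a class like the following (a contrived stand-in, not taken from a real application) has no dependency on any android.* class, so it can be compiled and exercised with ordinary JUnit tests on a desktop JVM:

// Hypothetical pure-Java class: no android.* imports, so it can be
// built and tested entirely outside the Android toolchain.
public class ScoreCalculator
{
    public int totalFor(int base, int multiplier)
    {
        return base * Math.max(1, multiplier);
    }
}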

Unit Testing Requirements

To put this idea into practice, I set out the following requirements for unit testing my Android application:

  1. The unit tests should run on a regular JVM, with no dependency on the Android APIs or tools.
  2. It should be possible to run the tests within Eclipse.
  3. It should be possible to run tests using Ant.
  4. Running tests via Ant should produce reports suitable for use with a Continuous Integration server.

These requirements allow the tests to be run quickly within the development environment, and on every commit on a build server.

Adding a Unit Testing Project

In keeping with my existing Android project setup, I decided to use an additional project specifically for unit testing. To recap, in the original setup I had two projects:

  1. The main project: containing the application itself.
  2. The test project: containing an Android test project for instrumentation testing, in a test/ subdirectory of the root.

Both projects had Ant build files and Eclipse projects. Similar to the use of a test/ subdirectory for instrumentation tests, I added my new unit test project in a unit/ subdirectory of the root. As with the other projects, the source code for the unit tests lives in a src/ subdirectory, giving the following overall layout:

my-app/
    src/        - main application source
    test/
        src/    - functional tests
    unit/
        src/    - unit tests

Creating the Eclipse project for unit testing was trivial: I just added a new Java Project named my-app-unit. I then edited the build path of this project to depend on my main my-app project, so that I could build against the code under test.

Testing Libraries

The main tool required for this setup is a unit testing framework. I decided to go with JUnit 4 as it is well supported in Eclipse, Ant and CI servers. (JUnit is also used by the instrumentation testing support in the Android SDK.) In addition, for mocking I am a fan of Mockito. Note, though, that the beauty of using pure Java tests is that you can use any of the myriad mocking (and other) libraries out there.

For consistency with the existing projects, I added the JUnit and Mockito jars to a libs/ subdirectory of the unit project. I then added those jars to the build path of my Eclipse project, and I was ready to implement some tests!

A Trivial Test

To make sure the setup works, you can try adding a trivial JUnit 4 test case:

package com.zutubi.android.myapp;

import static org.junit.Assert.*;

import org.junit.Test;

public class MyAppTest
{
    @Test
    public void testWorld()
    {
        assertEquals(2, 1 + 1);
    }
}

If all is well you should be able to run this in Eclipse as a JUnit test case. Once you have this sanity test passing, you can proceed to some Real Tests.
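As a taste of what those might look like, here is a sketch of a test that uses Mockito to stub out a collaborator. The ISettings interface and Greeter class are invented and nested inside the test purely to keep the example self-contained; the point is that nothing beyond JUnit and Mockito is needed on the classpath:

package com.zutubi.android.myapp;

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class GreeterTest
{
    // Hypothetical collaborator and class under test, declared here only to
    // keep the example self-contained.
    interface ISettings
    {
        String getUserName();
    }

    static class Greeter
    {
        private final ISettings settings;

        Greeter(ISettings settings)
        {
            this.settings = settings;
        }

        String greeting()
        {
            return "Hello, " + settings.getUserName() + "!";
        }
    }

    @Test
    public void greetingUsesConfiguredName()
    {
        // Stub the collaborator rather than using a real implementation.
        ISettings settings = mock(ISettings.class);
        when(settings.getUserName()).thenReturn("jason");

        assertEquals("Hello, jason!", new Greeter(settings).greeting());
        verify(settings).getUserName();
    }
}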

Adding an Ant Build

Setting up an Ant build took a little more effort than for the original projects, as their build files import Android rules from the SDK. For the unit tests, I wrote a simple build file from scratch, trying to keep within the conventions established by the Android rules:

<?xml version="1.0" encoding="UTF-8"?>
<project name="my-app-unit" default="run-tests">
    <property name="source.dir" value="src"/>
    <property name="libs.dir" value="libs"/>

    <property name="out.dir" value="build"/>
    <property name="classes.dir" value="${out.dir}/classes"/>
    <property name="reports.dir" value="${out.dir}/reports"/>
    <property name="tested.dir" value=".."/>
    <property name="tested.classes.dir" value="${tested.dir}/build/classes"/>
    <property name="tested.libs.dir" value="${tested.dir}/libs"/>
    
    <path id="compile.classpath">
        <fileset dir="${libs.dir}" includes="*.jar"/>
        <fileset dir="${tested.libs.dir}" includes="*.jar"/>
        <pathelement location="${tested.classes.dir}"/>
    </path>

    <path id="run.classpath">
        <path refid="compile.classpath"/>
        <pathelement location="${classes.dir}"/>
    </path>
    
    <target name="clean">
        <delete dir="${out.dir}"/>
    </target>
    
    <target name="-init">
    	<mkdir dir="${out.dir}"/>
    	<mkdir dir="${classes.dir}"/>
    	<mkdir dir="${reports.dir}"/>
    </target>
    
    <target name="-compile-tested">
        <subant target="compile" buildpath="${tested.dir}"/>
    </target>
    
    <target name="compile" depends="-init,-compile-tested">
        <javac target="1.5" debug="true" destdir="${classes.dir}">
            <src path="${source.dir}"/>
            <classpath refid="compile.classpath"/>
        </javac>
    </target>
    
    <target name="run-tests" depends="compile">
        <junit printsummary="yes" failureproperty="test.failure">
            <classpath refid="run.classpath"/>
            
            <formatter type="xml"/>
            
            <batchtest todir="${reports.dir}">
                <fileset dir="${source.dir}" includes="**/*Test.java"/>
            </batchtest>
        </junit>
        
        <fail message="One or more test cases failed" if="test.failure"/>
    </target>
</project>

The run-tests target in this build file compiles all of the unit test code against the libraries in the unit test project, plus the classes and libraries from the project under test. It then runs all JUnit tests in classes that have names ending with Test, printing summarised results and producing full XML reports in build/reports/. These XML reports are ideal for integrating your results with a CI server (Pulse in my case, of course!).

Wrap Up

The Android SDK support for testing is useful for functional tests, but too slow and cumbersome for rapid-feedback unit testing. However, there is nothing to stop you from isolating the pure Java parts of your application and testing them separately. In fact this is one of those rare win-wins: by designing your code cleanly you also gain all the speed and tool support of testing on a regular JVM!

Android Functional Testing vs Dependency Injection

I commonly use Dependency Injection (DI) to create testable Java code. Dependency injection is simple: instead of having your objects find their own dependencies, you pass them in via the constructor or a setter. One key advantage of this is the ability to easily substitute in stub or mock dependencies during testing.
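As a minimal sketch (ReportGenerator is an invented class; IService is the dependency interface used in the examples further down), constructor injection looks like this:

// Instead of constructing its own collaborator internally (for example
// service = new ServiceImpl();), the class receives it from outside, so a
// test can pass in a stub or a Mockito mock implementation of IService.
public class ReportGenerator
{
    private final IService service;

    public ReportGenerator(IService service)
    {
        this.service = service;
    }
}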

Naturally, as I started working on an Android application, I tried to apply the same technique. Problems arose when I tried to combine DI with the Android SDK’s Testing and Instrumentation support. In particular, I am yet to find a suitable way to combine DI with functional testing of Android activities via ActivityInstrumentationTestCase2. When testing an activity using the instrumentation support, injection of dependencies is foiled by a couple of factors:

  1. Constructor injection is impossible, as activities are constructed by the framework. I experimented with various ways of creating the Activity myself, but was unable to maintain a connection with the Android system for true functional testing.
  2. Setter injection is fragile, as activities are started by the framework as soon as they are created. There is no time to set stub dependencies between the instantiation of the Activity and its activation.

Not ready to give up on DI, I scoured the web for existing solutions to this problem. Although I did find some DI libraries with Android support (notably the no-AOP build of Guice, and roboguice, which builds upon it), the only testing support I found was restricted to unit tests. Although roboguice has support for Activities, it relies on being able to obtain a Guice Injector from somewhere — which just shifts the problem by one level of indirection.

Given how complex any DI solution was going to become (if indeed it is possible at all) I decided to step back and consider alternatives. A classic alternative to DI is the Service Locator pattern, in which objects ask a central registry for their dependencies. Martin Fowler’s article Inversion of Control Containers and the Dependency Injection pattern compares and contrasts the two patterns in some detail. Most importantly: a Service Locator still allows you to substitute in different implementations of dependencies at test time. The main downside is that each class depends on the central registry, which can make it harder to reuse. As I’m working with Activities that are unlikely to ever be reused outside of their current application, this is no big deal.

Implementation-wise, I went with the simplest registry that works for me. I found it convenient to use my project’s Application implementation as the registry. In production, the Application onCreate callback is used to create all of the standard dependency implementations. These dependencies are accessed via simple static getters. Static setters are exposed to allow tests to drop in whatever alternative dependencies they desire. A contrived example:

public class MyApplication extends Application
{
    private static IService service;
    private static ISettings settings;

    @Override
    public void onCreate()
    {
        super.onCreate();
        if (service == null)
        {
            service = new ServiceImpl();
        }
        
        if (settings == null)
        {
            SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(getApplicationContext());
            settings = new PreferencesSettings(preferences);
        }
    }
    
    public static IService getService()
    {
        return service;
    }

    public static void setService(IService s)
    {
        service = s;
    }
    
    public static ISettings getSettings()
    {
        return settings;
    }
    
    public static void setSettings(ISettings s)
    {
        settings = s;
    }
}

I access the dependencies via the registry in my Activity’s onCreate callback:

public class MyActivity extends Activity
{
    private IService service;
    private ISettings settings;

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);

        service = MyApplication.getService();
        settings = MyApplication.getSettings();

        setContentView(R.layout.main);
        // ...
    }

    // ...
}

And I wire in my fake implementations in my functional test setUp:

public class MyActivityTest extends ActivityInstrumentationTestCase2<MyActivity>
{
    private MyActivity activity;

    public MyActivityTest()
    {
        super("com.zutubi.android.example", MyActivity.class);
    }

    @Override
    protected void setUp() throws Exception
    {
        super.setUp();        
        MyApplication.setService(new FakeService());
        MyApplication.setSettings(new FakeSettings());
        activity = getActivity();
    }
    
    public void testSomething() throws Throwable
    {
        // ...
    }
}

After all of the angst over DI, this solution is delightful in its simplicity. It also illustrates that static is not always a dirty word when it comes to testing!

Pulse Continuous Integration Server 2.2 Beta!

Great news: today the latest incarnation of Pulse, version 2.2, went beta! In this release we’ve focused primarily on usability, largely in the build reporting UI. A new build navigation widget allows you to easily step forwards and backwards in your build history – while sticking to the same build tab. All of the build tabs themselves have been overhauled with new styling and layout. Here’s a sneak peek at the artifacts tab, for example:

Artifacts Tab

It not only shows additional information, with greater clarity, but also allows you to sort and filter artifacts so you can find the file you are after. Other UI changes go beyond style too – for example the new build summary tab shows related links and featured artifacts for the build. More information, and screenshots, are available on the new in 2.2 page.

We’ve also squeezed in some less obvious updates, such as:

  • The much-requested ability to move projects and agents in the template hierarchy.
  • Convenient navigation up and down the template hierarchy.
  • The ability to subscribe to projects by label.
  • An option to use Subversion exports for smaller and faster builds.
  • Improved cleanup of persistent working directories (when requesting a clean build).
  • Performance improvements for large configuration sets.

The first beta build, Pulse 2.2.0, is available for download now. We’d love you to give it a spin and let us know what you think!

Zero To Continuous Integration in Three Minutes

A large part of our focus with Pulse revolves around saving time. We started Pulse with the belief that it shouldn’t be so hard to set up a continuous integration server, nor should it take so much effort to maintain. With that in mind, I’ve highlighted the main ways we achieve simplicity and maintainability in Pulse in two new demo videos:

  • Getting Started With Pulse: in which I start from scratch, installing Pulse, adding a new project and running a first build in under three minutes. The video is unabridged: I did nothing but follow the simple steps laid out in front of me.
  • Templated Configuration: in which I demonstrate how Pulse’s unique templated configuration system saves you time configuring and (especially) maintaining your continuous integration server. Templates make CI DRY.

We focus on saving time simply because it adds a lot of value to Pulse. Our customers tell us that simplicity, maintainability and dedicated support are the main reasons they chose Pulse to manage their builds. Give it a go yourself: you can get started in no time ;).

Article: Optimise Your Acceptance Tests

In a similar vein to my previous post, I’ve revived some old posts about acceptance testing — and made significant additions. The end result is a new article:

Many developers have a love-hate relationship with automated acceptance tests. One major sticking point is the time acceptance tests take to execute, which can easily cause a blow out of project build times. In this article I’ll review 7 successful techniques we’ve put to work in our own projects to optimise our acceptance testing.

You can read the full article at zutubi.com.

Pulse 2.1.11: Get More From Your Build Agents

The latest Pulse 2.1 beta build, 2.1.11, has just been freshly baked. This build includes several new features and improvements. Prominent among them is a new “statistics” tab for agents. This tab lists various figures such as the number of recipes the agent executes each day and how long the average recipe keeps the agent busy. Statistics are also shown for agent utilisation, including a pie chart that makes it easy to visualise:

agent utilisation chart

This allows you to see if you are getting the most out of your agent machines. If you do notice a machine is underutilised, another new feature could help identify the cause: compatibility information for projects and agents. Pulse matches builds to agents by considering if the resources required for the project are all available on the agent. Now when you configure requirements, Pulse shows you which agents those requirements are compatible with. On the flip side, when configuring an agent’s available resources, Pulse shows you which projects those resources satisfy.

Other highlights in this build:

  • Optional compression of large build logs (on by default).
  • Visual indicators of which users are logged in, and last access times for all users.
  • Support for Subversion 1.6 working copies for personal builds.
  • Actions can now be performed on all descendants of a project or agent template (e.g. disable all agents with one click).
  • New options to terminate a build early if a critical stage or number of stages have already failed.
  • The system/agent info tabs now show the Pulse process environment (visible to administrators only).
  • Use of bare git repositories on the Pulse master to save disk space.

Yes, we have been busy :). Get over to our website and download the beta now — it’s free to try, and a free upgrade for customers with current support contracts!

Boost.Test XML Reports with Boost.Build

My previous post Using Boost.Test with Boost.Build illustrated how to build and run Boost.Test tests with the Boost.Build build system. For my own purposes I wanted to take this one step further by integrating Boost.Test results with continuous integration builds in Pulse.

To do this, I needed to get Boost.Test to produce XML output, at the right level of detail, which can be read by Pulse. This is another topic I have covered to some extent before: the key part is to pass the arguments --log_format=XML --log_level=test_suite to the Boost.Test binaries. The missing link is how to achieve this using Boost.Build’s run task. Recall that the syntax for the run task is as follows:

rule run (
    sources + :
    args * :
    input-files * :
    requirements * :
    target-name ? :
    default-build * )

Notice in particular that you can pass arguments just after the sources. So I updated my Jamfile to the following:

using testing ;
lib boost_unit_test_framework ;
run NumberTest.cpp /libs/number//number boost_unit_test_framework
    : --log_format=XML --log_level=test_suite
    ;

and lo, the test output was now in XML format:

$ cat bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.output 
<TestLog><TestSuite name="Number"><TestSuite name="NumberSuite"><TestCase name="checkPass"><TestingTime>0</TestingTime></TestCase><TestCase name="checkFailure"><Error file="NumberTest.cpp" line="15">check Number(2).add(2) == Number(5) failed [4 != 5]</Error><TestingTime>0</TestingTime></TestCase></TestSuite></TestSuite></TestLog>
*** 1 failure detected in test suite "Number"

EXIT STATUS: 201

The output will not exactly win awards: it has no <?xml …?> declaration, no formatting, and thanks to Boost.Test contains trailing junk. We’ve made sure that the processing in Pulse 2.1 takes care of this, though.

If you are a Pulse user looking to integrate Pulse and Boost.Test, you might also be interested in a new Cookbook article that I’ve written up on this topic.

Using Boost.Test with Boost.Build

In my earlier post C++ Unit Testing With Boost.Test I used make to build my sample code — largely because that is what I am more familiar with. If you’re using Boost for testing, though, you should also consider using it for building. From what I’ve seen you get a lot of functionality for free with Boost.Build if you’re willing to climb the learning curve. In order to help, I’ve put together a simple tutorial that combines Boost.Test and Boost.Build.

Prerequisites

In this tutorial I’m assuming you have Boost installed already. If not, you can refer to my earlier post or the Boost Getting Started Guide.

Installing Boost.Build

If, like me, you installed Boost by using the package manager on your Linux box, you may still not have Boost.Build installed. On Debian-based systems, Boost.Build requires two extra packages:

$ sudo apt-get install bjam boost-build

The bjam package installs Boost’s variant of the jam build tool, whereas the boost-build package installs a set of bjam configurations that form the actual Boost.Build system.

If you’re not lucky enough to have a boost-build package or equivalent, you can get a pre-built bjam binary and sources for Boost.Build; see the official documentation for details.

Once you have everything set up, you should be able to run bjam --version and see output similar to the following:

$ bjam --version
Boost.Build V2 (Milestone 12)
Boost.Jam 03.1.16

If you don’t see details of the Boost.Build version then it is likely you have only installed bjam and not the full Boost.Build system.

Sample Projects

To demonstrate Boost.Build’s support for multiple projects in a single tree, I split my sample code into two pieces: a simple library, and the test code itself. The library consists of a single Number class, which is an entirely contrived wrapper around an int. The test code exercises this library, and thus needs to link against it.

Boost.Build isn’t particularly fussy about how you lay out your projects, so I went for a simple structure:

$ ls -R
.:
Jamroot  number  test

./number:
Jamfile  Number.cpp  Number.hpp

./test:
Jamfile  NumberTest.cpp

The Jamroot and Jamfiles are build files used by Boost.Build. They are in the same format — the difference in name is used to indicate the top level of the project. Boost.Build subprojects inherit configuration from parent projects by searching up the directory tree for a Jamfile, and will stop when a Jamroot is reached.

Top Level

The top level Jamroot file is incredibly simple in this case:

use-project /libs/number : number ;

In fact this line isn’t even strictly necessary, but it is good practice. It assigns the symbolic name “/libs/number” to the project in the “number” subdirectory. It’s overkill for such a simple example, but this abstraction means our test project will have no dependency on the exact location of the number library. If we refactored and moved the library into a subdirectory called “math”, then we would only need to update the Jamroot.

Number Library

As mentioned above, the number library is a contrived wrapper around an int that I created simply for illustration. The interface for this library is defined in Number.hpp:

#ifndef MY_LIBRARY_H
#define MY_LIBRARY_H

#include <iostream>

class Number
{
public:
  Number(int value);

  bool operator==(const Number& other) const;

  Number add(const Number& other) const;
  Number subtract(const Number& other) const;

  int getValue() const;

private:
  int value;
};

std::ostream& operator<<(std::ostream& output, const Number& n);

#endif

Of greater interest is the Jamfile used to build the library:

project : usage-requirements <include>. ;
lib number : Number.cpp ;

Note that the single “lib” line is all that is required to build the library. The lib rule is one of the core rules provided by Boost.Build, and follows its common syntax:

rule rule-name (
     main-target-name :
     sources + :
     requirements * :
     default-build * :
     usage-requirements * )

So in this case we are instructing Boost.Build to create a library named “number” from the sources “Number.cpp”.

The project declaration, which adds usage-requirements, is a convenience for consumers of this library. This tells the build system that any project that uses the number library should have this directory “.” added to its include path. This makes it easy for those projects to include Number.hpp.

We can build the library by running bjam in the number directory:

$  bjam
...found 12 targets...
...updating 5 targets...
MkDir1 ../number/bin
MkDir1 ../number/bin/gcc-4.4.1
MkDir1 ../number/bin/gcc-4.4.1/debug
gcc.compile.c++ ../number/bin/gcc-4.4.1/debug/Number.o
gcc.link.dll ../number/bin/gcc-4.4.1/debug/libnumber.so
...updated 5 targets...

Note that by default Boost.Build produces a dynamic library, and outputs the built artifacts into configuration-specific subdirectories.

Test Project

Finally, our test project consists of a single source file, NumberTest.cpp, with a single test suite:

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Number
#include <boost/test/unit_test.hpp>
#include <Number.hpp>

BOOST_AUTO_TEST_SUITE(NumberSuite)

BOOST_AUTO_TEST_CASE(checkPass)
{
  BOOST_CHECK_EQUAL(Number(2).add(2), Number(4));
}

BOOST_AUTO_TEST_CASE(checkFailure)
{
  BOOST_CHECK_EQUAL(Number(2).add(2), Number(5));
}

BOOST_AUTO_TEST_SUITE_END()

Note the definition of BOOST_TEST_DYN_LINK: this is essential to link against the Boost.Test dynamic library. Other than that the code is fairly self explanatory.

Again, the Jamfile is what we are really interested in here:

using testing ;
lib boost_unit_test_framework ;
run NumberTest.cpp /libs/number//number boost_unit_test_framework ;

Starting from the top, the “using testing” line includes Boost.Build’s support for Boost.Test. This support includes rules for building and running tests; for example it defines the “run” rule which is used later in the file.

The “lib” line declares a pre-built library (note that it has no sources) named “boost_unit_test_framework”. We use this later for linking against the Boost.Test dynamic library.

Finally, the “run” rule is used to define how to build and run a Boost.Test executable. The syntax for this rule is:

rule run (
    sources + :
    args * :
    input-files * :
    requirements * :
    target-name ? :
    default-build * )

In our sources we include both the source file and the two libraries that we require. Note that we refer to the number project using the symbolic name declared in our Jamroot.

To build and run the tests, we simply execute bjam in the test directory:

$ bjam
...found 29 targets...
...updating 8 targets...
MkDir1 bin
MkDir1 bin/NumberTest.test
MkDir1 bin/NumberTest.test/gcc-4.4.1
MkDir1 bin/NumberTest.test/gcc-4.4.1/debug
gcc.compile.c++ bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.o
gcc.link bin/NumberTest.test/gcc-4.4.1/debug/NumberTest
testing.capture-output bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.run
====== BEGIN OUTPUT ======
Running 2 test cases...
NumberTest.cpp(18): error in "checkFailure": check Number(2).add(2) == Number(5) failed [4 != 5]

*** 1 failure detected in test suite "Number"

EXIT STATUS: 201
====== END OUTPUT ======
<snipped diagnostics>
...failed testing.capture-output bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.run...
...failed updating 1 target...
...skipped 1 target...
...updated 6 targets...

Note that the build fails as I have deliberately created a failing test case. The full output is somewhat longer due to the diagnostics given.

Wrap Up

That’s it! The impressive part is how simple it is to build two projects with an interdependency and run a test suite. In total the three build files include just six lines! And I haven’t even explored the fact that Boost.Build allows you to easily build across multiple platforms using various toolchains and configurations.

The hardest part is working through enough of the documentation to find out the few lines you need — hopefully this tutorial goes some way to removing that barrier.

Fencing Selenium With Xephyr

Earlier in the year I put Selenium in a cage using Xnest. This allows me to run browser-popping tests in the background without disturbing my desktop or (crucially) stealing my focus.

On that post Rohan stopped by to mention a nice alternative to Xnest: Xephyr. As the Xephyr homepage will tell you:

Xephyr is a kdrive based X Server which targets a window on a host X Server as its framebuffer. Unlike Xnest it supports modern X extensions ( even if host server doesn’t ) such as Composite, Damage, randr etc (no GLX support now). It uses SHM Images and shadow framebuffer updates to provide good performance. It also has a visual debugging mode for observing screen updates.

It sounded sweet, but I hadn’t tried it out until recently, on a newer box where I didn’t already have Xnest set up. The good news is the setup is as simple as with Xnest in my prior post:

  1. Install Xephyr: which runs an X server inside a window:
    $ sudo apt-get install xserver-xephyr
  2. Install a simple window manager: again, for old times’ sake, I’ve gone for fvwm:
    $ sudo apt-get install fvwm
  3. Start Xephyr: choose an unused display number (most standard setups will already be using 0) — I chose 1. As with Xnest, the -ac flag turns off access control, which you might want to be more careful about. My choice of window size is largely arbitrary:
    $ Xephyr :1 -screen 1024x768 -ac &
  4. Set DISPLAY: so that subsequent X programs connect to Xephyr, you need to set the environment variable DISPLAY to whatever you passed as the first argument to Xephyr above:
    $ export DISPLAY=:1
  5. Start your window manager: to manage windows in your nested X instance:
    $ fvwm &
  6. Run your tests: however you normally would:
    $ ant accept.master

Then just sit back and watch the browsers launched by Selenium trapped in the Xephyr window. Let’s see them take your focus now!

Pulse 2.1 Beta Rolls On

We’ve reached another significant milestone in the Pulse 2.1 beta: the release of 2.1.9. This latest build rolls up a stack of fixes, improvements and new features. Some of the much-anticipated improvements include:

  • Support for NAnt in the form of a command and post-processor.
  • Support for reading NUnit XML reports.
  • Support for reading QTestlib XML reports.
  • The ability to mark unstable tests as “expected” failures: they still look ugly (so fix them!) but won’t fail your build.
  • Better visibility of what is currently building on an agent.
  • New refactoring actions to “pull up” and “push down” configuration in the template hierarchy.
  • The ability to specify Perforce client views directly in Pulse.

I’ll expand upon some of these in later posts. In addition we’ve made great progress on the new project dependencies support, which should be both easier to use and more reliable in this build.

We’d love you to download Pulse 2.1 and let us know what you think!