a little madness

A man needs a little madness, or else he never dares cut the rope and be free -Nikos Kazantzakis

Zutubi

Archive for August, 2006

JUnit V TestNG: Managing external dependencies

We have many components in our product that rely on external packages being available before tests will pass. For example, our make integration requires that make is installed, and our cvs integration tests require the cvs test server to be reachable.

The problem is that some of these resources will not always be available on the box running the tests. This is often the case when we are running a build on a new box / environment for the first time. What would be very nice is if the test framework could warn me when a resource is not available, and only run the tests that have a chance of succeeding, i.e. those for which the required resources are available.
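For concreteness, a resource check usually boils down to probing for a local executable (like make) or a reachable host (like the cvs test server). A minimal sketch of what such probes might look like, with hypothetical names:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ResourceChecks
{
    // True if the named command can be started on this box.
    public static boolean isCommandAvailable(String command)
    {
        try
        {
            Process p = new ProcessBuilder(command, "--version").start();
            p.waitFor();
            return true;
        }
        catch (IOException e)
        {
            // command not found on the PATH
            return false;
        }
        catch (InterruptedException e)
        {
            return false;
        }
    }

    // True if a TCP connection to host:port succeeds within the timeout.
    public static boolean isHostReachable(String host, int port, int timeoutMillis)
    {
        Socket socket = new Socket();
        try
        {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        }
        catch (IOException e)
        {
            return false;
        }
        finally
        {
            try { socket.close(); } catch (IOException e) { /* ignore */ }
        }
    }
}
```

The interesting question is not how to write these checks, but where to hook them into the test framework.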

JUnit

Since we use JUnit for testing, I had a look to see what I could do to solve this issue.

The simplest solution is to Assert.fail during the setup/before method if the resource is not available:

@Before public void checkResource()
{
  if (isResourceNotAvailable())
  {
    Assert.fail("resource not available.");
  }
}

This results in a:

java.lang.AssertionError: resource not available.
        at org.junit.Assert.fail(Assert.java:69)
        at junit4.DependencyTest.checkResource(...)
        ... a dozen more frames through the JUnit internal runners ...
        at junit.framework.JUnit4TestAdapter.run(...)

for each test that requires the unavailable resource. This is certainly effective, but a little crude, resulting in a lot of meaningless stack traces when all I need to know is that X is not available.

An alternative, in JUnit3 at least, would be to implement a custom TestDecorator that would only execute the tests it wraps if the required resources are available.

public class CheckResourceDecorator extends TestDecorator
{
  public CheckResourceDecorator(ResourceAwareTest test)
  {
    super(test);
  }

  public void run(final TestResult result)
  {
    Protectable p = new Protectable()
    {
      public void protect() throws Exception
      {
        if (isResourceAvailable())
        {
          basicRun(result);
        }
        else
        {
          System.err.println("resource not available.");
        }
      }
    };
    result.runProtected(this, p);
  }

  private boolean isResourceAvailable()
  {
    return ((ResourceAwareTest) getTest()).isResourceAvailable();
  }
}

public interface ResourceAwareTest extends Test
{
  public boolean isResourceAvailable();
}

This has the advantage of avoiding the stack traces, and allows us to control the number of error messages printed. This is certainly not as noisy as the first approach, but now we need to wrap all of our tests in this decorator, and it does not seem to work well with the JUnit 4 suites.

TestNG

I have heard some good things about TestNG lately, so how does it handle this problem?

TestNG supports the concept of test dependencies. That is, if testB depends on testA, then testB will not run until testA has run successfully.

So, what we can do is define a method to check for a resource, and make tests that require that resource dependent on that method.

@Test public void checkResourceX()
{
  // if resource X is not available, then fail.
  if (isResourceNotAvailable())
  {
    Assert.fail("resource X not found.");
  }
}

@Test(dependsOnMethods = {"checkResourceX"})
public void testThatRequiresResourceX()
{
    // run the tests.
}

The result is that if the resource is not available, you get a single failure with the message “resource X not found”, and the tests that depend on that resource are marked as skipped. That is a pretty clean outcome. We get no duplicate failures and a short error message:

java.lang.AssertionError: resource X not found.
        at org.testng.Assert.fail(Assert.java:76)
        at testng.dependency.DependencyTest.checkResourceX(...)

Short and definitely sweet.

Conclusion

Given these results, I certainly prefer the TestNG solution. The output is clear, and dependencies are declared alongside each test case on a per-test basis, not on the per-class basis of the JUnit approaches. Now if only I were using TestNG :). Maybe it is time to run the JUnitToTestNGConverter?

I am curious to know whether other people run into this issue, and if so, how you solve it. Maybe you use one of the above, or have written custom annotations or extended JUnit. Please let me know; I would love to hear about it.

Incremental schema upgrades using Hibernate

I have been inspired by recent discussions on upgrade frameworks to show how Hibernate can be used to provide simple incremental database schema maintenance. Database schema maintenance is one of the more difficult aspects of upgrading applications, particularly when the application supports multiple databases, so I am very happy that Hibernate helps out during upgrades.

SchemaUpdate

Hibernate provides a class called SchemaUpdate that is able to synchronise a set of hibernate mappings with a database schema. The following code snippet shows how easy it is:

// manually setup the hibernate configuration
Configuration config = new Configuration();

Properties props = new Properties();
props.put("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
props.put("hibernate.connection.provider_class",
 "com.zutubi.pulse.upgrade.tasks.UpgradeTaskConnectionProvider");

// slight hack to provide hibernate with access to
// the configured datasource via a static variable
// on our ConnectionProvider implementation.
UpgradeTaskConnectionProvider.dataSource = dataSource;

// use spring to help load the classpath resources.
for (String mapping : mappings)
{
  ClassPathResource resource =
                new ClassPathResource(mapping);
  config.addInputStream(resource.getInputStream());
}

// run the schema update.
new SchemaUpdate(config, props).execute(true, true);

This example uses the Spring ClassPathResource to load the mapping files from the classpath, and the UpgradeTaskConnectionProvider to inject a datasource into the process.

.hbm.xml fragments

This by itself is not overly interesting. What people usually do not realise is that the mapping files do not need to hold your entire schema. When making incremental changes to your schema, all you need in the mappings are those incremental changes. This comes in very handy when you have lots of mappings to manage.

For example, suppose you have the following mapping of a user:

<class name="com.zutubi.pulse.model.User" table="USER">

    <id name="id" type="java.lang.Long" column="ID">
        <generator class="hilo"/>
    </id>

    <property name="login" column="LOGIN" type="string"/>
    <property name="name" column="NAME" type="string"/>

</class>

Some time later, you want to store a password field with this user. Pass the following mapping to SchemaUpdate and it will add that column to your existing table, leaving the rest of the schema as it is.

<class name="com.zutubi.pulse.model.User" table="USER">

  <id name="id" type="java.lang.Long" column="ID">
    <generator class="hilo"/>
  </id>
       
  <property name="pass" column="PASS" type="string"/>

</class>

You still need to ensure that the mapping file is valid, hence the inclusion of the ID field in the second mapping.

Versioning

So, to support incremental schema upgrades within your application, you will need to keep two sets of hibernate mapping files. The first will be the latest version of your mappings. This is what is used for new installations. The second will be a set of versioned mapping fragments as described above.

You will need to version them so that you can track which fragments you need to apply, and in which order, based on the version of the schema you are upgrading from. I use directory names like build_010101 to store my schema fragments and a properties file to store the current schema version. Other people use a special table in the database to hold the current schema version. Use whichever is most appropriate to your situation.
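The selection logic itself is simple. A sketch of picking the pending fragments, assuming the build_NNNNNN directory naming convention above (the class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SchemaFragmentSelector
{
    // Given directory names like "build_010101" and the current schema
    // version, return the fragments that still need to be applied, in order.
    public static List<String> fragmentsToApply(List<String> fragmentDirs, int currentVersion)
    {
        List<String> pending = new ArrayList<String>();
        for (String dir : fragmentDirs)
        {
            // parse the numeric version out of the directory name
            int version = Integer.parseInt(dir.substring("build_".length()));
            if (version > currentVersion)
            {
                pending.add(dir);
            }
        }
        // zero-padded names sort chronologically
        Collections.sort(pending);
        return pending;
    }
}
```

Each pending fragment's mappings are then fed through SchemaUpdate in turn, and the stored schema version is bumped after each one.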

Generating upgrade SQL

For those of you who do not want to, or cannot, let Hibernate run the schema update directly, you can use the following code to generate the SQL that Hibernate would otherwise execute:

Dialect dialect = Dialect.getDialect(props);
Connection connection = dataSource.getConnection();
DatabaseMetadata meta =
    new DatabaseMetadata(connection, dialect);
String[] createSQL =
    config.generateSchemaUpdateScript(dialect, meta);

This code would replace the last line in the first example.

Things to remember about SchemaUpdate

Okay, so just a couple of final things to be aware of with Hibernate's schema update.

The hibernate schema update will:

  • create a new table
  • add a new column

The hibernate schema update will not:

  • drop a table
  • drop a column
  • change a constraint on a column
  • add a column with a not-null constraint to an existing table

Final tip

Oh, and the class name that you provide in the update mapping can be anything you want. It is not checked, which is great; otherwise you would need to handle versioning of your class files as well.

Happy upgrading!

Article: The Road To Build Enlightenment

Each day, every developer on a software team interacts with the build system several times. Despite this, many teams underplay the importance of an effective build system. Is your build system working for you, or do you merely tolerate its existence? In this article, we will journey through build system requirements starting from the most basic through to those that make the whole development process more efficient.

Read The Road To Build Enlightenment at zutubi.com.

Windows Scripts Are All Malicious

Well, if they try to do anything useful anyway. Last week I was working on improvements to the Windows service installation for Pulse. The existing service installation was handled by a pretty simple batch file wrapping the service executable. However, this method had some major limitations, mostly related to locating a compatible JVM. I needed to expand the logic in the batch script considerably as part of the solution to these limitations.

That’s when I realised: logic and batch scripts don’t really go together. In fact, scripting and batch scripts don’t really go together. Even something as simple as reading the output of an executed command is bizarre in batch (there is an obscure for loop syntax to do it). Fed up with the archaic nature and limitations of batch files, I went looking for an alternative. I had heard of PowerShell (a.k.a. Monad), but of course it is not yet widely installed. So I turned to Windows Script Host, which has been around since Windows 98. I hadn’t used it before, but I discovered you could write scripts in JScript (JavaScript-like) and easily access the filesystem and registry, so it seemed like a good fit.

In fact, apart from the pain of navigating MSDN documentation online, my foray into WSH started quite promisingly. Then, just as I was really starting to get somewhere, Norton Antivirus chimed in. Apparently, this script that I was running was trying to do something “malicious”. Whenever I tried to do something moderately useful in the script, like access file or registry information, Norton would warn me in no uncertain terms. Brilliant. No matter that I was running the script directly, just as I could run any executable that could do equally “malicious” things. I suppose Symantec doesn’t care about false positives; that might take effort. Instead, WSH has been rendered useless to software distributors, except for those willing to have their customers warned that their software is “malicious” by a virus scanner.

In the end I implemented the solution as a standalone C++ program, because at least that way I knew I could get it to work. It’s a sad state of affairs when I am forced to write a few hundred lines of C++ when a 50 line script should do. That’s what happens when a platform is completely devoid of useful scripting options.

Free Small Team Licenses for Pulse!

We are pleased to announce the immediate availability of free Small Team licenses for the Pulse continuous integration server. Small Team licenses are fully-featured licenses for up to two users and two projects on a single server.

We decided to make these licenses available for a few reasons:

  • During our careers we’ve worked for small teams without the budget for the best tools. Although Pulse is inexpensive, we don’t want it to be out of the reach of these teams while they are just starting out.
  • We have had interest from current users regarding a cheaper license for home use. Is free cheap enough for you? 😉
  • We drive the development of Pulse via user feedback. More users means more feedback and a better product in the long term. Adding a new class of users to the mix will help all of our customers.

So, get your Small Team license today and enjoy using Pulse!

If Java Could Have Just One C++ Feature…

I have been immersed in Java for a while now, but having worked in C++ for years before, there is one big thing I miss: destructors. Especially in a language with exceptions, destructors are a massive time and error saver for resource management.

Having garbage collection is nice and all, but the fact is that we deal with a multitude of resources and need to clean them all up. How do we do this in Java? The Hard Way: we need to know that streams, database connections etc. need to be closed, and we need to explicitly close them:

FileInputStream f = new FileInputStream("somefile");
// Do some stuff.
f.close();

Of course, with exceptions it gets worse. We need to guarantee that the stream is closed even if an exception is thrown, leading to the oft-seen pattern:

FileInputStream f = null;
try
{
    f = new FileInputStream("somefile");
    // Do some stuff
}
finally
{
    if (f != null)
    {
        try
        {
            f.close();
        }
        catch (IOException e)
        {
            // Frankly, my dear…
        }
    }
}

The noise is just incredible. A common way to reduce the noise is to use a utility function to do the null check and close, but noise still remains. Repeating the same try/finally pattern everywhere is also mind-numbing, and it can easily be forgotten, leading to incorrect code.
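Such a utility function might look like this (a sketch; the class and method names are hypothetical):

```java
import java.io.Closeable;
import java.io.IOException;

public class IOUtils
{
    // Close a stream/reader if non-null, swallowing any IOException:
    // exactly the null check and try/catch we repeat in every finally block.
    public static void closeQuietly(Closeable c)
    {
        if (c != null)
        {
            try
            {
                c.close();
            }
            catch (IOException e)
            {
                // nothing useful to do here
            }
        }
    }
}
```

With this, the finally block shrinks to `IOUtils.closeQuietly(f);`, but the try/finally scaffolding itself is still on the caller.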

In C++, this problem is solved elegantly using the Resource Acquisition Is Initialisation (RAII) pattern. This pattern dictates that resources should be acquired in a constructor and disposed of in the corresponding destructor. Combined with the deterministic destruction semantics for objects placed on the stack, this pattern removes the need for manual cleanup and with it the possibility of mistakes:

{
    std::ifstream f("somefile");
    // Do some stuff
}

Where has all the cleanup gone? It is where it should be: in the destructor for std::ifstream. The destructor is called automatically when the object goes out of scope (even if the block is exited due to an uncaught exception). The ability to create value types and place them on the stack is a more general advantage of C++, but Java can close the gap with smarter compilers [1].

Interestingly, C# comes in halfway between Java and C++ on this matter. In C#, you can employ a using statement to ensure cleanup occurs:

using (TextReader r = File.OpenText("log.txt")) {
    // Do some stuff
}

In this case the resource type must implement System.IDisposable, and Dispose is guaranteed to be called on the object at the end of the using statement. The using statement in C# is pure syntactic sugar for the try/finally pattern we bash out in Java every day.

What’s the answer for Java? [2] Well, something similar to using would be a good start, but I do feel like we should be able to do better. If we’re going to add sugar, why not let us define our own with a full-blown macro system? Difficult, yes, but perhaps easier than always playing catch up? An alternative is to try and retrofit destructors into the language [3]. It is possible to mix both garbage collection and destructors, as shown in C++/CLI [4]. However, I don’t see an elegant way to do so that improves upon what using brings. If you do, then let us all know!
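In the meantime, one idiom that gets part of the way in today's Java is "execute around": a utility owns the resource lifecycle and the caller supplies only the body, much as the using statement does. A minimal sketch, with hypothetical names and an anonymous inner class standing in for the closure Java lacks:

```java
import java.io.Closeable;
import java.io.IOException;

public class ExecuteAround
{
    // The body the caller supplies; the utility owns the cleanup.
    public interface Block<T extends Closeable>
    {
        void run(T resource) throws IOException;
    }

    // Run the block and guarantee close(), like C#'s using statement.
    public static <T extends Closeable> void with(T resource, Block<T> block)
        throws IOException
    {
        try
        {
            block.run(resource);
        }
        finally
        {
            resource.close();
        }
    }
}
```

The try/finally is written once, in the utility; the anonymous class syntax at each call site is the price we pay for the missing sugar.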


[1] It appears that Mustang already has some of the smarts, such as escape analysis.
[2] If you’re the one down the back who shouted “finalizers”: you can leave anytime you want, as long as it’s now!
[3] I said NOW!
[4] See also Herb Sutter’s excellent post on the topic: Destructors vs. GC? Destructors + GC!.

Do the Simplest Thing That Will Knock Their Socks Off

It appears that I am not the only one wondering whether the pragmatic, keep-it-real approach to software development will result in programs that are missing that something extra: those features that you cannot do without the moment you find them.

Kathy writes:

Our users will tell us where the pain is. Our users will drive incremental improvements. But the user community can’t do the revolutionary innovation for us. That’s up to us.

Dave takes a more direct approach when he writes:

It does seem, though, that agile teams will be less likely to either prioritize or implement some of the more subtle touches of the kind that the Wired article discusses. When forced to choose between the features “Send email”, and “Implement graceful date column resizing”, guess which one is likely to get short shrift.

An excellent example of this type of feature, a feature that you didn’t know you needed until you saw it, is iTunes smart shuffle. It “allows you to control how likely you are to hear multiple songs in a row by the same artist or from the same album”.

[Image: itunes-smartshuffle.png — the iTunes smart shuffle setting]

I don’t know how I managed without it.

I cannot help but wonder whether this feature would have been added under the agile, keep-it-real approach. It is not an essential feature, and I would not have noticed if it were not there. But the fact that it is makes me a very happy user.