a little madness

A man needs a little madness, or else he never dares cut the rope and be free. - Nikos Kazantzakis

Zutubi

Archive for October, 2006

Pulse 1.2 M1: Test Before You Commit

Phew, it’s been a busy time, but finally we have the first milestone build of Pulse 1.2 ready to go! The headline feature for this release is the ability to run personal builds. A personal build takes your local changes and applies them to a pulse™ build without them being submitted to your SCM first. This allows you to test your changes before submitting them to version control.

Other major features in this release include:

  • Reports: each pulse™ project now has its own “reports” page, which displays build data for the project visually. Currently, the reports show trends over time for:
    • Build results
    • Tests run per build
    • Build time
    • Stage execution time
  • Windows System Tray Notification: a new Pulse client, Stethoscope, sits in your system tray allowing you to see your project health at a glance. You can configure Stethoscope to monitor both personal builds and project builds for your selected projects. If you like, Stethoscope will pop up a message whenever a build completes.
  • Customisable Notifications: don’t like the format of your notification emails or instant messages? In pulse™ 1.2, the notification templates can be customised using FreeMarker (a small template sketch follows this list).
  • Automatic Agent Upgrades: we go to great effort to make pulse™ easy to install, upgrade and maintain. That is why in pulse™ 1.2 we have made the upgrade process even simpler by adding automatic upgrades for agent machines. Now, after you upgrade your main pulse™ server, your agents will be automatically upgraded for you!
  • Resource Configuration Wizard: on the same theme of keeping things simple, we have also added a new resource configuration wizard. This wizard makes it easy for you to configure common build dependencies, such as Java Development Kits and build tools (ant, make, etc.). We have also improved the resource auto-discovery code to detect resource versions for you: in many cases you won’t even need the wizard!
  • Anonymous Signup: you can now optionally allow users to sign up to pulse™ themselves. This lessens the burden on the pulse™ administrator by removing the need for them to create accounts. It is also perfect for public-facing servers (e.g. open source projects) where interested parties can sign up for read-only access with their own dashboard and preferences.
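As a taste of the customisable notifications, here is a minimal sketch of what a FreeMarker email template might look like. The variable names (project, build and their properties) are illustrative assumptions only, not the actual model pulse™ exposes; the templates shipped with 1.2 are the best reference:

<#-- Hypothetical notification template: variable names are illustrative -->
Subject: [pulse] ${project.name} build ${build.number}: ${build.status}

<#if build.succeeded>
All tests passed.
<#else>
The build broke. Changes in this build:
<#list build.changes as change>
  - ${change.author}: ${change.comment}
</#list>
</#if>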

Grab a milestone build now from our Early Access Program page and try it out!

SQL schema upgrades a thing of the past?

I would like to draw your attention to the recent release of the new Java persistence API for Berkeley DB. In short, Berkeley DB is designed to be a very efficient embedded database. There is no SQL query support; instead, you talk directly to its internal BTree via a very simple API. A very good write-up is available at TheServerSide for those interested in the full details.
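To give a flavour of just how simple that API is, here is a minimal sketch using the persistence API from Berkeley DB Java Edition (error handling omitted; the Person entity and store names are our own):

import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.persist.EntityStore;
import com.sleepycat.persist.PrimaryIndex;
import com.sleepycat.persist.StoreConfig;
import com.sleepycat.persist.model.Entity;
import com.sleepycat.persist.model.PrimaryKey;

@Entity
class Person
{
    @PrimaryKey
    long id;
    String address;
}

public class BerkeleyExample
{
    public static void main(String[] args) throws Exception
    {
        // Open an environment in an existing directory
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        Environment env = new Environment(new File("people-env"), envConfig);

        StoreConfig storeConfig = new StoreConfig();
        storeConfig.setAllowCreate(true);
        EntityStore store = new EntityStore(env, "people", storeConfig);

        // No SQL and no schema: just a typed index over the BTree
        PrimaryIndex<Long, Person> people =
                store.getPrimaryIndex(Long.class, Person.class);

        Person person = new Person();
        person.id = 1;
        person.address = "1 Main St. Springfield VA 20153";
        people.put(person);

        Person found = people.get(1L);
        System.out.println(found.address);

        store.close();
        env.close();
    }
}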

This style of persistence mechanism is certainly not for everyone. If you want ad hoc query support for writing reports and extracting data, look elsewhere.

If you don’t, then Berkeley DB should be considered when defining the persistence architecture of your next project; just don’t tell your DBAs. Not only is it reportedly very fast, due largely to the lack of an SQL-style interface, but it also supports transactions, hot backups and hot failover: all of the things that help you sleep at night. However, what has me intrigued is the idea of not having to deal with SQL schema migration.

I consider schema migration to be one of the more tedious, yet non-trivial, tasks required of any application that employs relational persistence. Yes, Hibernate makes this task somewhat easier to deal with. However, even with Hibernate, you will still need to roll up your sleeves and write some SQL to handle the migration of the data.

Managing schema migration with Berkeley DB is different. Whereas previously you extracted the data via SQL, converted it and then updated the DB via SQL, with Berkeley DB you just convert the data in Plain Old Java. There are some examples in their javadoc that give a reasonable idea of what is involved. Below is one of those examples: a case where the Person object’s address field is split out into a new Address object with four fields. The core of the work is done by the convert method:

public Object convert(Object fromValue) {

    // Parse the old single-string address into the new address fields
    String oldAddress = (String) fromValue;
    Map<String, Object> addressValues = new HashMap<String, Object>();
    addressValues.put("street", parseStreet(oldAddress));
    addressValues.put("city", parseCity(oldAddress));
    addressValues.put("state", parseState(oldAddress));
    addressValues.put("zipCode", parseZipCode(oldAddress));

    // Return a new raw Address object built from those fields.
    // addressType is a RawType for the Address class, looked up from
    // the entity model in the conversion's initialize method.
    return new RawObject(addressType, addressValues, null);
}
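For completeness, a conversion like this is registered as a mutation when the store is opened. A minimal sketch, assuming the convert method above lives in a class AddressConversion implementing com.sleepycat.persist.evolve.Conversion (the class version number is illustrative):

Mutations mutations = new Mutations();

// Run AddressConversion over the address field of version 0 Person objects
mutations.addConverter(
        new Converter(Person.class.getName(), 0, "address",
                      new AddressConversion()));

StoreConfig storeConfig = new StoreConfig();
storeConfig.setAllowCreate(true);
storeConfig.setMutations(mutations);
EntityStore store = new EntityStore(env, "people", storeConfig);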

Personally, I think this is a great improvement. Now, if only I had been aware of this at the start of this project, things might be a little different, and faster, and some other good stuff as well.

So are schema upgrades a thing of the past? Maybe not, but they don’t have to be a part of every project.

Lava Lamps: Still Cool

Well, it may not be a new idea, but it’s still a fun one (and still useful!). Todd reports a colleague hooking up Pulse notifications to a pair of lava lamps. The full write-up is here. Maybe we should make a boxed set? :).

The Best Tech Interview Question?

People regularly write about good and bad tech interview questions. Certain companies are famous for their interviews, there are books and websites full of such questions, and there are countless blogs on the topic. Still, many of the questions I have heard asked and seen quoted in blogs are just plain ordinary. Heck, even questions I have used myself look worse to me as time goes on.

Certain common question types are useful, such as:

  • Simple coding tasks: as a quick screen. If the candidate can’t reverse a list, it’s an immediate no.
  • Problem-solving puzzles: made famous by Microsoft. A good puzzle or two can be a fun way to find out how the candidate approaches a problem.

However, the quick screen doesn’t do the hard job of separating decent coders from great coders. And it can be hard to draw conclusions from a puzzle question: is the candidate just a fan of puzzles (i.e. clued in to the tricks)? I think part of the problem is that we need to look at the issue in a slightly different way. These questions try to emulate the challenges of the job, but they fall short in two key ways:

  1. To fit into a 10 minute question, the task is scaled down to a toy size. Nice for an exercise, but not very realistic.
  2. The interviewers themselves know the answer, giving them a position of power over the candidate. They may be biased towards their known answer, even when the candidate offers a novel alternative, and from this position they can lead or intimidate the candidate.

So what kind of question can be used to overcome these shortfalls? Simple: a question with no answer. Or, at least, a question that you don’t yet have an answer to. Every day on the job we run across these problems, and have to solve them. It’s the essence of the job. And we solve them together, not as individuals, and certainly not with an all-knowing interviewer smugly sitting on the perfect solution.

Sure, this sort of question is not nice and clean to ask. As the interviewer, you have to be willing to be as out of your depth as the candidate. The benefit is that you are now working on a level playing field with the candidate, just as you would if they were hired. This both adds realism and allows you to assess their ability to fit in with the team. You can brainstorm solutions, debate alternatives, hit dead ends and make breakthroughs together. You may find no answer, you may come across many; it’s not really important. Tomorrow always brings a new problem.

So how do you come up with these questions? One possibility is harvesting them from real life. Is there a problem you are struggling with today? If it makes sense in isolation, it could be a perfect candidate. Is there a design issue you run across every now and again, and never seem to have a good answer for? Note it down. If you’re really stuck, try what we did a couple of times at my last company: pick a random question out of a programming competition archive, one that you have never seen before. Take it into the interview blind; don’t give yourself a head start on the candidate. Some realism is lost, but many of the benefits remain.

Happy interviewing!

Annotation Patterns: The Meta Squared Pattern

Over the past couple of weeks, I have been working on adding plugin support to Pulse. One of the primary requirements is to provide as much functionality out of the box as possible, allowing plugin writers to focus on their features, not support code. For example, if a plugin requires configuration, the plugin developer should define what input is required and what validation rules should be applied, but not have to worry about the details of rendering and validating a web form. But more on that later.

Whilst doing some research into what was already being done, I came across an approach to using annotations that I have since found very useful. For reference, the project was the excellent Trails project from Brian Topping.

The approach centres on defining an annotation, then annotating the annotation itself with a reference to its handler.

@Constraint(RequiredValidator.class)
public @interface Required
{
}

In the code example above, you can see that the Required annotation has a reference to the RequiredValidator, its handler.
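For reference, the Constraint meta-annotation itself must be retained at runtime and carry the handler class. A minimal definition would look something like the following (the actual Trails declaration may differ in detail):

@Target(ElementType.ANNOTATION_TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface Constraint
{
    Class value();
}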

By adding a simple annotation processor that searches for annotated annotations and executes the handler, you have the basis of a very flexible metadata processing facility. The core of the flexibility here is that a custom annotation does not need to be registered with the processor: the information normally conveyed by the registration process is embedded in the annotation itself.

Let’s see how this works in practice by applying it to the validation domain, implementing a custom Name validation rule.

Firstly, the validator:

public class NameValidator extends RegexValidator
{
  public NameValidator()
  {
    setPattern("[a-zA-Z0-9][-a-zA-Z0-9_. ]*");
  }
}

Then the annotation:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(NameValidator.class)
public @interface Name
{
}

And finally, we apply it to our business object:

@Name
public void setMyName(String name)
{
  this.name = name;
}

The core of the annotation processor would look like:

private void processConstraints(Method method, Object obj)
{
  for (Annotation annotation : method.getAnnotations())
  {
    Constraint constraint = annotation.annotationType().
                     getAnnotation(Constraint.class);
    if (constraint != null)
    {
      Class constraintHandlerClass = constraint.value();
      // we now have the handler class; we just need to apply
      // it to the instance being validated.
    }
  }
}

private boolean isConstraint(Annotation annotation)
{
  return annotation.annotationType().
           getAnnotation(Constraint.class) != null;
}

We are also applying this same technique to the plugin framework’s automatic form generation, and anywhere else we want plugin authors to be able to use annotations to customise the default behaviour of the framework.