a little madness

A man needs a little madness, or else he never dares cut the rope and be free -Nikos Kazantzakis

Archive for February, 2008

Acceptance Testing: Getting to the Gravy Phase

Pulse 2.0 has a completely overhauled configuration UI. The current 1.2 UI uses standard web forms, whereas the new UI is ExtJS-based and makes heavy use of AJAX. This makes for a huge improvement in usability, unfortunately at the cost of making all of our existing acceptance tests for the UI redundant! So, we needed to go back to square one with our acceptance test suite. It was a painful process, but going through it again reminded me of something important:

The setup cost of your first few acceptance tests is high, but after that it is all gravy.

The process from zero to gravy is similar for most projects, so I’ve summarised the phases we experienced below.

Phase 1: Choosing the Technology

The first step was to switch out jWebUnit for Selenium. Although jWebUnit can execute much of the Javascript in our new UI, it falls down in two key ways:

  1. Executing much or even most of the Javascript is not good enough – any script that does not run can make an important test impossible to write.
  2. It does nothing to test real-world differences in the various Javascript engines that are the source of many bugs.

As Selenium drives the actual browser, you can test anything that will run in the browser and can also test compatibility. The main (well-known) drawback is that Selenium is slow. As important as speed can be, however, accuracy matters far more.

Phase 2: Scripting Setup and Teardown

This is where the slog started. The most accurate way to acceptance test is to start with your actual release artifact (installer, tarball, WAR file, whatever). In our case, this is the Pulse package, which comes in various forms. Before we can actually test the UI, we need to get the Pulse server unpacked, started and set up. When the tests are complete, we also need scripts to stop the server and clean up. This way we can integrate the tests into our build and, of course, run the acceptance tests using our own Pulse installation!

So, a day of scripting later, and we don’t have a single test. Sigh.
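For a rough idea of the shape of that scripting, here is a simplified sketch (in Java, for illustration); the package name, directory layout and script names are placeholders rather than Pulse's real ones:

import java.io.File;

// Sketch only: unpack the release package, start the server before the suite,
// and shut it down afterwards.  All paths and script names are hypothetical.
public class PulseServerFixture
{
    private static final File WORK_DIR = new File("working/acceptance");

    public static void setUp() throws Exception
    {
        // Unpack the actual release artifact that we ship to users.
        run(WORK_DIR, "tar", "-xzf", "pulse-2.0.0.tar.gz");

        // Launch the server via its startup script (assumed here to return
        // once the server is running in the background).  In reality we would
        // then poll the server URL until it responds; that is omitted here.
        run(new File(WORK_DIR, "pulse-2.0.0"), "./bin/startup.sh");
    }

    public static void tearDown() throws Exception
    {
        run(new File(WORK_DIR, "pulse-2.0.0"), "./bin/shutdown.sh");
        // ...then delete the working directory so that every run starts clean.
    }

    private static void run(File dir, String... command) throws Exception
    {
        Process p = new ProcessBuilder(command).directory(dir).redirectErrorStream(true).start();
        if (p.waitFor() != 0)
        {
            throw new RuntimeException("Command failed: " + command[0]);
        }
    }
}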

Phase 3: The First Test

Now things got serious: writing the first test was painfully hard. I constantly hit snags:

  • How do I use this newfangled Selenium thingy?
  • How do I test a UI that is asynchronous?
  • How can I verify the state of ExtJS forms?

The important thing here was to keep pushing. The amount of effort to get to the first test case is not worth it for that test alone, but I knew it was an investment in knowledge that would help us develop further tests.
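Part of the answer to the asynchronous UI question above was explicit waits on Javascript conditions. Here is a simplified sketch using the Selenium RC Java client; the condition expression and component ids are illustrative only, not Pulse's real ones:

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

// Sketch only: wait for an ExtJS component to finish rendering before
// inspecting it, instead of relying on a page-load event.
public class AsyncUIExample
{
    public static void main(String[] args)
    {
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/");
        selenium.start();
        selenium.open("/admin/projects/");

        // Wait for a Javascript condition evaluated in the browser; the exact
        // expression depends on how the form exposes itself (hypothetical id).
        selenium.waitForCondition(
                "selenium.browserbot.getCurrentWindow().Ext.getCmp('project.form') != null",
                "30000");

        // Once the component is there, its fields can be checked like any
        // other elements on the page.
        System.out.println(selenium.getValue("name"));

        selenium.stop();
    }
}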

Phase 4: Abstraction

As I continued to write the first few test cases, repetition crept in. Verifying the state of a form is similar for all forms. Many tests navigate over the same pages, examining the state in similar ways. The code was ripe for refactoring! I started abstracting away the actual clicks, keystrokes and inspections and building up a model of the application. The classic way to do this is to represent pages and forms as individual classes that provide high-level interfaces. Over time, our tests changed from something like:

type("username", "admin")
type("password", "admin")
goTo("/admin/projects/")
waitForPageToLoad()
click("add.project")
waitForCondition("formLoaded")
type("name", "p1")
type("description", "test")
click("ok")
waitForElement("projects")
assertTrue(isElementPresent("project.p1"))

to:

loginAsAdmin()
projectsPage.goTo()
wizard = projectsPage.clickAdd()
wizard.next(name, description)
projectsPage.waitFor()
assertTrue(projectsPage.isProjectPresent(name))

and then to:

loginAsAdmin()
addProject(name, description)
assertProjectPresent(name)
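Behind calls like projectsPage.goTo() sits a page class. A simplified sketch of what such a class can look like (with illustrative locators rather than Pulse's real ones, and the wizard handling omitted):

import com.thoughtworks.selenium.Selenium;

// Sketch only: a page object that hides the raw clicks and waits behind a
// small, high-level interface.
public class ProjectsPage
{
    private final Selenium selenium;

    public ProjectsPage(Selenium selenium)
    {
        this.selenium = selenium;
    }

    public void goTo()
    {
        selenium.open("/admin/projects/");
        waitFor();
    }

    public void waitFor()
    {
        // The page renders asynchronously, so poll for a known element rather
        // than relying on a page-load event.
        for (int i = 0; i < 60 && !selenium.isElementPresent("projects"); i++)
        {
            try
            {
                Thread.sleep(500);
            }
            catch (InterruptedException e)
            {
                throw new RuntimeException(e);
            }
        }
    }

    public void clickAdd()
    {
        // In the real suite this returns a wizard page object for the add
        // project form; that is omitted from this sketch.
        selenium.click("add.project");
    }

    public boolean isProjectPresent(String name)
    {
        return selenium.isElementPresent("project." + name);
    }
}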

Phase 5: Gravy

With the right abstractions in place, adding acceptance tests is now much easier. The tests themselves are also easier to understand and maintain due to their declarative nature. Now we can reap the rewards of those painful early stages!

Conclusion

The process of setting up an acceptance test suite is quite daunting, and initially painful. But if you persevere and constantly look for useful abstractions, you’ll reap the rewards in the long run.

Q: What Sucks More Than Java Generics?

A: Java without generics. Yes, there are many problems with the Java generics implementation. And I seriously hope that the implementation is improved in Java 7. However, it occurs to me that the problems can generally be worked around by either:

  1. Not using generics where the limitations make it too difficult; or
  2. Using unsafe casts.

In both cases, you are no worse off than you were pre-generics, where you had no choice! Despite their limitations, generics have done a lot to reduce painfully repetitive casting and mysterious interfaces (List of what now?). They also enable abstraction of many extremely common functions without losing type information. In these ways I benefit from generics every day, enough to outweigh the frustrations of erasure.
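As a toy illustration of that everyday win, compare a trivial generic helper with its raw, cast-everywhere equivalent:

import java.util.ArrayList;
import java.util.List;

public class GenericsExample
{
    // Pre-generics: the signature says nothing about the element type, so
    // every caller must cast and only finds out it was wrong at runtime.
    public static Object firstRaw(List items)
    {
        return items.get(0);
    }

    // With generics: the same extremely common function, abstracted once,
    // without losing type information.
    public static <T> T first(List<T> items)
    {
        return items.get(0);
    }

    public static void main(String[] args)
    {
        List<String> names = new ArrayList<String>();
        names.add("pulse");

        String fromRaw = (String) firstRaw(names); // unchecked, fails at runtime if wrong
        String fromGeneric = first(names);         // checked at compile time

        System.out.println(fromRaw + " " + fromGeneric);
    }
}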

Refactoring vs Merging

Do The Simplest Thing That Could Possibly Work, then Refactor Mercilessly: two well known Extreme Rules. It’s hard to argue with the virtues of refactoring, but if you’ve ever had to manage parallel codelines then you might have a different perspective on “mercilessly”. Many types of refactoring are at odds with parallel codelines because they make merging – an already difficult task – more difficult. In particular, changes that physically move content around amongst files tend to confuse SCMs which take a file-based view of the world (i.e. the majority of them). How can we alleviate this tension? A few possibilities come to mind:

  1. Don’t do parallel development: if you don’t have to merge, you don’t have a problem. Since the act of merging itself is pure overhead to the development process, this is a tempting idea. However, reality (typically of the business kind) dictates that parallel development is necessary to some degree. How do you support customers on older versions? How do you take on large and risky subprojects without derailing the development trunk? These are scenarios where branching is valuable enough to justify the overhead of merging. It is therefore reasonable to try to minimise parallel development, but rarely possible to avoid it completely.
  2. Only refactor on the trunk (or wherever the majority of development is done). This helps to alleviate the problem to some degree, by containing the main divergence to a single codeline. It is also usually the codeline where you would naturally do most of your refactoring, as it is the bleeding edge. However, even merging a small bug fix from a maintenance branch may prove difficult due to code movements on the trunk. And this solution is not much help for active development branches that run parallel to the trunk for some time.
  3. Another Extreme Rule: Integrate Often. It is no coincidence that Perforce-speak for “merge” is “integrate”. Merging is a type of integration, and like any integration it is less painful if done regularly, before codelines have diverged too far. This is, however, a way to mitigate problems rather than solve them.
  4. Avoid problematic refactoring, such as moving code between files. In some cases it is clear that a certain refactoring is best avoided due to anticipated merging. However, avoiding all problematic refactoring is not a workable long-term strategy. At some point, the maintenance benefits of cleaning up the code will outweigh the merging cost.

In reality we use a combination of these ideas and a dose of common sense to reduce merging problems. However, the tension still exists and can be painful. A technology-based solution to this would of course be ideal, i.e. an SCM that understood refactoring and could take account of it when merging. Unfortunately I know of no such existing SCM, and there are significant barriers to creating one:

  1. Many popular SCMs of today struggle enough with the file-based model that it is hard to see them moving on to such a high level.
  2. File-based, text-diff change management applies generically to a wide range of tasks. Any tool with deeper knowledge of actual file contents would likely trade off some of this genericity.
  3. Any “smart” merging algorithm will still have many corner cases where the right choice is ambiguous, and making these comprehensible to the poor human that needs to resolve them is more difficult when the algorithm is more complicated.

On a more positive note, there has been some welcome innovation in the SCM space recently. Perhaps I have overlooked solutions that exist now, or perhaps the growing competition will spur innovation in this area. Either way, if you have any ideas, let me know!

Zutubi London Office

Wondering why everything has gone quiet over here? Well, all should become clear now: I have just completed a move from rainy Sydney to sunny London¹. Combine the Christmas break with an overseas move and you have a recipe for zero blog posts!

The good news is that I am back in action in a new home in central London (Baker Street area). So, for now at least, Zutubi is operating in both Sydney (Daniel – GMT+11) and London (me – GMT) – the company never sleeps!

I intend to travel to quite a bit of the UK and continental Europe while we are living here. This will hopefully give me the opportunity to meet some of our European customers. Let me know if you are in the area, and perhaps we can arrange to hook up at some point over the coming months.


¹ Yes, it really has been quite sunny and mild since we got here, much to our surprise! Reports from back home tell of plenty more rain down that way.