a little madness

A man needs a little madness, or else he never dares cut the rope and be free -Nikos Kazantzakis

Zutubi

Archive for December, 2006

UnitTest++: The New Choice for C++ Unit Testing?

In an earlier post on C++ Unit Testing Frameworks, I came across a relatively new framework by the name of UnitTest++. At first glance, this framework appealed to me for a couple of reasons:

  • Unlike most of the other frameworks, it is relatively recent and under active development.
  • One of the originators of the project is the author of the best comparison of C++ unit testing libraries online. The experience of reviewing several other frameworks should inform the design of a new one.

So, I’ve decided to take a closer look. I’ll start in this post with the basics: how do we write tests, fixtures and suites in UnitTest++? These are the fundamentals of a unit testing library, and should be very simple to use.

First, we need the UnitTest++ distribution. It is available as a simple tarball from SourceForge. Exploding the tarball gives a basic structure with build files at the top level, and child docs and src directories. To build the library itself, on Linux at least, requires a simple make:


jsankey@shiny:~/tools/UnitTest++$ make
src/AssertException.cpp
src/Test.cpp
...
Creating libUnitTest++.a library...
src/tests/Main.cpp
src/tests/TestAssertHandler.cpp
...
Linking TestUnitTest++...
Running unit tests...
Success: 162 tests passed.
Test time: 0.31 seconds.

The primary output is libUnitTest++.a at the top level. This, along with the header files under src (excluding src/tests), forms the redistributable files needed to build against UnitTest++ in your own project. It is a little awkward that neither binary distributions nor a “dist” (or similar) Makefile target is available. However, the source tree is so simple that it is not hard to extract what you need.

Armed with the library, the next step is to create our first test case and run it. UnitTest++ makes use of macros to simplify creating a new test case. It could hardly be easier:


#include "UnitTest++.h"

TEST(MyTest)
{
    CHECK(true);
}

int main(int, char const *[])
{
    return UnitTest::RunAllTests();
}

A test case is created using the TEST macro, which takes the case name as an argument. The macro adds the test case to a global list of cases automatically. The body of the test utilises the CHECK macro to assert conditions under test. Various CHECK* macros are available for common cases. Finally, to actually run the test, we call UnitTest::RunAllTests(). This runs all cases using a default reporter that prints a result summary to standard output:


jsankey@shiny:~/repo/utpp$ ./utpp
Success: 1 tests passed.
Test time: 0.00 seconds.

RunAllTests returns the number of failed cases, so using this as the program exit code works well. If we change the check to CHECK(false), we get a failure report:


jsankey@shiny:~/repo/utpp$ ./utpp
utpp.cpp(9): error: Failure in MyTest: false
FAILURE: 1 out of 1 tests failed (1 failures).
Test time: 0.00 seconds.
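The various CHECK* macros mentioned above follow the same pattern as plain CHECK. A short sketch of the common ones (the values and test name here are illustrative only):

```cpp
#include <stdexcept>
#include "UnitTest++.h"

static void ThrowingFunction()
{
    throw std::out_of_range("bad index");
}

TEST(CheckVariants)
{
    CHECK(1 + 1 == 2);               // plain boolean condition
    CHECK_EQUAL(4, 2 + 2);           // reports expected vs actual on failure
    CHECK_CLOSE(3.14, 3.1416, 0.01); // floating point comparison with tolerance
    CHECK_THROW(ThrowingFunction(), std::out_of_range);
}
```

Because each macro records the expected and actual values, a failure report pinpoints exactly what went wrong, rather than just reporting a false condition.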

The next step is to create a test fixture, which allows us to surround our test cases with shared setup/teardown code. This is achieved in UnitTest++ by building upon standard C++ construction/destruction semantics. To create a fixture, you just create a standard C++ struct. The setup and teardown code go in the struct constructor and destructor respectively. Let’s illustrate how this works:


#include <iostream>
#include <string>
#include "UnitTest++.h"

struct MyFixture
{
    std::string testData;

    MyFixture() :
        testData("my test data")
    {
        std::cout << "my setup" << std::endl;
    }

    ~MyFixture()
    {
        std::cout << "my teardown" << std::endl;
    }
};

TEST_FIXTURE(MyFixture, MyTestCase)
{
    std::cout << testData << std::endl;
}

int main(int, char const *[])
{
    return UnitTest::RunAllTests();
}

Instead of the TEST macro, we use TEST_FIXTURE to create a test case that uses the fixture struct. The example is artificial, but serves to illustrate the order in which the functions are called. Also of interest is how members of the fixture struct are referenced directly by name within the test case. Under the covers, the TEST_FIXTURE macro derives a type from MyFixture, making this possible. Running this new program gives the following:


jsankey@shiny:~/repo/utpp$ ./utpp
my setup
my test data
my teardown
Success: 1 tests passed.
Test time: 0.01 seconds.

The setup and teardown wrap execution of the test case, which has simple access to the data in the fixture. By leveraging construction/destruction, the fixture code is both familiar and concise.

The final step is to organise test cases into suites. UnitTest++ again uses macros to simplify the creation of suites. You simply wrap the tests in the SUITE macro:


#include <iostream>
#include "UnitTest++.h"

SUITE(SuiteOne)
{
    TEST(TestOne)
    {
        std::cout << "SuiteOne::TestOne" << std::endl;
    }

    TEST(TestTwo)
    {
        std::cout << "SuiteOne::TestTwo" << std::endl;
    }
}

SUITE(SuiteTwo)
{
    TEST(TestOne)
    {
        std::cout << "SuiteTwo:TestOne" << std::endl;
    }
}

int main(int, char const *[])
{
    return UnitTest::RunAllTests();
}

As shown above, it is possible to have two tests of the same name in different suites. This illustrates the first function of suites: namespacing. Running the above gives:


jsankey@shiny:~/repo/utpp$ ./utpp
SuiteOne::TestOne
SuiteOne::TestTwo
SuiteTwo:TestOne
Success: 3 tests passed.
Test time: 0.01 seconds.

Suites also have another function: they allow you to easily run a group of related tests. We can change our main function to only run SuiteOne (note we also need to include TestReporterStdout.h):


int main(int, char const *[])
{
    UnitTest::TestReporterStdout reporter;
    return UnitTest::RunAllTests(reporter,
                                 UnitTest::Test::GetTestList(),
                                 "SuiteOne",
                                 0);
}

Running this new main will only execute SuiteOne:


jsankey@shiny:~/repo/utpp$ ./utpp
SuiteOne::TestOne
SuiteOne::TestTwo
Success: 2 tests passed.
Test time: 0.00 seconds.

So there you have it: a taste of the basics in UnitTest++. The most appealing thing about this library is its simplicity: you can tell that the authors have made an effort to keep the construction of cases, fixtures and suites as easy as possible. This lets you get on with writing the actual test code. In this overview I have not explored all of the details, most notably the various CHECK macros that test for equality, exceptions and so on. However, as it stands UnitTest++ is quite a simple framework, and there is not a whole lot more to it. Although you may need more features than you currently get out of the box, UnitTest++ is young and, I expect, still growing. The simplicity also makes it an easy target for customisation, which is important given the diversity of C++ environments. I’ll be keeping an eye on UnitTest++ as it evolves, and recommend you take a look yourself.

AJAX Goodness in Pulse 1.2

Pulse has always used a bit of AJAX (and plain old JavaScript) here and there to make the interface more responsive. For example, there are plenty of instances where you can test new configuration before you save it, without leaving the configuration form (a huge time saver when configuring!). We also try to avoid gratuitous use of AJAX, which seems to be popping up all over the place as the hype takes its toll. However, in Pulse 1.2 we found some key places to introduce AJAX to give users that warm and fuzzy feeling.

My personal favourite is a new widget to customise the columns in build results tables. These tables are used to summarise the most important build information throughout the Pulse UI. Over time, our customers have requested several new pieces of information to be shown in the tables. Adding them all for everyone would lead to information overload, not to mention the required screen real estate. The obvious solution was to make the table columns customisable. This is a prime case where a rich client-side UI is far more usable than a “click-refresh-click-refresh…” approach. The widget we came up with is simple: a bunch of checkboxes to choose the columns to show, and the ability to drag and drop the columns to reorder them:

Using it is a snap, and it just Feels Good. Everything happens client-side until you apply, and the changes then take effect via an AJAX refresh of the underlying page.

Another prime candidate for AJAXification was the views for browsing working copies and build artifacts. We already had a treeview in place for browsing directories (e.g. during the setup wizard), and with some work adapted it to these views:

I cannot tell you how much faster it is to browse around using these views! The page only loads what is needed when you first hit it, and drilling down is much, much easier.

Pulse Continuous Integration Server 1.2 Goes Beta

Well, we’re pretty pumped today. The latest major release of Pulse has gone beta today, and been promoted to zutubi.com! Many thanks to the customers who rode the bleeding edge of the Pulse 1.2 Early Access Program, your feedback has been invaluable. Now you have a kick arse build server in return :).

Pulse 1.2 is packed with new features, and dozens of those little improvements that just Make Life Better. The list includes:

  • Personal Builds: The headline feature for 1.2, personal builds allow you to submit your changes directly to Pulse for testing before you commit them.
  • Reports: Each Pulse project now has its own “reports” page, which displays build data for the project visually.
  • Change Viewers: easily integrate Pulse with change viewers such as Fisheye, P4Web, Trac and ViewVC. Use custom settings to integrate with other viewers.
  • Commit Message Transformers: control how your commit messages appear in Pulse. Link them to your bug tracker, or highlight important information.
  • Customisable Build Columns: choose the fields to view for build results, and reorder them using drag and drop!
  • AJAX-powered browsing: browse your working copies and captured artifacts using a dynamic tree view.
  • “Broken since” Support: when a test has been failing for multiple builds, it is displayed differently. The build where it first failed is just a click away!
  • Windows System Tray Notification: a new Windows client, “Stethoscope”, allows you to see your project status at a glance.
  • Customisable Notifications: you can now override the default notification messages (email or Jabber) by creating your own notification templates.
  • Automatic Agent Upgrades: when the Pulse master server is upgraded, it will automatically upgrade all agent servers.
  • Much, much more: dozens of other minor features and improvements.

I’ve talked a bit about personal builds, and how they combine with distributed builds to make Pulse a must-have development tool. I’ll also post about some of the other new features, and the cool things they let you do.

For now, check out Pulse at zutubi.com. Give it a try for 30 days for free: you’ll be hooked ;).

Learn Your Build Tool. Well.

Many developers seem to look upon software build tools (Ant/Make/Maven/Rake/SCons/etc) with disdain. They see them as painful creatures put on the earth to gnaw at their patience. The developer just wants to get on with Real Work (coding), and not have to deal with such annoying details. This attitude arises for a few reasons:

  • Someone Else’s Problem: some don’t see writing/maintaining build scripts as part of their job. They’re programmers, not build/release engineers.
  • Yet Another Language: build tools (frequently) use a custom language for their scripts. Some resist having to learn Yet Another Language.
  • Real Pain: writing, and particularly debugging build scripts can be a right pain. The language might be restrictive, debugging support limited, the behaviour arcane.
  • It’s Good Enough: at the start of the project someone slapped together a copy-pasted build script that just, feebly, manages to output something resembling a binary. Nobody will ever bother looking at the build system again.
  • Lack Of Time: improving the build system is not a customer requirement. Hence, nobody will ever get around to doing it.

I would argue that this attitude is harmful. I could even refute each point above, one by one. But it doesn’t matter. Even if all these points were valid, there is a single overriding factor: the build system is dog food you have to eat. If you don’t like the taste, why are you putting up with it? So: learn your build tool, and learn it well. Invest a day in improving the build system, and reap the benefits every day thereafter. Keep the knowledge with you, and refactor your build scripts just like your code to make everyone’s life easier.

I can remember many a time that I put off making such improvements for an age. When I finally make the change, the first thought is always: I rock! 🙂 The next thought is: I’m a fool for not doing this sooner!

Article: Reducing the Impact of Broken Builds

The “build” represents the current status of any software project, and as such reflects the health, vitality and progress of the project. In this article we first review some of the impacts of a broken build. Those already familiar with the negative impacts of broken builds may wish to skip this lead-in. There follows an analysis of various techniques to reduce the frequency and impact of broken builds. These techniques vary from the optimistic to the preventative, and from lightweight to quite intensive.

Read the full article at zutubi.com.

Pulse Continuous Integration Server 1.2 M3

We just keep on punching out the milestones on 1.2 ;). New features in this puppy include:

  • Customisable columns: customise build tables with drag and drop.
  • Manual release support: configure Pulse to prompt for release build properties.
  • Post stage actions: hook in after each agent build to cleanup/reboot agents.
  • P4Web support: built in linking to P4Web.
  • LDAP group integration: manage permissions through LDAP.
  • Broken since support for tests: differentiate new and existing test failures.
  • Executable projects: run custom build scripts without dropping down to XML.

See the early access page for M3 packages and full details.