a little madness

A man needs a little madness, or else he never dares cut the rope and be free – Nikos Kazantzakis


Archive for the ‘C++’ Category

Boost.Test XML Reports with Boost.Build

My previous post Using Boost.Test with Boost.Build illustrated how to build and run Boost.Test tests with the Boost.Build build system. For my own purposes I wanted to take this one step further by integrating Boost.Test results with continuous integration builds in Pulse.

To do this, I needed to get Boost.Test to produce XML output, at the right level of detail, which can be read by Pulse. This is another topic I have covered to some extent before: the key part being to pass the arguments “--log_format=XML --log_level=test_suite” to the Boost.Test binaries. The missing link is how to achieve this using Boost.Build’s run rule. Recall that the syntax for the run rule is as follows:

rule run (
    sources + :
    args * :
    input-files * :
    requirements * :
    target-name ? :
    default-build * )

Notice in particular that you can pass arguments just after the sources. So I updated my Jamfile to the following:

using testing ;
lib boost_unit_test_framework ;
run NumberTest.cpp /libs/number//number boost_unit_test_framework
    : --log_format=XML --log_level=test_suite
    ;

and lo, the test output was now in XML format:

$ cat bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.output 
<TestLog><TestSuite name="Number"><TestSuite name="NumberSuite"><TestCase name="checkPass"><TestingTime>0</TestingTime></TestCase><TestCase name="checkFailure"><Error file="NumberTest.cpp" line="15">check Number(2).add(2) == Number(5) failed [4 != 5]</Error><TestingTime>0</TestingTime></TestCase></TestSuite></TestSuite></TestLog>
*** 1 failure detected in test suite "Number"

EXIT STATUS: 201

The output will not exactly win awards: it has no <?xml …?> declaration, no formatting, and, thanks to Boost.Test, contains trailing junk. We’ve made sure that the processing in Pulse 2.1 takes care of this, though.
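If you are consuming the output with something other than Pulse, the cleanup is easy enough to do yourself. A minimal sketch (purely illustrative — the program name and approach are mine, not what Pulse does internally) that trims the trailing junk and prepends a declaration:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        std::cerr << "Usage: cleanlog <file>" << std::endl;
        return 1;
    }

    // Slurp the whole log file into a string.
    std::ifstream in(argv[1]);
    std::stringstream buffer;
    buffer << in.rdbuf();
    std::string content = buffer.str();

    // Keep everything up to and including the closing root element,
    // dropping Boost.Test's trailing summary and exit status lines.
    std::string::size_type end = content.find("</TestLog>");
    if (end == std::string::npos)
    {
        std::cerr << "No </TestLog> element found" << std::endl;
        return 1;
    }

    std::cout << "<?xml version=\"1.0\"?>" << std::endl
              << content.substr(0, end + 10) << std::endl;
    return 0;
}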

If you are a Pulse user looking to integrate Pulse and Boost.Test, you might also be interested in a new Cookbook article that I’ve written up on this topic.

Using Boost.Test with Boost.Build

In my earlier post C++ Unit Testing With Boost.Test I used make to build my sample code — largely because that is what I am more familiar with. If you’re using Boost for testing, though, you should also consider using it for building. From what I’ve seen you get a lot of functionality for free with Boost.Build if you’re willing to climb the learning curve. In order to help, I’ve put together a simple tutorial that combines Boost.Test and Boost.Build.

Prerequisites

In this tutorial I’m assuming you have Boost installed already. If not, you can refer to my earlier post or the Boost Getting Started Guide.

Installing Boost.Build

If, like me, you installed Boost by using the package manager on your Linux box, you may still not have Boost.Build installed. On Debian-based systems, Boost.Build requires two extra packages:

$ sudo apt-get install bjam boost-build

The bjam package installs Boost’s variant of the jam build tool, whereas the boost-build package installs a set of bjam configurations that form the actual Boost.Build system.

If you’re not lucky enough to have a boost-build package or equivalent, you can get a pre-built bjam binary and the sources for Boost.Build; see the official documentation for details.

Once you have everything set up, you should be able to run bjam --version and see output similar to the following:

$ bjam --version
Boost.Build V2 (Milestone 12)
Boost.Jam 03.1.16

If you don’t see details of the Boost.Build version then it is likely you have only installed bjam and not the full Boost.Build system.

Sample Projects

To demonstrate Boost.Build’s support for multiple projects in a single tree, I split my sample code into two pieces: a simple library, and the test code itself. The library consists of a single Number class, which is an entirely contrived wrapper around an int. The test code exercises this library, and thus needs to link against it.

Boost.Build isn’t particularly fussy about how you lay out your projects, so I went for a simple structure:

$ ls -R
.:
Jamroot  number  test

./number:
Jamfile  Number.cpp  Number.hpp

./test:
Jamfile  NumberTest.cpp

The Jamroot and Jamfiles are build files used by Boost.Build. They are in the same format — the difference in name is used to indicate the top level of the project. Boost.Build subprojects inherit configuration from parent projects by searching up the directory tree for a Jamfile, and will stop when a Jamroot is reached.

Top Level

The top level Jamroot file is incredibly simple in this case:

use-project /libs/number : number ;

In fact this line isn’t even strictly necessary, but it is good practice. It assigns the symbolic name “/libs/number” to the project in the “number” subdirectory. It’s overkill for such a simple example, but this abstraction means our test project will have no dependency on the exact location of the number library. If we refactored and moved the library into a subdirectory called “math”, then we would only need to update the Jamroot.
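For example, after such a move the line would become (the path is hypothetical; the symbolic name stays the same):

use-project /libs/number : math/number ;

and the test project would continue to refer to /libs/number unchanged.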

Number Library

As mentioned above, the number library is a contrived wrapper around an int that I created simply for illustration. The interface for this library is defined in Number.hpp:

#ifndef MY_LIBRARY_H
#define MY_LIBRARY_H

#include <iostream>

class Number
{
public:
  Number(int value);

  bool operator==(const Number& other) const;

  Number add(const Number& other) const;
  Number subtract(const Number& other) const;

  int getValue() const;

private:
  int value;
};

std::ostream& operator<<(std::ostream& output, const Number& n);

#endif

Of greater interest is the Jamfile used to build the library:

project : usage-requirements <include>. ;
lib number : Number.cpp ;

Note that the single “lib” line is all that is required to build the library. The lib rule is one of the core rules provided by Boost.Build, and follows Boost.Build’s common rule syntax:

rule rule-name (
     main-target-name :
     sources + :
     requirements * :
     default-build * :
     usage-requirements * )

So in this case we are instructing Boost.Build to create a library named “number” from the sources “Number.cpp”.

The project declaration, which adds usage-requirements, is a convenience for consumers of this library. This tells the build system that any project that uses the number library should have this directory “.” added to its include path. This makes it easy for those projects to include Number.hpp.

We can build the library by running bjam in the number directory:

$  bjam
...found 12 targets...
...updating 5 targets...
MkDir1 ../number/bin
MkDir1 ../number/bin/gcc-4.4.1
MkDir1 ../number/bin/gcc-4.4.1/debug
gcc.compile.c++ ../number/bin/gcc-4.4.1/debug/Number.o
gcc.link.dll ../number/bin/gcc-4.4.1/debug/libnumber.so
...updated 5 targets...

Note that by default Boost.Build produces a dynamic library, and outputs the built artifacts into configuration-specific subdirectories.
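Both defaults are easy to override with properties on the command line; for example, to request a static library built in release mode (these are standard Boost.Build properties):

$ bjam link=static release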

Test Project

Finally, our test project consists of a single source file, NumberTest.cpp, with a single test suite:

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Number
#include <boost/test/unit_test.hpp>
#include <Number.hpp>

BOOST_AUTO_TEST_SUITE(NumberSuite)

BOOST_AUTO_TEST_CASE(checkPass)
{
  BOOST_CHECK_EQUAL(Number(2).add(2), Number(4));
}

BOOST_AUTO_TEST_CASE(checkFailure)
{
  BOOST_CHECK_EQUAL(Number(2).add(2), Number(5));
}

BOOST_AUTO_TEST_SUITE_END()

Note the definition of BOOST_TEST_DYN_LINK: this is essential to link against the Boost.Test dynamic library. Other than that the code is fairly self explanatory.

Again, the Jamfile is what we are really interested in here:

using testing ;
lib boost_unit_test_framework ;
run NumberTest.cpp /libs/number//number boost_unit_test_framework ;

Starting from the top, the “using testing” line includes Boost.Build’s support for Boost.Test. This support includes rules for building and running tests; for example it defines the “run” rule which is used later in the file.

The “lib” line declares a pre-built library (note that it has no sources) named “boost_unit_test_framework”. We use this later for linking against the Boost.Test dynamic library.
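If your copy of the library lives somewhere the toolchain will not find on its own, the same declaration can carry explicit lookup hints (the path here is hypothetical):

lib boost_unit_test_framework : : <name>boost_unit_test_framework <search>/usr/local/lib ;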

Finally, the “run” rule is used to define how to build and run a Boost.Test executable. The syntax for this rule is:

rule run (
    sources + :
    args * :
    input-files * :
    requirements * :
    target-name ? :
    default-build * )

In our sources we include both the source file and the two libraries that we require. Note that we refer to the number project using the symbolic name declared in our Jamroot.

To build and run the tests, we simply execute bjam in the test directory:

$ bjam
...found 29 targets...
...updating 8 targets...
MkDir1 bin
MkDir1 bin/NumberTest.test
MkDir1 bin/NumberTest.test/gcc-4.4.1
MkDir1 bin/NumberTest.test/gcc-4.4.1/debug
gcc.compile.c++ bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.o
gcc.link bin/NumberTest.test/gcc-4.4.1/debug/NumberTest
testing.capture-output bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.run
====== BEGIN OUTPUT ======
Running 2 test cases...
NumberTest.cpp(18): error in "checkFailure": check Number(2).add(2) == Number(5) failed [4 != 5]

*** 1 failure detected in test suite "Number"

EXIT STATUS: 201
====== END OUTPUT ======
<snipped diagnostics>
...failed testing.capture-output bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.run...
...failed updating 1 target...
...skipped 1 target...
...updated 6 targets...

Note that the build fails as I have deliberately created a failing test case. The full output is somewhat longer due to the diagnostics given.

Wrap Up

That’s it! The impressive part is how simple it is to build two projects with an interdependency and run a test suite. In total the three build files contain just six lines! And I haven’t even explored the fact that Boost.Build allows you to easily build across multiple platforms using various toolchains and configurations.

The hardest part is working through enough of the documentation to find out the few lines you need — hopefully this tutorial goes some way to removing that barrier.

Boost.Test Tutorial Sample Code

Given the fact that people seem to have found my short Boost.Test tutorial, I thought they might appreciate having access to the sample code. So I have cleaned up the code, added a README, and created a repository for it on GitHub:

http://github.com/jsankey/boost.test-examples/

The easiest way to access the code if you have git is to clone:

$ git clone git://github.com/jsankey/boost.test-examples.git

If you don’t have git, you can download a zip or tarball from GitHub.

Happy testing!

Boost.Test XML Reports for Continuous Integration

Following on from my Boost.Test primer, the key goal for me was test result reporting in a continuous integration server1. To support this, I needed to produce output from Boost.Test which I could easily consume in a plugin. As it happens, Boost.Test has built in support for producing XML reports, which are easy to parse and therefore integrate with other tools.

What was not obvious, though, was exactly how to produce output with the right level of detail. The most promising parameters in the documentation were report_format and report_level, and indeed these can be used to produce XML — but even the detailed version does not output the reason when a test case fails. It turns out that assertion failures are reported in log output, and the log format can be tweaked to produce XML (reformatted for readability):

$ ./main --log_format=XML
<TestLog>
    <Error file="main.cpp" line="20">check add(2, 2) == 5 failed</Error>
    <Error file="main.cpp" line="25">check add(2, 2) == 1 failed</Error>
</TestLog>

To get output for passing tests, and for test suites, I also had to adjust the log_level to test_suite:

$ ./main --log_format=XML --log_level=test_suite
<TestLog>
  <TestSuite name="PulseTest">
    <TestSuite name="VariantsSuite">
      <TestCase name="simplePass">
        <TestingTime>0</TestingTime>
      </TestCase>
      <TestCase name="checkFailure">
        <Error file="main.cpp" line="20">check add(2, 2) == 5 failed</Error>
        <TestingTime>0</TestingTime>
      </TestCase>
...
    </TestSuite>
  </TestSuite>
</TestLog>

Bingo! Now the XML output has all the details required to render the test results nicely in a continuous integration server, provided the server has a plugin to read the XML.


1 In my case obviously Pulse, but XML reports are likely to be the easiest way to integrate with other CI servers too.

C++ Unit Testing With Boost.Test

Recently I implemented a Pulse plugin to read Boost.Test reports and integrate the tests into the build results. As usual, the first step in implementing the plugin was the creation of a small project that uses Boost.Test to produce some real reports to work from. I found the Boost.Test documentation to be detailed, but not so easy to follow when just getting started — so I thought I’d give an overview here.

Step 1: Installation

First you will need to install Boost.Test, or possibly all of Boost if you plan to use more of it. You can download Boost in its entirety from the Boost download page. Then you just need to unpack the archive somewhere appropriate, so you can include the headers and link against built libraries (see the next step).

An even easier option if you are on Linux is to install a package. On Ubuntu (and, I expect, other Debian variants), the desired package is libboost-test-dev:

$ sudo apt-get install libboost-test-dev

The downside of this is that the packages are somewhat out of date, the default being built from Boost 1.34.1 (there is also a 1.35 variant available). I have not seen much impact of this when using Boost.Test, but if you need newer Boost libraries then it may be better to compile your own.

Step 2: Choose Your Compilation Model

Unlike many Boost libraries (which are implemented completely as headers), Boost.Test includes a runtime component which you need to link against: the “Program Execution Monitor”. This component includes the main entry point for running your tests, among other things. If you installed Boost from source, you will need to build the library yourself using bjam — the instructions are quite toolchain specific so I won’t go into them here. You can link statically or dynamically, but will need to configure your includes and build appropriately.

The key thing from the point of view of writing and building your tests is to include the right definitions in your source and add the right flags when linking. I opted for dynamic linking against the prebuilt library installed by my Ubuntu package. To achieve this, I needed two things:

  1. To define BOOST_TEST_DYN_LINK before including the Boost.Test headers in my source file.
  2. The addition of: -lboost_unit_test_framework to my linker flags.

With that plumbing out of the way, we can get down to testing something.

Step 3: A First Test Case

For a start, I cooked up an exceptionally useful function to add two ints, and a test case “universeInOrder” to check that it works:

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Hello
#include <boost/test/unit_test.hpp>

int add(int i, int j)
{
    return i + j;
}

BOOST_AUTO_TEST_CASE(universeInOrder)
{
    BOOST_CHECK(add(2, 2) == 4);
}

Notice that apart from the BOOST_TEST_DYN_LINK definition, I also had to define a name for my test module via BOOST_TEST_MODULE. The case itself is defined using the BOOST_AUTO_TEST_CASE macro, giving the case name as an argument1. Finally, within the test, assertions can be made using the BOOST_CHECK macro. Compiling and running the test gives the following:

$ g++ -ohello -lboost_unit_test_framework hello.cpp
$ ./hello
Running 1 test case...

*** No errors detected

Simple enough, my test passes. If I deliberately make it fail, by changing the 4 to a 5, I get:

 $ ./hello
Running 1 test case...
hello.cpp(12): error in "universeInOrder": check add(2, 2) == 5 failed

*** 1 failure detected in test suite "Hello"

Here we start to see the benefits of the library: I get a nice failure message, complete with line number and the expression that failed.

Step 4: More Assertions

Unlike the assertions in many testing libraries, a failed BOOST_CHECK will not exit the test case immediately — the problem is recorded and the case continues. To immediately fail a test, you can use BOOST_REQUIRE instead:

BOOST_AUTO_TEST_CASE(universeInOrder)
{
    BOOST_REQUIRE(add(2, 2) == 4);
}

To just output a warning instead of failing the test, you can use BOOST_WARN. In fact many Boost.Test assertions come in these three variants: CHECK, REQUIRE and WARN.
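For illustration, here is the same condition at all three severities (a hypothetical case, reusing the add function from step 3):

BOOST_AUTO_TEST_CASE(threeSeverities)
{
    BOOST_WARN(add(2, 2) == 4);      // failure only logs a warning
    BOOST_CHECK(add(2, 2) == 4);     // failure is recorded, case continues
    BOOST_REQUIRE(add(2, 2) == 4);   // failure aborts this case
}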

Richer assertions are also possible, including these notable examples:

  • BOOST_CHECK_MESSAGE: allows you to specify a custom failure message as a second argument. You can pass a string, or any type supporting the << operator.
  • BOOST_CHECK_EQUAL: checks two arguments for equality using ==. Improves upon the normal check in the above examples by showing the actual values when the assertion fails.
  • BOOST_CHECK_THROW: checks that an expression causes a specified type of exception to be thrown.
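A small stand-alone module exercising all three (the module and case names are mine, and the out_of_range check is just a convenient way to provoke an exception):

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Assertions
#include <boost/test/unit_test.hpp>

#include <stdexcept>
#include <string>

int add(int i, int j)
{
    return i + j;
}

BOOST_AUTO_TEST_CASE(richerAssertions)
{
    // On failure, the custom message is reported instead of the raw expression.
    BOOST_CHECK_MESSAGE(add(2, 2) == 4, "add(2, 2) gave " << add(2, 2));

    // On failure, both actual values are reported, e.g. [4 != 5].
    BOOST_CHECK_EQUAL(add(2, 2), 4);

    // Passes only if the expression throws std::out_of_range.
    BOOST_CHECK_THROW(std::string().at(1), std::out_of_range);
}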

The full list of available assertions for the version of Boost.Test I am using (1.34.1) can be found here.

Step 5: Suites

Once you have a non-trivial number of test cases, you need to organise them into suites. Note that each module (defined with BOOST_TEST_MODULE) already has a top-level suite named after that module. Further suites can be nested within a module to categorise as necessary. The easiest way to do this is to continue with the auto-registration model, and simply wrap the test cases with new macros to start and end a suite:

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Suites
#include <boost/test/unit_test.hpp>

int add(int i, int j)
{
    return i + j;
}

BOOST_AUTO_TEST_SUITE(Maths)

BOOST_AUTO_TEST_CASE(universeInOrder)
{
    BOOST_CHECK(add(2, 2) == 4);
}

BOOST_AUTO_TEST_SUITE_END()

BOOST_AUTO_TEST_SUITE(Physics)

BOOST_AUTO_TEST_CASE(specialTheory)
{
    int e = 32;
    int m = 2;
    int c = 4;

    BOOST_CHECK(e == m * c * c);
}

BOOST_AUTO_TEST_SUITE_END()

In a normal run, you won’t see that the tests have been categorised. To show the suites in the output, you can set the log level to test_suite:

$ ./suites --log_level=test_suite
Running 2 test cases...
Entering test suite "Suites"
Entering test suite "Maths"
Entering test case "universeInOrder"
Leaving test case "universeInOrder"
Leaving test suite "Maths"
Entering test suite "Physics"
Entering test case "specialTheory"
Leaving test case "specialTheory"
Leaving test suite "Physics"
Leaving test suite "Suites"

*** No errors detected

Step 6: Fixtures

To add common setup and teardown code around your cases, Boost.Test supports fixtures. These take advantage of C++’s own mechanism for setup and teardown – construction and destruction. Indeed, you can easily add a “fixture” to a test case by just defining a type with the appropriate constructor and destructor and allocating one on the stack at the start of the case. This is repetitious, however, and not terribly explicit. From my experience the nicest way is to organise your tests into suites so that you can use one fixture per suite, and then just use BOOST_FIXTURE_TEST_SUITE in place of BOOST_AUTO_TEST_SUITE:

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Fixtures
#include <boost/test/unit_test.hpp>

struct Massive
{
    int m;

    Massive() : m(2)
    {
        BOOST_TEST_MESSAGE("setup mass");
    }

    ~Massive()
    {
        BOOST_TEST_MESSAGE("teardown mass");
    }
};

BOOST_FIXTURE_TEST_SUITE(Physics, Massive)

BOOST_AUTO_TEST_CASE(specialTheory)
{
    int e = 32;
    int c = 4;

    BOOST_CHECK(e == m * c * c);
}

BOOST_AUTO_TEST_CASE(newton2)
{
    int f = 10;
    int a = 5;

    BOOST_CHECK(f == m * a);
}

BOOST_AUTO_TEST_SUITE_END()

Note that the test cases can refer directly to the public “m” member of the fixture type — in the background inheritance is at work, so protected members are also directly accessible. If you run this with logging, you can see that the fixture runs for each case:

$ ./fixtures --log_level=test_suite
Running 2 test cases...
Entering test suite "Fixtures"
Entering test suite "Physics"
Entering test case "specialTheory"
setup mass
teardown mass
Leaving test case "specialTheory"
Entering test case "newton2"
setup mass
teardown mass
Leaving test case "newton2"
Leaving test suite "Physics"
Leaving test suite "Fixtures"

*** No errors detected

Conclusion

I hope that gives you a decent starting point for using Boost.Test. As I mentioned in the beginning, there is plenty more documentation available at the source — it’s just daunting due to its size. Happy testing!

Update

Sample code is now available at GitHub.


1 – It is also possible to define free functions containing your tests and register them manually, although I find the automatic method simpler.

Pulse + UnitTest++

Since I have previously stated my admiration for UnitTest++ as a unit testing framework for C++, I guess it was high time I added direct support for it in Pulse. Well, as of Pulse 1.2.24, there is now a UnitTest++ post-processor that will slurp your UnitTest++ test results directly into Pulse. If you’re looking to do continuous integration for C++, then Pulse + UnitTest++ is a killer combo ;).

UnitTest++: Reports

In my previous post, UnitTest++: The New Choice for C++ Unit Testing?, I gave a basic introduction to UnitTest++. Moving beyond the basics, you will often want to integrate UnitTest++ into your development process. A key to this is being able to generate test reports in a format amenable to integration. The default output of UnitTest++ is a simple, human-readable summary of the test results:


jsankey@shiny:~/repo/utpp$ ./utpp
utpp.cpp(9): error: Failure in MyTest: false
FAILURE: 1 out of 1 tests failed (1 failures).
Test time: 0.00 seconds

This is fine for the developer, but not so easy to process in code for integration purposes. Luckily, the reporting mechanism is not fixed. The test runner accepts an instance of type TestReporter to use for reporting results. The default reporter, TestReporterStdout, produces the developer-friendly output shown above. A second reporter is also included in the UnitTest++ distribution: XmlTestReporter. As its name suggests, this reporter outputs results in XML format, which is much easier to digest in code.

Using the XmlTestReporter is easy. Just construct one with an output stream and pass it to the test runner when executing the tests:


#include <fstream>
#include "UnitTest++.h"
#include "XmlTestReporter.h"

int main(int, char const *[])
{
    std::ofstream f("tests.xml");
    UnitTest::XmlTestReporter reporter(f);
    return UnitTest::RunAllTests(reporter,
                                 UnitTest::Test::GetTestList(),
                                 NULL,
                                 0);
}

The results in this case are saved to a file named “tests.xml”. For a single failing test, the report looks something like the following (the exact attributes vary between UnitTest++ versions):

<?xml version="1.0"?>
<unittest-results tests="1" failedtests="1" failures="1" time="0.00">
    <test suite="DefaultSuite" name="MyTest" time="0.00">
        <failure message="utpp.cpp(9) : false"/>
    </test>
</unittest-results>

The report is easily interpreted, and can readily be parsed for integration into other systems. A prime use case would be integration with a continuous integration server, such as Pulse, which is why I am looking into the reports myself :).

Finally, if the XML format is not suitable for your purposes, you can also create your own test reporters. The interface is compact and easily implemented.
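As a rough sketch only (the reporter interface has shifted between UnitTest++ releases, so the class here is hypothetical and the signatures follow later releases rather than being definitive):

#include <cstdio>
#include "TestReporter.h"
#include "TestDetails.h"

// A custom reporter that prints one terse line per failure.
class TerseReporter : public UnitTest::TestReporter
{
public:
    virtual void ReportTestStart(UnitTest::TestDetails const&) {}

    virtual void ReportFailure(UnitTest::TestDetails const& details, char const* failure)
    {
        std::printf("%s:%d: %s: %s\n", details.filename, details.lineNumber,
                    details.testName, failure);
    }

    virtual void ReportTestFinish(UnitTest::TestDetails const&, float) {}

    virtual void ReportSummary(int totalTestCount, int failedTestCount,
                               int failureCount, float secondsElapsed)
    {
        std::printf("%d of %d tests failed (%.2f seconds)\n",
                    failedTestCount, totalTestCount, secondsElapsed);
    }
};

Passing an instance to RunAllTests, exactly as with XmlTestReporter above, is all that is needed to use it.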

UnitTest++: The New Choice for C++ Unit Testing?

In an earlier post on C++ Unit Testing Frameworks, I came across a relatively new framework by the name of UnitTest++. At first glance, this framework appealed to me for a couple of reasons:

  • Unlike most of the other frameworks, it is relatively recent and still in development
  • One of the originators of the project is the author of the best comparison of C++ unit testing libraries online. The experience of reviewing several other frameworks should inform the design of a new framework.

So, I’ve decided to take a closer look. I’ll start in this post with the basics: how do we write tests, fixtures and suites in UnitTest++? These are the fundamentals of a unit testing library, and should be very simple to use.

First, we need the UnitTest++ distribution. It is available as a simple tarball from SourceForge. Exploding the tarball gives a basic structure with build files at the top level, and child docs and src directories. To build the library itself, on Linux at least, requires a simple make:


jsankey@shiny:~/tools/UnitTest++$ make
src/AssertException.cpp
src/Test.cpp
...
Creating libUnitTest++.a library...
src/tests/Main.cpp
src/tests/TestAssertHandler.cpp
...
Linking TestUnitTest++...
Running unit tests...
Success: 162 tests passed.
Test time: 0.31 seconds.

The primary output is libUnitTest++.a at the top level. This, along with the header files under src (excluding src/test), forms the redistributables needed to build against UnitTest++ in your own project. It is a little awkward that no binary distributions, nor a “dist” or similar Makefile target are available. However, the source tree is so simple that it is not hard to extract what you need.

Armed with the library, the next step is to create our first test case and run it. UnitTest++ makes use of macros to simplify creating a new test case. It could hardly be easier:


#include "UnitTest++.h"

TEST(MyTest)
{
CHECK(true);
}

int main(int, char const *[])
{
return UnitTest::RunAllTests();
}

A test case is created using the TEST macro, which takes the case name as an argument. The macro adds the test case to a global list of cases automatically. The body of the test utilises the CHECK macro to assert conditions under test. Various CHECK* macros are available for common cases. Finally, to actually run the test, we call UnitTest::RunAllTests(). This runs all cases using a default reporter that prints a result summary to standard output:


jsankey@shiny:~/repo/utpp$ ./utpp
Success: 1 tests passed.
Test time: 0.00 seconds.

RunAllTests returns the number of failed cases, so using this as the program exit code works well. If we change the check to CHECK(false), we get a failure report:


jsankey@shiny:~/repo/utpp$ ./utpp
utpp.cpp(9): error: Failure in MyTest: false
FAILURE: 1 out of 1 tests failed (1 failures).
Test time: 0.00 seconds.
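A sketch of a few of those other CHECK* variants (the case name is mine; the vector at call is just an easy way to provoke an exception):

#include <stdexcept>
#include <vector>
#include "UnitTest++.h"

TEST(MoreChecks)
{
    CHECK_EQUAL(4, 2 + 2);                 // reports expected and actual values on failure
    CHECK_CLOSE(3.14159, 3.1416, 0.001);   // floating point comparison with tolerance
    CHECK_THROW(std::vector<int>().at(1), std::out_of_range);
}

int main(int, char const *[])
{
    return UnitTest::RunAllTests();
}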

The next step is to create a test fixture, which allows us to surround our test cases with shared setup/teardown code. This is achieved in UnitTest++ by building upon standard C++ construction/destruction semantics. To create a fixture, you just create a standard C++ struct. The setup and teardown code go in the struct constructor and destructor respectively. Let’s illustrate how this works:


#include <iostream>
#include <string>
#include "UnitTest++.h"

struct MyFixture
{
    std::string testData;

    MyFixture() :
        testData("my test data")
    {
        std::cout << "my setup" << std::endl;
    }

    ~MyFixture()
    {
        std::cout << "my teardown" << std::endl;
    }
};

TEST_FIXTURE(MyFixture, MyTestCase)
{
    std::cout << testData << std::endl;
}

int main(int, char const *[])
{
    return UnitTest::RunAllTests();
}

Instead of the TEST macro, we use TEST_FIXTURE to create a test case that uses the fixture struct. The example is artificial, but serves to illustrate the order in which the functions are called. Also of interest is how members of the fixture struct are referenced directly by name within the test case. Under the covers, the TEST_FIXTURE macro derives a type from MyFixture, making this possible. Running this new program gives the following:


jsankey@shiny:~/repo/utpp$ ./utpp
my setup
my test data
my teardown
Success: 1 tests passed.
Test time: 0.01 seconds.

The setup and teardown wrap execution of the test case, which has simple access to the data in the fixture. By leveraging construction/destruction, the fixture code is both familiar and concise.

The final step is to organise test cases into suites. UnitTest++ again uses macros to simplify the creation of suites. You simply wrap the tests in the SUITE macro:


#include <iostream>
#include "UnitTest++.h"

SUITE(SuiteOne)
{
    TEST(TestOne)
    {
        std::cout << "SuiteOne::TestOne" << std::endl;
    }

    TEST(TestTwo)
    {
        std::cout << "SuiteOne::TestTwo" << std::endl;
    }
}

SUITE(SuiteTwo)
{
    TEST(TestOne)
    {
        std::cout << "SuiteTwo:TestOne" << std::endl;
    }
}

int main(int, char const *[])
{
    return UnitTest::RunAllTests();
}

As shown above, it is possible to have two tests of the same name in different suites. This illustrates the first function of suites: namespacing. Running the above gives:


jsankey@shiny:~/repo/utpp$ ./utpp
SuiteOne::TestOne
SuiteOne::TestTwo
SuiteTwo:TestOne
Success: 3 tests passed.
Test time: 0.01 seconds.

Suites also have another function: they allow you to easily run a group of related tests. We can change our main function to only run SuiteOne (note we also need to include TestReporterStdout.h):


int main(int, char const *[])
{
    UnitTest::TestReporterStdout reporter;
    return UnitTest::RunAllTests(reporter,
                                 UnitTest::Test::GetTestList(),
                                 "SuiteOne",
                                 0);
}

Running this new main will only execute SuiteOne:


jsankey@shiny:~/repo/utpp$ ./utpp
SuiteOne::TestOne
SuiteOne::TestTwo
Success: 2 tests passed.
Test time: 0.00 seconds.

So there you have it, a taste of the basics in UnitTest++. The most appealing thing about this library is simplicity: you can tell that the authors have made an effort to keep construction of cases, fixtures and suites as easy as possible. This lets you get on with writing the actual test code. In this overview I have not explored all of the details, most notably the various CHECK macros that test for equality, exceptions and so on. However, as it stands UnitTest++ is quite a simple framework, and there is not a whole lot more to it. Although you may need more features than you currently get out of the box, UnitTest++ is young and thus I expect still growing. The simplicity also makes it an easy target for customisation, which is important given the diversity of C++ environments. I'll be keeping an eye on UnitTest++ as it evolves, and recommend you take a look yourself.

C++ Unit Testing Frameworks

I like to keep an eye on various build and testing tools, for potential integration with Pulse. As such, I’ve started to amass some links to unit testing tools/resources for C++, where there are many competing options:

  • Exploring the C++ Unit Testing Framework Jungle: the most comprehensive article I’ve found on the topic, even though it is getting old.
  • CppUnit: probably the best known xUnit port, supported in Pulse already. Does a decent job.
  • Boost.Test: part of the well known set of Boost libraries. Naturally has dependencies on Boost.
  • CppUnitLite: a minimalistic rewrite of CppUnit, intended to be simpler and more portable. May be appropriate if you don’t mind getting your hands dirty to add the features you need.
  • Nano Cpp Unit: more an exercise in illustrating the barest testing framework than a usable framework itself.
  • Unit++: pitched as a more “C++ like” xUnit port. Documentation is thin on the ground, and I don’t have any practical experience with it.
  • CxxTest: takes the novel approach of using Perl to generate a test runner, simplifying the code. Also relatively portable, although of course you will need Perl.
  • TUT: a simple, template-based framework distributed as a single header file. Reasonable portability (given a modern compiler). Lacking some features of other frameworks, but without any dependencies.
  • cutee: another framework aimed at simplicity. Looks to have test case creation down to its simplest form. Documentation is thin.
  • CppTest: you guessed it: another framework aiming to be simple and portable. Supports a few output formats out of the box, including HTML reports.
  • UnitTest++: co-authored by the author of the article above, this framework hopes to combine the best ideas from others. Documentation is non-existent, but on the plus side it is one of the more modern frameworks (developed this year, not 2004!).
  • QtUnit: probably a good option if you were using Qt, but is officially unmaintained. Mind you, most of the other frameworks are also dormant.

That’s all I have gathered so far. I have to say, there are a lot of players out there, but little action. A few interesting ideas, but no framework seems to be a clear leader. I hope to get some time to play in depth a bit more, in which case I will flesh out more details.

If Java Could Have Just One C++ Feature…

I have been immersed in Java for a while now, but having worked in C++ for years before, there is one big thing I miss: destructors. Especially in a language with exceptions, destructors are a massive time and error saver for resource management.

Having garbage collection is nice and all, but the fact is that we deal with a multitude of resources and need to collect them all. How do we do this in Java? The Hard Way: we need to know that streams, database connections etc. need to be closed, and we need to explicitly close them:


FileInputStream f = new FileInputStream("somefile");
// Do some stuff.
f.close();

Of course, with exceptions it gets worse. We need to guarantee that the stream is closed even if an exception is thrown, leading to the oft-seen pattern:


FileInputStream f = null;
try
{
    f = new FileInputStream("somefile");
    // Do some stuff
}
finally
{
    if (f != null)
    {
        try
        {
            f.close();
        }
        catch (IOException e)
        {
            // Frankly, my dear...
        }
    }
}

The noise is just incredible. A common way to reduce the noise is to use a utility function to do the null check and close, but noise still remains. Repeating the same try/finally pattern everywhere is also mind-numbing, and it can be easily forgotten leading to incorrect code.
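Such a helper typically looks like the following sketch (the class and method names here are illustrative; Apache Commons IO ships similar closeQuietly methods):

import java.io.Closeable;
import java.io.IOException;

public class Util
{
    // Null-safe close that swallows the checked exception from close().
    public static void closeQuietly(Closeable c)
    {
        if (c != null)
        {
            try
            {
                c.close();
            }
            catch (IOException e)
            {
                // Frankly, my dear...
            }
        }
    }
}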

In C++, this problem is solved elegantly using the Resource Acquisition Is Initialisation (RAII) pattern. This pattern dictates that resources should be acquired in a constructor and disposed of in the corresponding destructor. Combined with the deterministic destruction semantics for objects placed on the stack, this pattern removes the need for manual cleanup and with it the possibility of mistakes:


{
    std::ifstream f("somefile");
    // Do some stuff
}

Where has all the cleanup gone? It is where it should be: in the destructor for std::ifstream. The destructor is called automatically when the object goes out of scope (even if the block is exited due to an uncaught exception). The ability to create value types and place them on the stack is a more general advantage of C++, but Java can close the gap with smarter compilers1.

Interestingly, C# comes in half way between Java and C++ on this matter. In C#, you can employ a using statement to ensure cleanup occurs:


using (TextReader r = File.OpenText("log.txt"))
{
    // Do some stuff
}

In this case the resource type must implement System.IDisposable, and Dispose is guaranteed to be called on the object at the end of the using statement. The using statement in C# is pure syntactic sugar for the try/finally pattern we bash out in Java every day.
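Indeed, the C# specification defines the statement in terms of that pattern; the using block above expands to roughly:

TextReader r = File.OpenText("log.txt");
try
{
    // Do some stuff
}
finally
{
    if (r != null)
    {
        ((IDisposable)r).Dispose();
    }
}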

What’s the answer for Java?2 Well, something similar to using would be a good start, but I do feel like we should be able to do better. If we’re going to add sugar why not let us define our own with a full-blown macro system? Difficult yes, but perhaps easier than always playing catch up? An alternative is to try and retrofit destructors into the language3. It is possible to mix both garbage collection and destructors, as shown in C++/CLI4. However, I don’t see an elegant way to do so that improves upon what using brings. If you do, then let us all know!


1 it appears that Mustang already has some of the smarts such as escape analysis.
2 if you’re the one down the back who shouted “finalizers”: you can leave anytime you want as long as it’s now!
3 I said NOW!
4 See also Herb Sutter’s excellent post on the topic Destructors vs. GC? Destructors + GC!.