Archive for the ‘Agile’ Category
Using Boost.Test with Boost.Build
In my earlier post C++ Unit Testing With Boost.Test I used make to build my sample code — largely because that is what I am more familiar with. If you’re using Boost for testing, though, you should also consider using it for building. From what I’ve seen you get a lot of functionality for free with Boost.Build if you’re willing to climb the learning curve. In order to help, I’ve put together a simple tutorial that combines Boost.Test and Boost.Build.
Prerequisites
In this tutorial I’m assuming you have Boost installed already. If not, you can refer to my earlier post or the Boost Getting Started Guide.
Installing Boost.Build
If, like me, you installed Boost by using the package manager on your Linux box, you may still not have Boost.Build installed. On Debian-based systems, Boost.Build requires two extra packages:
$ sudo apt-get install bjam boost-build
The bjam package installs Boost’s variant of the jam build tool, whereas the boost-build package installs a set of bjam configurations that form the actual Boost.Build system.
If you’re not lucky enough to have a boost-build package or equivalent, you can get a pre-built bjam binary and the Boost.Build sources separately; see the official documentation for details.
Once you have everything set up, you should be able to run bjam --version and see output similar to the following:
$ bjam --version
Boost.Build V2 (Milestone 12)
Boost.Jam 03.1.16
If you don’t see details of the Boost.Build version then it is likely you have only installed bjam and not the full Boost.Build system.
Sample Projects
To demonstrate Boost.Build’s support for multiple projects in a single tree, I split my sample code into two pieces: a simple library, and the test code itself. The library consists of a single Number class, which is an entirely contrived wrapper around an int. The test code exercises this library, and thus needs to link against it.
Boost.Build isn’t particularly fussy about how you lay out your projects, so I went for a simple structure:
$ ls -R
.:
Jamroot  number  test

./number:
Jamfile  Number.cpp  Number.hpp

./test:
Jamfile  NumberTest.cpp
The Jamroot and Jamfiles are build files used by Boost.Build. They are in the same format — the difference in name is used to indicate the top level of the project. Boost.Build subprojects inherit configuration from parent projects by searching up the directory tree for a Jamfile, and will stop when a Jamroot is reached.
Top Level
The top level Jamroot file is incredibly simple in this case:
use-project /libs/number : number ;
In fact this line isn’t even strictly necessary, but it is good practice. It assigns the symbolic name “/libs/number” to the project in the “number” subdirectory. It’s overkill for such a simple example, but this abstraction means our test project will have no dependency on the exact location of the number library. If we refactored and moved the library into a subdirectory called “math”, then we would only need to update the Jamroot.
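For example, the Jamroot would then simply become:
use-project /libs/number : math/number ;
and the test project, which refers only to the symbolic name, would be none the wiser.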
Number Library
As mentioned above, the number library is a contrived wrapper around an int that I created simply for illustration. The interface for this library is defined in Number.hpp:
#ifndef MY_LIBRARY_H
#define MY_LIBRARY_H
#include <iostream>
class Number
{
public:
Number(int value);
bool operator==(const Number& other) const;
Number add(const Number& other) const;
Number subtract(const Number& other) const;
int getValue() const;
private:
int value;
};
std::ostream& operator<<(std::ostream& output, const Number& n);
#endif
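Number.cpp itself is as trivial as you would expect; a minimal implementation consistent with the header looks something like this:
#include "Number.hpp"

Number::Number(int value) : value(value)
{
}

bool Number::operator==(const Number& other) const
{
    return value == other.value;
}

Number Number::add(const Number& other) const
{
    return Number(value + other.value);
}

Number Number::subtract(const Number& other) const
{
    return Number(value - other.value);
}

int Number::getValue() const
{
    return value;
}

// Stream output, used by Boost.Test to report values in failure messages.
std::ostream& operator<<(std::ostream& output, const Number& n)
{
    return output << n.getValue();
}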
Of greater interest is the Jamfile used to build the library:
project : usage-requirements <include>. ;

lib number : Number.cpp ;
Note that the single “lib” line is all that is required to build the library. The lib rule is one of the core rules provided by Boost.Build, and follows its common syntax:
rule rule-name (
main-target-name :
sources + :
requirements * :
default-build * :
usage-requirements * )
So in this case we are instructing Boost.Build to create a library named “number” from the sources “Number.cpp”.
The project declaration, which adds usage-requirements, is a convenience for consumers of this library. This tells the build system that any project that uses the number library should have this directory “.” added to its include path. This makes it easy for those projects to include Number.hpp.
We can build the library by running bjam in the number directory:
$ bjam
...found 12 targets...
...updating 5 targets...
MkDir1 ../number/bin
MkDir1 ../number/bin/gcc-4.4.1
MkDir1 ../number/bin/gcc-4.4.1/debug
gcc.compile.c++ ../number/bin/gcc-4.4.1/debug/Number.o
gcc.link.dll ../number/bin/gcc-4.4.1/debug/libnumber.so
...updated 5 targets...
Note that by default Boost.Build produces a dynamic library, and outputs the built artifacts into configuration-specific subdirectories.
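Incidentally, if you would prefer a static library, Boost.Build lets you ask for one on the command line without touching the Jamfile:
$ bjam link=static
You can also bake the choice in by adding <link>static to the target’s requirements.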
Test Project
Finally, our test project consists of a single source file, NumberTest.cpp, with a single test suite:
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Number
#include <boost/test/unit_test.hpp>
#include <Number.hpp>
BOOST_AUTO_TEST_SUITE(NumberSuite)
BOOST_AUTO_TEST_CASE(checkPass)
{
BOOST_CHECK_EQUAL(Number(2).add(2), Number(4));
}
BOOST_AUTO_TEST_CASE(checkFailure)
{
BOOST_CHECK_EQUAL(Number(2).add(2), Number(5));
}
BOOST_AUTO_TEST_SUITE_END()
Note the definition of BOOST_TEST_DYN_LINK: this is essential to link against the Boost.Test dynamic library. Other than that, the code is fairly self-explanatory.
Again, the Jamfile is what we are really interested in here:
using testing ;

lib boost_unit_test_framework ;

run NumberTest.cpp
    /libs/number//number
    boost_unit_test_framework ;
Starting from the top, the “using testing” line includes Boost.Build’s support for Boost.Test. This support includes rules for building and running tests; for example it defines the “run” rule which is used later in the file.
The “lib” line declares a pre-built library (note that it has no sources) named “boost_unit_test_framework”. We use this later for linking against the Boost.Test dynamic library.
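If the Boost.Test library is not on your toolchain’s default search path, the same declaration accepts hints; for example (the path here is illustrative only):
lib boost_unit_test_framework : : <name>boost_unit_test_framework <search>/opt/boost/lib ;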
Finally, the “run” rule is used to define how to build and run a Boost.Test executable. The syntax for this rule is:
rule run (
sources + :
args * :
input-files * :
requirements * :
target-name ? :
default-build * )
In our sources we include the source file and the two libraries that we require. Note that we refer to the number project using the symbolic name declared in our Jamroot.
To build and run the tests, we simply execute bjam in the test directory:
$ bjam
...found 29 targets...
...updating 8 targets...
MkDir1 bin
MkDir1 bin/NumberTest.test
MkDir1 bin/NumberTest.test/gcc-4.4.1
MkDir1 bin/NumberTest.test/gcc-4.4.1/debug
gcc.compile.c++ bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.o
gcc.link bin/NumberTest.test/gcc-4.4.1/debug/NumberTest
testing.capture-output bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.run
====== BEGIN OUTPUT ======
Running 2 test cases...
NumberTest.cpp(18): error in "checkFailure": check Number(2).add(2) == Number(5) failed [4 != 5]
*** 1 failure detected in test suite "Number"
EXIT STATUS: 201
====== END OUTPUT ======
<snipped diagnostics>
...failed testing.capture-output bin/NumberTest.test/gcc-4.4.1/debug/NumberTest.run...
...failed updating 1 target...
...skipped 1 target...
...updated 6 targets...
Note that the build fails as I have deliberately created a failing test case. The full output is somewhat longer due to the diagnostics given.
Wrap Up
That’s it! The impressive part is how simple it is to build two projects with an interdependency and run a test suite. In total the three build files include just six lines! And I haven’t even explored the fact that Boost.Build allows you to easily build across multiple platforms using various toolchains and configurations.
The hardest part is working through enough of the documentation to find out the few lines you need — hopefully this tutorial goes some way to removing that barrier.
Fencing Selenium With Xephyr
Earlier in the year I put Selenium in a cage using Xnest. This allows me to run browser-popping tests in the background without disturbing my desktop or (crucially) stealing my focus.
On that post Rohan stopped by to mention a nice alternative to Xnest: Xephyr. As the Xephyr homepage will tell you:
Xephyr is a kdrive based X Server which targets a window on a host X Server as its framebuffer. Unlike Xnest it supports modern X extensions ( even if host server doesn’t ) such as Composite, Damage, randr etc (no GLX support now). It uses SHM Images and shadow framebuffer updates to provide good performance. It also has a visual debugging mode for observing screen updates.
It sounded sweet, but I hadn’t tried it out until recently, on a newer box where I didn’t already have Xnest set up. The good news is the setup is just as simple as with Xnest in my prior post:
- Install Xephyr: this runs an X server inside a window:
$ sudo apt-get install xserver-xephyr
- Install a simple window manager: again, for old times’ sake, I’ve gone for fvwm:
$ sudo apt-get install fvwm
- Start Xephyr: choose an unused display number (most standard setups will already be using 0) — I chose 1. As with Xnest, the -ac flag turns off access control, which you might want to be more careful about. My choice of window size is largely arbitrary:
$ Xephyr :1 -screen 1024x768 -ac &
- Set DISPLAY: so that subsequent X programs connect to Xephyr, you need to set the environment variable DISPLAY to whatever you passed as the first argument to Xephyr above:
$ export DISPLAY=:1
- Start your window manager: to manage windows in your nested X instance:
$ fvwm &
- Run your tests: however you normally would:
$ ant accept.master
Then just sit back and watch the browsers launched by Selenium trapped in the Xephyr window. Let’s see them take your focus now!
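If you find yourself doing this regularly, a small wrapper script saves the typing. Here is a sketch (the display number, screen size, sleep and the example script name are arbitrary choices of mine):
#!/bin/sh
# Run the given command inside a nested Xephyr server on display :1.
Xephyr :1 -screen 1024x768 -ac &
sleep 2                # crude, but gives the server time to come up
export DISPLAY=:1
fvwm &                 # manage windows on the nested display
exec "$@"              # e.g. ./xfence ant accept.master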
Pulse 2.1 Beta Rolls On
We’ve reached another significant milestone in the Pulse 2.1 beta: the release of 2.1.9. This latest build rolls up a stack of fixes, improvements and new features. Some of the much-anticipated improvements include:
- Support for NAnt in the form of a command and post-processor.
- Support for reading NUnit XML reports.
- Support for reading QTestLib XML reports.
- The ability to mark unstable tests as “expected” failures: they still look ugly (so fix them!) but won’t fail your build.
- Better visibility of what is currently building on an agent.
- New refactoring actions to “pull up” and “push down” configuration in the template hierarchy.
- The ability to specify Perforce client views directly in Pulse.
I’ll expand upon some of these in later posts. In addition we’ve made great progress on the new project dependencies support, which should be both easier to use and more reliable in this build.
We’d love you to download Pulse 2.1 and let us know what you think!
Are Temp Files Slowing Your Builds Down?
Lately one of our Pulse agents has been bogged down, to the extent that some of our heavier acceptance tests started to genuinely time out. Tests failing due to environmental factors can lead to homicidal mania, so I’ve been trying to diagnose what is going on before someone gets hurt!
The box in question runs Windows Vista, and I noticed while poking around that some disk operations were very slow. In fact, deleting even a handful of files via Explorer took so long that I gave up (we’re talking hours here). About this time I fired up the Reliability and Performance Manager that comes with Vista (Control Panel > Administrative Tools). I noticed that there was constant disk activity, and a lot of it centered around C:\$MFT — the NTFS Master File Table.
I had already pared back the background tasks on this machine: the Recycle Bin was disabled, Search Indexing was turned off and Defrag ran on a regular schedule. So why was my file system so dog slow? The answer came when I looked into the AppData\Local\Temp directory for the user running the Pulse agent. The directory was filled with tens of thousands of entries, many of which were directories that themselves contained many files.
The junk that had built up in this directory was quite astounding. Although some of it can be explained by tests that don’t clean up after themselves, I believe a lot of the junk came from tests that had to be killed forcefully, without time to clean up. It was also evident that every second component we were using was part of the problem – Selenium, JFreeChart, JNA, Ant and Ivy all joined the party.
So, how to resolve this? Of course any tests that don’t clean up after themselves should be fixed. But in reality this won’t always work — especially given the fact that Windows will not allow open files to be deleted. So the practical solution is to regularly clean out the temporary directory. In fact, it’s quite easy to set up a little Pulse project that will do just that, and let Pulse do the work of scheduling it via a cron trigger. With Pulse in control of the scheduling there is no risk the cleanup will overlap with another build.
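The cleanup command itself can be tiny. A sketch of what such a Pulse project might run on Windows (assuming the agent runs as the user whose temp directory is overflowing):
rem Remove and recreate the user's temp directory.
rem Only safe because Pulse ensures no other build runs concurrently.
rd /s /q "%TEMP%"
md "%TEMP%"
Files that are still held open will survive the delete, which is fine: they can be collected on a later run.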
A more general solution is to start with a guaranteed-clean environment in the first place. After all, acceptance tests have a habit of messing with a machine in other ways too. Re-imaging the machine after each build, or using a virtual machine that can be restored to a clean state, is a more reliable way to avoid the junk. Pulse is actually designed to allow reimaging/rebooting of agents to be done in a post-stage hook — the agent management code on the master allows for agents to go offline at this point, and not try to reuse them until their status can be confirmed by a later ping.
CITCON Paris 2009: Mocks, CI Servers and Acceptance Testing
Following up on my previous post about CITCON Paris, I thought I’d post a few points about each of the other sessions I attended.
Mock Objects
I went along to this session as a chance to hear about mock objects from the perspective of someone involved in their development, Steve Freeman. If you’ve read my Four Simple Rules for Mocking, you’ll know I’m not too keen on setting expectations, or even on verification. I mainly use mocking libraries for stubbing. Martin Fowler’s article Mocks Aren’t Stubs had made me think that Steve would hold the opposite view:
The classical TDD style is to use real objects if possible and a double if it’s awkward to use the real thing. So a classical TDDer would use a real warehouse and a double for the mail service. The kind of double doesn’t really matter that much.
A mockist TDD practitioner, however, will always use a mock for any object with interesting behavior. In this case for both the warehouse and the mail service.
So my biggest takeaway from this topic was that Steve’s view was more balanced and pragmatic than Fowler’s quote suggests. At a high level he explained well how his approach to design and implementation leads to the use of expectations in his tests. I still have my reservations, but was convinced that I should at least take a look at Steve’s new book (which is free online, so I can try a chapter or two before opting for a dead tree version).
A few more concrete pointers can be found in the session notes. A key one for me is to not mock what you don’t own, but to define your own interfaces for interacting with external systems (and then mock those interfaces).
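To make this concrete, here is a minimal Java sketch of the idea (all names are invented for illustration): rather than mocking a third-party mail API directly, define a narrow interface you own, implement it with a thin adapter in production, and mock only the interface in tests.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.testng.annotations.Test;

public class OrderServiceTest
{
    // An interface we own, expressing only what our code needs from the
    // external mail system; a thin adapter implements it in production.
    interface MailSender
    {
        void send(String to, String subject, String body);
    }

    static class OrderService
    {
        private final MailSender mailSender;

        OrderService(MailSender mailSender)
        {
            this.mailSender = mailSender;
        }

        void confirm(String customerEmail)
        {
            mailSender.send(customerEmail, "Order confirmed", "Thanks!");
        }
    }

    @Test
    public void confirmationSendsMail()
    {
        // Mock our own interface, never the external API.
        MailSender sender = mock(MailSender.class);
        new OrderService(sender).confirm("jo@example.com");
        verify(sender).send("jo@example.com", "Order confirmed", "Thanks!");
    }
}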
The Future of CI Servers
I wasn’t too keen on this topic, but since it is my business, I felt compelled. I actually proposed a similar topic at my first CITCON back in Sydney and found it a disappointing session then, so my expectations were low. Apart from some less interesting probing of features already on the market, the conversation did wander onto the more interesting challenge of scaling development teams.
The agile movement recognises that the two main challenges (and opportunities) in software development are people and change. So it was interesting to hear this recast as wanting to return to our “hacker roots” — where we could code away in a room without the challenges of communication, integration and so on. Ideas such as using information radiators to bring a “small team” feel to large and/or distributed teams were mentioned. A less tangible thought was some kind of frequent but subtle feedback about potential integration issues. Most of the time you could code away happily, but in the background your tools would be constantly keeping an eye out for potential problems. What I like about this is the subtlety angle: given the benefits it’s easy to think that more feedback is always better, without thinking of the cost (e.g. interruption of flow).
Acceptance Testing
This year it seemed like every other session involved acceptance testing somehow. Not terribly surprising I guess since it is a very challenging area both technically and culturally. As I missed most of these sessions, they are probably better captured by other posts:
- Top 10 reasons why teams fail with Acceptance Testing by Gojko
- CITCON Paris 2009 by Antony
- CITCON Europe 2009 Sessions on the wiki
One idea I would call attention to is growing a custom, targeted solution for your project. I believe it was Steve Freeman who drew attention to an example in the Eclipse MyFoundation Portal project. If you drill down you can see use cases represented in a custom swim lane layout.
Water Cooler Discussions
Of course a great aspect of the conference is the random discussions you fall into with other attendees. One particular discussion (with JtF) has given me a much-needed kick up the backside. We were talking about the problems with trying to use acceptance tests to make up for a lack of unit testing. This is a tempting approach on projects that don’t have a testable design and infrastructure in place — it’s just easier to start throwing tests on top of your external UI.
Even though I knew all the drawbacks of this approach, I had to confess that this is essentially what has happened with the JavaScript code in Pulse. We started adding AJAX to the Pulse UI in bits and pieces without putting the infrastructure in place to test this code in isolation. Fast forward to today and we have a considerable amount of JavaScript code which is primarily tested via Selenium. So we’re now going to get serious about unit testing this code, which will simultaneously improve our coverage and reduce our build times.
Conclusion
To wrap up, after returning from Paris I plan to:
- Give expectations a fair hearing, by reading Steve’s book.
- Look for ways to improve our own information radiators to help connect Zutubi Sydney and London.
- Get serious about unit testing our JavaScript code.
- Get PJ and JtF to swap the dates for CITCON Asia/Pacific and Europe next year so I can get to both instead of neither!
If I succeed at the last of these (sadly not likely!) then I’ll certainly be back next year!
CITCON Paris 2009
As mentioned, Daniel and I both attended CITCON Paris the weekend before last. I’ve not had a chance to post a follow-up yet as we also took the opportunity to eat the legs off every duck in France (well, we tried).
Firstly a huge thanks to PJ, Jeff, Eric and all the other volunteers for another great conference. Thanks again to Eric and Guillaume for acting as local guides on Saturday night. As always, the open spaces format and mix of attendees delivered a great day. It was also great to see a few familiar faces from the year before in Amsterdam (and a familiar shirt, thanks to Ivan).
This year I proposed and facilitated a single topic: Distributed SCM in the Corporate World. I finally added a full write-up on the conference wiki earlier in the week for those who are interested. For the impatient, here are my take-aways from the session:
- Of the distributed SCMs, there is not much traction in the corporate world just yet, although git appears to have gained a foothold. (Obviously our sample size is small, but I also expect CITCON attendees to be closer to the edge than the average team.)
- Where distributed SCMs are used, the topology is still like the centralised model. However, the ability to easily clone and move changes between repositories presents opportunities to work around issues like painful networks (contrast this to special proxy servers which are needed in similar scenarios with centralised SCMs).
- The people using git liked it primarily for its more flexible workflow and better merging. It’s conceivable to have this in the centralised model too, but no single centralised contender was mentioned.
- So far the use of distributed SCMs didn’t seem to have practical implications for CI – probably due to the use of a centralised topology.
Looks like we’re still waiting to see more creative use of distributed SCMs in corporate projects – perhaps it is something worth revisiting in future conferences. I hope to post on some of the other sessions I attended at a later date.
Zutubi @ CITCON Paris 2009
Any excuse is good enough to get me to Paris, especially while it is only a train ride away. Daniel has actually been tempted all the way from Sydney!1 So you’ll find us both at CITCON Europe 2009 tomorrow night and Saturday. We’re both looking forward to a great weekend, after nothing but positive experiences at previous events. Hopefully we’ll even get a few questions about the new Pulse 2.1 Beta while we’re there!
–
1 Although combining it with a well-deserved holiday may have been a factor…
Pulse Continuous Integration Server 2.1 Beta!
Exciting news: today we’ve pushed the latest version of Pulse, namely 2.1, to beta! This is the culmination of months of hard work on a ton of new features and improvements, including:
- Project dependency support.
- Easier multi-command projects.
- Personal build improvements.
- Fine-grained cleanup rules.
- Built-in reference documentation.
- Pluggable commands (build tool support).
- A simpler, faster configuration UI.
The new features are described in more detail on the 2.1 beta page. The largest are the first two: dependencies and multi-command projects.
Project Dependencies
The ability to deliver artifacts from one build to another is a long-standing feature request. Pulse 2.1 supports this as part of a larger dependencies feature. Essentially you can declare one project to be dependent on another, allowing the downstream project to use artifacts built by the upstream one. Artifacts are delivered through an internal artifact repository.
The dependencies feature goes beyond artifact delivery. It also includes smarter triggering for dependent projects, the ability to rebuild a full dependency tree and a new “dependencies” tab which allows you to visualise the dependency graph.
Dependency support is built on top of Apache Ivy. Our aim is for interoperability with existing tools like Ivy and Maven, but without being Java-specific.
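Pulse’s own dependency configuration lives in the GUI rather than in files, but since the engine underneath is Ivy, a plain ivy.xml (with hypothetical modules) gives a feel for the underlying model:
<ivy-module version="2.0">
    <info organisation="com.mycompany" module="server"/>
    <dependencies>
        <!-- Use the latest successfully published revision of the core module. -->
        <dependency org="com.mycompany" name="core" rev="latest.integration"/>
    </dependencies>
</ivy-module>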
Multi-Command Projects
We’ve always had support for multi-command projects in the Pulse build core. However, to access this full flexibility you previously had to write an XML “pulse file” by hand. As of 2.1, the configuration GUI exposes the full flexibility of the underlying build core. This allows you to define multiple recipes per-project, each of which can have multiple commands. All of the advanced command options once restricted to XML files are now also accessible in the GUI.
A key feature related to this is the ability to plug in new commands (e.g. to support a new build tool), and have the plugin seamlessly integrated into the add project wizard. If you plug in support for a command, you get simplified wizard creation of single-command projects using your plugin for free.
Give It A Go!
You can download Pulse today to try it out. Free licenses are available for evaluation, open source projects and small teams.
Are You Running Your Software Project Like an Investment Bank?
Recent experience has shown a wee problem with the way investment banks incentivise1 their workforce. The so-called bonus culture encourages excessive risk-taking optimised for short-term gain. This suits the employee, who pockets a succession of massive bonuses before the bottom eventually falls out (and spectacularly). When the inevitable happens, said employee can take a year or two off, collecting rare South American reptiles (or whatever else takes their fancy), before returning at the beginning of the next bubble.
What has this got to do with software? Allow me to stretch to not one, but three analogies…
Lack of Feedback
The shifting of pain caused by the credit crunch is self-evident: around the world governments are propping up failing banks, and workers from unrelated industries are losing their jobs. The root problem is allowing people to create pain without feeling the effects themselves. This lack of feedback allows people to optimise selfishly, consciously or unconsciously. Analogous situations arise in software projects when developers are isolated from their downstream users. For example:
- A setup which places responsibility for quality on a dedicated QA team, rather than across the whole project, allows developers to disclaim responsibility for quality issues. This in turn allows them to churn out high volumes of poor quality code without paying a toll for it. That is, until the day when the bubble bursts, and they realise they are encumbered with an unmaintainable pile of spaghetti which defies improvement.
- Every layer placed between developers and their end users muffles critical feedback. Opportunities to make simple fixes that could vastly improve the user experience are missed, as the message never gets through all the layers. The users are stuck with the pain, and the developers remain blissfully ignorant.
The solution: shorten feedback loops wherever possible. Make feedback faster, easier, more visible and act on it.
Metric Abuse
Metrics are a wonderful and useful tool. Indeed, they can be an important type of the feedback called for above. But they also have a dark side: they can easily be abused to create flawed incentive systems.
Probably the worst abuse of metrics is trying to measure “productivity”. In investment banking, taking profit as productivity leads to long-term issues due to unsustainable risk. This is analogous to measuring developer productivity by the quantity of code (or stories, or whatever) produced. This doesn’t take into account the quality of the resultant software, not just in terms of code quality, but other intangibles like the user experience. You might ship a lot of features, but eventually your software will become so buggy and complicated that people will stop using it. “No problem”, I hear you say, “just use quality metrics too!”. But these exhibit the same issues: any indirect measurement is flawed, and ultimately leads to people optimising the metric rather than their productivity.
The solution: don’t pretend productivity is a number. Life just isn’t that simple: you need to pay attention to the bigger picture.
Technical Debt
Apologies for mixing my financial metaphors, but my final analogy to draw revolves around Technical Debt. Although we might recognise and acknowledge that short term wins can cost more in the long run, sometimes practicality dictates taking a short cut. In the real world we do need to make compromises, whether we like it or not. This gets out of hand, though, when we don’t acknowledge the debt we are building up. You can try, like the banks, to push your debt to the side – hiding it away in obscurity. But the debt is still there, and it’s not getting any easier to pay off.
The solution: if you take a short cut, acknowledge the debt you are creating. After the real-world crisis has passed, start paying it back immediately.
–
1 Bingo!
Ready to Test: Maven + TestNG + Hamcrest + Mockito
I’m no Maven fanboy, but for a new, small Java project the ultra-fast setup time is compelling. So, for my latest little application, I decided to give it a go. Sadly, the default Maven archetype lumped my project with JUnit 3.8.1. Boo. And although the TestNG website mentions an alternative archetype, it appears to have disappeared off the face of the internet.
Luckily, dragging my project into the present wasn’t difficult. Along the way I also added my essential testing libraries: Hamcrest for matchers and Mockito for mocking (well, stubbing, but that’s another story). For posterity’s sake, and for others that share my testing tastes, here’s how it’s done.
Requirements
I’m assuming that you have Maven 2 installed already. If not, it’s trivial to:
- Download the latest (2.1.0 at time of writing); and
- Install it according to the instructions provided.
You can check if you have Maven ready to go by running:
$ mvn --version
Apache Maven 2.1.0 (r755702; 2009-03-18 19:10:27+0000)
Java version: 1.6.0_12
Java home: /usr/local/java/jdk1.6.0_12/jre
Default locale: en_AU, platform encoding: UTF-8
OS name: "linux" version: "2.6.28-11-generic" arch: "amd64" Family: "unix"
Bootstrap
With the lack of an available alternative, I found it easiest to start with the default archetype. To create a new project, run something like:
$ mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app \
      -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
Remember that if you’ve just installed Maven it will take this opportunity to download the internet. Be patient. If you’re new to Maven, you might also want to check out the 5 minute guide which walks through this in more detail.
Check that your bare application has been created:
$ find . -type f
./src/main/java/com/mycompany/app/App.java
./src/test/java/com/mycompany/app/AppTest.java
./pom.xml
$ mvn package
…
$ java -cp target/my-app-1.0-SNAPSHOT.jar com.mycompany.app.App
Hello World!
Add the Dependencies
Next you need to update the project POM to add the testing libraries as dependencies to your build. This involves three changes:
- Removing the default dependency on JUnit.
- Adding new dependencies for TestNG, Hamcrest and Mockito to the “test” scope.
- Configuring the compiler to accept Java 5 source.
The last step is necessary as the Maven default setting assumes Java 1.3 source, which apart from being ancient doesn’t support goodies such as annotations that are required for the new testing libraries. Your updated pom.xml file should look something like:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycompany.app</groupId>
<artifactId>my-app</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>
<name>my-app</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
<version>5.8</version>
<scope>test</scope>
<classifier>jdk15</classifier>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<version>1.8.0-rc2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-all</artifactId>
<version>1.1</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
I’ve used the latest available versions of each of the libraries in this example — tweak them to suit your current reality.
Update the Sample Test
Now you’re ready to try updating the sample test case to use the trinity of TestNG, Hamcrest and Mockito. The easiest way to do this is to get Maven to generate a project for your IDE, e.g.:
$ mvn idea:idea
or:
$ mvn eclipse:eclipse
Fire up your chosen IDE, open the AppTest class, and edit it to exercise all of the dependencies:
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.testng.annotations.Test;
import java.util.Random;
public class AppTest
{
@Test
public void testApp()
{
Random mockRandom = mock(Random.class);
when(mockRandom.nextInt()).thenReturn(42);
assertThat(mockRandom.nextInt(), equalTo(42));
}
}
What are you waiting for? Try it out:
$ mvn test
…
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18 seconds
[INFO] Finished at: Wed Jun 24 16:11:48 BST 2009
[INFO] Final Memory: 20M/100M
[INFO] ------------------------------------------------------------------------
If you got this far, then everything you need is in place. Now you just have to … implement your project!
Extra Credit
If you poke about a bit, you will also find that the Maven Surefire plugin, which manages the tests, generates some reports by default. Along with HTML output, it also produces a JUnit-like XML report at:
target/surefire-reports/TEST-com.mycompany.app.AppTest.xml
This report is ideal for integration with a continuous integration server (in my case Pulse, naturally, but many will support it).
Happy testing!