Archive for the ‘Zutubi’ Category
A large part of our focus with Pulse revolves around saving time. We started Pulse with the belief that it shouldn’t be so hard to set up a continuous integration server, nor should it take so much effort to maintain. With that in mind, I’ve highlighted the main ways we achieve simplicity and maintainability in Pulse in two new demo videos:
- Getting Started With Pulse: in which I start from scratch, installing Pulse, adding a new project and running a first build in under three minutes. The video is unabridged: I did nothing but follow the simple steps laid out in front of me.
- Templated Configuration: in which I demonstrate how Pulse’s unique templated configuration system saves you time configuring and (especially) maintaining your continuous integration server. Templates make CI DRY.
We focus on saving time simply because it adds a lot of value to Pulse. Our customers tell us that simplicity, maintainability and dedicated support are the main reasons they chose Pulse to manage their builds. Give it a go yourself: you can get started in no time.
The latest Pulse 2.1 beta build, 2.1.11, has just been freshly baked. This build includes several new features and improvements. Prominent among them is a new “statistics” tab for agents. This tab lists various figures such as the number of recipes the agent executes each day and how long the average recipe keeps the agent busy. Statistics are also shown for agent utilisation, including a pie chart that makes it easy to visualise:
This allows you to see if you are getting the most out of your agent machines. If you do notice a machine is underutilised, another new feature could help identify the cause: compatibility information for projects and agents. Pulse matches builds to agents by considering if the resources required for the project are all available on the agent. Now when you configure requirements, Pulse shows you which agents those requirements are compatible with. On the flip side, when configuring an agent’s available resources, Pulse shows you which projects those resources satisfy.
Other highlights in this build:
- Optional compression of large build logs (on by default).
- Visual indicators of which users are logged in, and last access times for all users.
- Support for Subversion 1.6 working copies for personal builds.
- Actions can now be performed on all descendants of a project or agent template (e.g. disable all agents with one click).
- New options to terminate a build early if a critical stage or a given number of stages has already failed.
- The system/agent info tabs now show the Pulse process environment (visible to administrators only).
- Use of bare git repositories on the Pulse master to save disk space.
Yes, we have been busy. Get over to our website and download the beta now — it’s free to try, and a free upgrade for customers with current support contracts!
We’ve reached another significant milestone in the Pulse 2.1 beta: the release of 2.1.9. This latest build rolls up a stack of fixes, improvements and new features. Some of the much-anticipated improvements include:
- Support for NAnt in the form of a command and post-processor.
- Support for reading NUnit XML reports.
- Support for reading QTestlib XML reports.
- The ability to mark unstable tests as “expected” failures: they still look ugly (so fix them!) but won’t fail your build.
- Better visibility of what is currently building on an agent.
- New refactoring actions to “pull up” and “push down” configuration in the template hierarchy.
- The ability to specify Perforce client views directly in Pulse.
I’ll expand upon some of these in later posts. In addition we’ve made great progress on the new project dependencies support, which should be both easier to use and more reliable in this build.
We’d love you to download Pulse 2.1 and let us know what you think!
I went along to this session as a chance to hear about mock objects from the perspective of someone involved in their development, Steve Freeman. If you’ve read my Four Simple Rules for Mocking, you’ll know I’m not too keen on setting expectations, or even on verification. I mainly use mocking libraries for stubbing. Martin Fowler’s article Mocks Aren’t Stubs had made me think that Steve would hold the opposite view:
The classical TDD style is to use real objects if possible and a double if it’s awkward to use the real thing. So a classical TDDer would use a real warehouse and a double for the mail service. The kind of double doesn’t really matter that much.
A mockist TDD practitioner, however, will always use a mock for any object with interesting behavior. In this case for both the warehouse and the mail service.
So my biggest takeaway from this topic was that Steve’s view was more balanced and pragmatic than Fowler’s quote suggests. At a high level he explained well how his approach to design and implementation leads to the use of expectations in his tests. I still have my reservations, but was convinced that I should at least take a look at Steve’s new book (which is free online, so I can try a chapter or two before opting for a dead tree version).
A few more concrete pointers can be found in the session notes. A key one for me is to not mock what you don’t own, but to define your own interfaces for interacting with external systems (and then mock those interfaces).
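That last pointer can be sketched in code. Rather than mocking a vendor library directly, you define a thin interface you own and mock that. All names below (MailGateway, OrderService) are hypothetical, and the sketch uses Python's unittest.mock purely for illustration:

```python
from unittest import mock

class MailGateway:
    """An interface we own, wrapping some third-party mail library we don't."""
    def send(self, recipient, subject, body):
        raise NotImplementedError  # a real implementation would call the vendor API

class OrderService:
    """Application code depends only on our own interface."""
    def __init__(self, mail_gateway):
        self.mail = mail_gateway

    def complete_order(self, order_id, customer_email):
        # ... fulfil the order, then notify the customer ...
        self.mail.send(customer_email, "Order %s shipped" % order_id, "Thanks!")

def test_complete_order_notifies_customer():
    # Mock the interface we own, never the vendor library itself.
    gateway = mock.Mock(spec=MailGateway)
    service = OrderService(gateway)
    service.complete_order("42", "jo@example.com")
    gateway.send.assert_called_once_with(
        "jo@example.com", "Order 42 shipped", "Thanks!")
```

Because the mocked interface is under your control, tests stay stable when the external system's API changes: only the one adapter implementing MailGateway needs updating.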
The Future of CI Servers
I wasn’t too keen on this topic, but since it is my business, I felt compelled. I actually proposed a similar topic at my first CITCON back in Sydney and found it a disappointing session then, so my expectations were low. Apart from the less interesting probing of features already on the market, the conversation did wander onto the more interesting challenge of scaling development teams.
The agile movement recognises that the two main challenges (and opportunities) in software development are people and change. So it was interesting to hear this recast as wanting to return to our “hacker roots” — where we could code away in a room without the challenges of communication, integration and so on. Ideas such as using information radiators to bring a “small team” feel to large and/or distributed teams were mentioned. A less tangible thought was some kind of frequent but subtle feedback of potential integration issues. Most of the time you could code away happily, but in the background your tools would be constantly keeping an eye out for potential problems. What I like about this is the subtlety angle: given the benefits it’s easy to think that more feedback is always better, without thinking of the cost (e.g. interruption of flow).
This year it seemed like every other session involved acceptance testing somehow. Not terribly surprising I guess since it is a very challenging area both technically and culturally. As I missed most of these sessions, they are probably better captured by other posts:
- Top 10 reasons why teams fail with Acceptance Testing by Gojko
- CITCON Paris 2009 by Antony
- CITCON Europe 2009 Sessions on the wiki
One idea I would call attention to is growing a custom, targeted solution for your project. I believe it was Steve Freeman who drew attention to an example in the Eclipse MyFoundation Portal project. If you drill down you can see use cases represented in a custom swim lane layout.
Water Cooler Discussions
Of course a great aspect of the conference is the random discussions you fall into with other attendees. One particular discussion (with JtF) has given me a much-needed kick up the backside. We were talking about the problems with trying to use acceptance tests to make up for a lack of unit testing. This is a tempting approach on projects that don’t have a testable design and infrastructure in place — it’s just easier to start throwing tests on top of your external UI.
To wrap up, after returning from Paris I plan to:
- Give expectations a fair hearing, by reading Steve’s book.
- Look for ways to improve our own information radiators to help connect Zutubi Sydney and London.
- Get PJ and JtF to swap the dates for CITCON Asia/Pacific and Europe next year so I can get to both instead of neither!
If I succeed at the last one (sadly not likely!) then I’ll certainly be back next year!
As mentioned, Daniel and I both attended CITCON Paris the weekend before last. I’ve not had a chance to post a follow-up yet as we also took the opportunity to eat the legs off every duck in France (well, we tried).
Firstly a huge thanks to PJ, Jeff, Eric and all the other volunteers for another great conference. Thanks again to Eric and Guillaume for acting as local guides on Saturday night. As always, the open spaces format and mix of attendees delivered a great day. It was also great to see a few familiar faces from the year before in Amsterdam (and a familiar shirt thanks to Ivan).
This year I proposed and facilitated a single topic: Distributed SCM in the Corporate World. I finally added a full write-up on the conference wiki earlier in the week for those who are interested. For the impatient, here are my take-aways from the session:
- Of the distributed SCMs, there is not much traction in the corporate world just yet, although git appears to have gained a foothold. (Obviously our sample size is small, but I also expect CITCON attendees to be closer to the edge than the average team.)
- Where distributed SCMs are used, the topology is still like the centralised model. However, the ability to easily clone and move changes between repositories presents opportunities to work around issues like painful networks (contrast this to special proxy servers which are needed in similar scenarios with centralised SCMs).
- The people using git liked it primarily for its more flexible workflow and better merging. It’s conceivable to have this in the centralised model too, but no single centralised contender was mentioned.
- So far the use of distributed SCMs didn’t seem to have practical implications for CI – probably due to the use of a centralised topology.
Looks like we’re still waiting to see more creative use of distributed SCMs in corporate projects – perhaps it is something worth revisiting in future conferences. I hope to post on some of the other sessions I attended at a later date.
Any excuse is good enough to get me to Paris, especially while it is only a train ride away. Daniel has actually been tempted all the way from Sydney!¹ So you’ll find us both at CITCON Europe 2009 tomorrow night and Saturday. We’re both looking forward to a great weekend, after nothing but positive experiences at previous events. Hopefully we’ll even get a few questions about the new Pulse 2.1 Beta while we’re there!
¹ Although combining it with a well-deserved holiday may have been a factor…
Exciting news: today we’ve pushed the latest version of Pulse, namely 2.1, to beta! This is the culmination of months of hard work on a ton of new features and improvements, including:
- Project dependency support.
- Easier multi-command projects.
- Personal build improvements.
- Fine-grained cleanup rules.
- Built-in reference documentation.
- Pluggable commands (build tool support).
- A simpler, faster configuration UI.
The new features are described in more detail on the 2.1 beta page. The largest are the first two: dependencies and multi-command projects.
The ability to deliver artifacts from one build to another is a long-standing feature request. Pulse 2.1 supports this as part of a larger dependencies feature. Essentially you can declare one project to be dependent on another, allowing the downstream project to use artifacts built by the upstream one. Artifacts are delivered through an internal artifact repository.
The dependencies feature goes beyond artifact delivery. It also includes smarter triggering for dependent projects, the ability to rebuild a full dependency tree and a new “dependencies” tab which allows you to visualise the dependency graph.
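Rebuilding a full dependency tree amounts to a topological walk of the project graph: every upstream project builds and publishes its artifacts before anything that depends on it. As a rough illustration (the project names are invented, and this is not Pulse's actual implementation), Python's standard graphlib shows the idea:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency graph: each key lists the projects it depends on.
deps = {
    "server": {"core-lib"},
    "client": {"core-lib"},
    "installer": {"server", "client"},
}

# static_order() yields projects with all dependencies satisfied first, so
# "core-lib" builds before "server" and "client", and "installer" builds last.
order = list(TopologicalSorter(deps).static_order())
```

Projects with no path between them (here, "server" and "client") have no required ordering, which is also what lets a CI server build them in parallel on separate agents.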
Dependency support is built on top of Apache Ivy. Our aim is for interoperability with existing tools like Ivy and Maven, but without being Java-specific.
We’ve always had support for multi-command projects in the Pulse build core. However, to access this full flexibility you previously had to write an XML “pulse file” by hand. As of 2.1, the configuration GUI exposes the full flexibility of the underlying build core. This allows you to define multiple recipes per project, each of which can have multiple commands. All of the advanced command options once restricted to XML files are now also accessible in the GUI.
A key feature related to this is the ability to plug in new commands (e.g. to support a new build tool), and have the plugin seamlessly integrated into the add project wizard. If you plug in support for a command, you get simplified wizard creation of single-command projects using your plugin for free.
Give It A Go!
You can download Pulse today to try it out. Free licenses are available for evaluation, open source projects and small teams.
I’ve just released another Pulse 2.0 build, and the new features are still coming. In this case, I have added the long-anticipated ability to capture custom metrics with your build results, then report trends for these metrics over time. You can capture any benchmark of your build that you can imagine, then have Pulse chart it for you. All sorts of charts are supported, with flexible configuration.
As an example, imagine capturing some simple performance measures as numbers. You can then configure Pulse to show them in a line chart:
Pulse stores the metric values with each build, so you can easily see how performance is changing over time:
Most importantly, if performance drops, you can see exactly when it happened, making it much easier to figure out why. You can learn more about how to configure custom reports in the Cookbook.
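Under the hood, capturing a numeric metric typically means scanning build output for a known pattern and recording the matched values against the build. A minimal sketch of the idea, with invented log lines and metric names (not Pulse's actual post-processor configuration):

```python
import re

# Hypothetical build output: the build script prints benchmark results in a
# predictable "BENCHMARK name=value" format that a post-processor can scan.
log = """\
Compiling...
BENCHMARK parse_time_ms=128.5
BENCHMARK render_time_ms=47.2
Build successful.
"""

# Extract every metric name/value pair into a dictionary of floats.
metrics = {
    name: float(value)
    for name, value in re.findall(r"BENCHMARK (\w+)=([\d.]+)", log)
}
```

Storing one such dictionary per build is what makes trend charts possible: each metric name becomes a series, and each build contributes one data point.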
As part of this change, I also tweaked the default reports that Pulse generates for you. Now all of these reports are configured in the same way as the custom ones above, allowing you to change them to suit your preference. Not only that, but more reports come “out of the box”:
Sweet! Interested in continuous monitoring of your project’s performance? Try Pulse 2.0 today with a free small team, open source or evaluation license.
The Pulse 2.0 feature train keeps on rolling. Latest stop: using Pulse to communicate responsibility for a build.
Breaking the build is something to avoid, but even more important is the reaction when it inevitably does break. The key is fixing it fast, before it starts breaking the team’s flow. The worst possible scenarios are:
- Everyone assumes somebody else is responsible, so nobody fixes it. The build stays broken.
- Multiple people assume responsibility without talking to each other. Effort is wasted as they all work on fixing the same problem.
Both of these scenarios can be fixed by the same solution: communication. Responsibility for fixing the build needs to be taken by one person, quickly, and communicated to the rest of the team. If nobody has taken responsibility, everyone needs to be aware.
Pulse now supports this directly via the take responsibility feature. When you see the build is broken, you can click a link to take responsibility, optionally adding a comment:
Everybody can see who is responsible, both on the project home page, and the summary pages for all builds for the project:
Only one person can be responsible at a time, so there’s no confusion. It’s up to the person responsible to decide when their job is done — although you can optionally have responsibility automatically cleared when a build completes successfully (a pretty good indicator that it’s fixed!).
So, start communicating today, and stop wasting time! You can download Pulse and try a free evaluation today. Happy building!
Since we initially announced Pulse 2.0 four-odd months ago, most of our new feature work has naturally been poured into Pulse 2.1 (currently in early access). This doesn’t mean, however, that Pulse 2.0 has been standing still. On the contrary, as feedback comes in from customers we continue to tweak — mostly by the addition of minor fixes and features that each make life with Pulse a little better.
Enough improvements have piled up that I thought it would be worthwhile to round up the more important ones, especially for the sake of existing customers wondering what the latest versions can offer. Even the smallest features (implementation-wise) can have a big impact, depending on your usage pattern. Here it goes…
- Reworked dashboard and browse views: both of these views have been rewritten to be more dynamic, and they now allow you to save the collapsed state of groups using a simple toolbar.
- Greater dashboard configurability: more control over which groups and projects are shown in which order on your dashboard.
- Links from configuration pages to reports: you could always jump quickly to a project or agent’s configuration; now you can jump back easily too.
- Better log time-stamping: a stamp on every line of output for easier diagnosis of build issues.
- Free-form property descriptions: document your build properties for the benefit of your users.
- Option to run hooks for personal builds: you can now choose on a per-hook basis whether the hook should be run for personal builds.
- Import support for pulse files: manage complex pulse files by breaking them down into import-able pieces.
- Trigger-specific properties: you can pass values through to your build depending on how it was triggered, including remote API triggers.
- Support for new Trac versions: updates to Trac integration for compatibility with the latest Trac release.
- Support for Boost.Test: a new plugin to integrate test results from the popular Boost C++ library.
- Support for skipped test cases: test reporting now understands that tests may be skipped and displays them accordingly.
- Improved Perforce integration: including better client management, building at a label, and support for ticket authentication.
We hope these features make Pulse just that little bit nicer to work with, and look forward to delivering some even bigger features in 2.1. If you want a slice of the action, download the latest Pulse release now!