Evergreen's continuous integration servers - past, present, and future
Posted on Mon 14 March 2011 in Libraries
tl;dr version: the Evergreen project now has a continuous integration server and build farm, and it needs test cases to make the best use of that infrastructure and help us provide higher-quality releases in the future.
Evergreen buildbot - past
Back in November 2009, Evergreen developer Shawn Boyette launched the Evergreen buildbot - a continuous integration server that ran basic tests with every commit to the OpenSRF and Evergreen repositories and created nightly tarballs of the code. It was a promising start towards a system that would provide us with instant feedback about the state of our code - at least as much as we had tests for it. Unfortunately, the server ran for only a few months before disappearing when Shawn parted ways with Equinox in early 2010.
I always thought it was a shame that we had lost this piece of the development infrastructure. Equinox had offered accounts on a server for anyone in the Evergreen community interested in taking on the task of setting up a new continuous integration server, but through the rest of 2010 nobody stepped up to take on that responsibility - most of us were busy developing and testing Evergreen 2.0, I suspect. So, in January 2011, when I had a bit of breathing room, I scoped out the current state of continuous integration frameworks and discovered that the buildbot project (no relation to Shawn's code, other than a serendipitous name) was written in Python, and therefore much more approachable to me than the other leading alternative, Hudson... so I wrote up my findings and a quick proposal.
Evergreen buildbot - present
A few days later I had the buildbot running on the server provided by Equinox, reporting on the status of the OpenSRF builds on Ubuntu Lucid. After putting out a call to the community for build servers to provide coverage for Evergreen on different operating systems, I received enough responses to turn my attention to improving the Evergreen build itself. Evergreen now has the same standard layout for Perl modules that we adopted a year ago for OpenSRF, along with some basic sanity tests in Perl (such as: does this module compile without syntax errors?).
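That per-module sanity check can be sketched as a small driver script. This is a hedged illustration rather than the project's actual test harness - the real checks use Perl's own tooling, and the perl -c invocation, the check_tree helper, and the *.pm pattern are my own illustrative choices:

```python
# Illustrative sketch only: walk a module tree and syntax-check each file,
# the way a "does this module even compile?" sanity test would.
import subprocess
from pathlib import Path

def syntax_ok(path, command=("perl", "-c")):
    """Return True if `command` accepts the file (e.g. `perl -c Foo.pm`)."""
    result = subprocess.run([*command, str(path)], capture_output=True)
    return result.returncode == 0

def check_tree(root, command=("perl", "-c"), pattern="*.pm"):
    """Syntax-check every matching module under `root`; return the failures."""
    return [p for p in Path(root).rglob(pattern) if not syntax_ok(p, command)]
```

Because the interpreter command is a parameter, the same driver could sanity-check any language that offers a compile-only switch.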
So, thanks to Equinox for providing the testing server that serves as the mothership for controlling all of the build tests. And many thanks to the University of Prince Edward Island Robertson Library and the Georgia Public Library Service for providing build servers for the build farm. We now have Evergreen test coverage on the Ubuntu Lucid and Debian Squeeze Linux distributions (huzzah) and OpenSRF test coverage on Ubuntu Lucid. If you have an interest in getting test coverage for a different distribution and have a server to spare, please feel free to contact me and we can get your server added to the build farm.
Checking build status
You can check the current state of the code for various OpenSRF and Evergreen branches at any time by visiting the Evergreen buildbot page and choosing one of the menu options.
"Recent builds" provides a simple list of the success or failure of the 20 most recent builds.
"Waterfall", on the other hand, provides the detailed status of every tested combination of Linux distribution and code branch.
Evergreen buildbot - future
We still have work to do to deliver on the promise of the buildbot. Most important, I think, is that a continuous integration server can only run the tests that it has been given - and we have not given it many tests.
It kills me that people discovered some fairly fundamental problems with the Evergreen 2.0 release (some recent examples include most identifier searches not working and limitations with Unicode in patron names). Now that we have a continuous integration server, we need a testing framework that makes it easy to add tests along the lines of "Import a set of sample bibliographic records, then run the following sets of searches (ISSN and ISBN with and without hyphens; EAN; UPC...) and ensure that the returned results match these expected results". It should be a human's job to set up that automated test once, so that no matter what we change in our database schema or underlying code, we can stay confident that we're not breaking those basic features.
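The sort of search regression test described above can be sketched in miniature. Everything here is a hypothetical stand-in for what the real testing framework would provide - the sample records, the normalize_identifier rule, and the search_isbn helper are all invented for illustration:

```python
# Hypothetical sketch of an identifier-search regression test: import sample
# records, then assert that hyphenated and unhyphenated ISBN forms both match.

SAMPLE_RECORDS = [
    {"id": 1, "isbn": "9780316769488", "title": "The Catcher in the Rye"},
    {"id": 2, "isbn": "9780743273565", "title": "The Great Gatsby"},
]

def normalize_identifier(term):
    """Strip hyphens and spaces so '978-0-316-76948-8' matches '9780316769488'."""
    return term.replace("-", "").replace(" ", "")

def search_isbn(records, term):
    """Return the ids of records whose ISBN matches the normalized term."""
    needle = normalize_identifier(term)
    return [r["id"] for r in records if r["isbn"] == needle]

# The test itself: both forms of the identifier must return the same record.
assert search_isbn(SAMPLE_RECORDS, "978-0-316-76948-8") == [1]
assert search_isbn(SAMPLE_RECORDS, "9780316769488") == [1]
```

Once a test like this lives in the buildbot, any schema or code change that breaks identifier normalization turns a build red immediately instead of surfacing after a release.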
Now, there are very few people who could currently create that sort of test. There might be none at the moment, in fact, because the testing framework mentioned above still needs to be sorted out and integrated into the buildbot.
However, in the short term we can create these testing scenarios so that humans can reproduce them during testing blitzes, until such time as we have the testing framework sorted out and can begin automating these tests.
Otherwise, I fear that we'll go into the Evergreen 2.1 alpha/beta/release candidate cycle and get reports from testing that all is well - but only because some of the more complex tasks haven't actually been attempted - and we'll find ourselves scrambling once again after the release to fix problems that only become evident when sites actually start moving to it.
Beyond tests, we need to teach the buildbot to create cleanly packaged tarballs on a regular basis - although that should arguably amount to little more than running make package, rather than pushing all kinds of specialized packaging logic into the buildbot itself.
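Assuming such a make package target existed, the buildbot side could stay as thin as a single shell step in the master configuration. This is a sketch against the buildbot 0.8-series API, not our live configuration - the factory, step descriptions, and repository URL are illustrative:

```python
# Sketch of a packaging builder: check out the code, then run `make package`.
# All the packaging logic stays in the build system, not in buildbot.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand
from buildbot.steps.source import Git

f = BuildFactory()
f.addStep(Git(repourl="git://git.evergreen-ils.org/OpenSRF.git"))
f.addStep(ShellCommand(command=["make", "package"],
                       description="packaging tarball",
                       descriptionDone="tarball packaged"))
```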
Autotools wizards, your assistance would be greatly appreciated.
Spreading Evergreen buildbot knowledge
To ensure that our project can survive the loss of the current master build server (or me, for that matter!), I've been committing a password-sanitized copy of the buildbot configuration to the examples directory of the OpenSRF repository. In addition to reducing our dependency on one person and one server, this gives anyone else interested in contributing to the Evergreen buildbot the ability to easily define a build master and build slaves in a local environment.
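For the curious, the heart of such a sanitized master configuration boils down to a few lines like these (again assuming the buildbot 0.8-series API; the slave name, password placeholder, and port number are illustrative, not the live values):

```python
# Minimal, password-sanitized master.cfg fragment: declare one build slave
# and the port it connects on. Real slave names and passwords stay private.
from buildbot.buildslave import BuildSlave

c = BuildmasterConfig = {}
c['slaves'] = [BuildSlave("ubuntu-lucid", "CHANGEME")]
c['slavePortnum'] = 9989
```

Anyone can drop real credentials into a copy of this, point a slave at it, and have a working local build master to experiment with.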