Evergreen and the business case for choosing an open source ILS
Posted on Sun 22 April 2007 in Libraries
Due to a sad event, Art Rhyno asked me to be his co-presenter at the OLITA Digital Odyssey 2007. Our broad subject was Evergreen: more specifically, introducing the Evergreen ILS to an audience that was aware of its existence but wanted to know more about it from both a technical and a business perspective. I had two days' notice to prepare, so I split my time between polishing the VMware image of Evergreen and creating the slides for my presentation (PDF).
Art gave a general introduction to open source development, told the story of how Evergreen came about, and described its architecture and the capabilities currently demonstrated on the in-production system at PINES. Perhaps of most interest to the audience, Art talked a bit about the direction he's taking Woodchip, the serials and acquisitions module based on Apache OFBiz that the University of Windsor has agreed to develop for Evergreen. No pressure, Art!
Then the presentation was handed off to me. I started by asking the audience for some demographic information; to no one's surprise, about half of the roughly 60 attendees ran Horizon systems. Many of them paid more than $20,000 annually for support and licensing costs, and most of the sites had the equivalent of one full-time position devoted to the care and feeding of their current library system.
The goals of my presentation were to:
- Demonstrate that the library community has a strong culture of self-support with respect to library systems (based on the volume of email on our closed library systems mailing lists)
- Suggest that the quality of official support we receive from our closed library systems does not warrant the annual support fees we pay
- Point out that we already devote personnel to the care and feeding of our closed library systems, so the refrain of "open source is like getting a free kitten" is fine given that we're currently paying for a dog of a closed system
- Urge the audience to consider what a waste of money and time it is to train staff in the proprietary API, templating language, and so on of a closed system, since that knowledge can become useless if the system is pulled from the market -- whereas investing money and time in learning Evergreen's API and templating language results in reusable skills for your personnel, because those are based on open standards
- Let the community know the preliminary results of my evaluation of the internationalization support offered by Evergreen
- List some of the challenges that we face in achieving a wide adoption of Evergreen
- Notify the community that my VMware image is really, truly close to being released, and suggest that it would be a great way to get started with Evergreen
- Run a quick live demo with the VMware image to prove that a full install of Evergreen can scale down to running in a virtual machine with 512 MB of RAM
My self-assessment? I did not want to come across as an open source zealot; rather, I wanted to point out where our current relationships with our vendors are failing us and how open source can fill in some of those gaps. Unfortunately, I feel that I probably veered a little too much towards the rant side of the continuum a couple of times -- my passion for this subject came through, no doubt, but it was perhaps a little too strong.
I knew my presentation was text-heavy, but I didn't beat myself up too much: a good visual presentation needs more than just a couple of days to come together, and I didn't have a variation of this already in the can somewhere... this was brand-new content. I was pleased that I came up with and shared the visual image of migration ninjas. As the closed vendors' licensing terms might prevent us from openly sharing migration kits or migration how-tos, the “migration ninjas” would be the community's system gurus who would slip into a library and perform the secret, inhuman feats necessary to migrate from a closed system to an open one.
I wasn't at all happy with my live demo. First, I had failed to arrange an Internet connection with the conference hosts, so the cover art in the catalog and the Z39.50 copy cataloging in the staff client were both a bust. Second, while I knew it would be an exploratory live demo, given that I had achieved a full working install only a few days before the session, it's not very impressive for an audience to watch a presenter fumble around at the command line in response to a question about the API. Third, I failed to show off some really cool features of Evergreen, such as the shelf browser (although without cover art it wouldn't have been nearly as impressive). I also tried firing up the reports Web interface and failed. So, now that I have a working install, I'll be able to prepare a much better live demo in the future -- I just hope that our audience didn't take away a bad impression from our session on Friday.
Questions from the audience
We had some good questions from the audience; here's what I can remember. Please add more in the comments on this post if you have them!
Why is there so much interest in Evergreen and why aren't we hearing much about Koha?
Dan said something about how his first investigation of Koha revealed evidence of classic MySQL dependencies and assumptions in the codebase that, as a former product planner for IBM's DB2 relational database, made him cringe. Evergreen, in comparison, is built on PostgreSQL, which was reassuring. I failed to note at the time that Evergreen has been developed so that it can support other databases, although some work would be required to convert to the SQL dialect and full-text search facilities of the target database.
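To make that concrete, here's a hypothetical sketch (not actual Koha or Evergreen code, though the table is loosely modelled on Koha's biblio table) of the kind of MySQL-specific constructs that have to be rewritten when porting to another database:

```sql
-- MySQL-flavoured: backtick quoting, the two-argument LIMIT, and
-- MATCH ... AGAINST full-text search are all MySQL-specific.
SELECT `biblionumber`, `title`
FROM `biblio`
WHERE MATCH(`title`) AGAINST('open source')
LIMIT 20, 10;

-- The same query rendered for PostgreSQL: standard quoting,
-- LIMIT ... OFFSET, and (circa 2007) tsearch2's full-text operators.
SELECT biblionumber, title
FROM biblio
WHERE to_tsvector(title) @@ to_tsquery('open & source')
LIMIT 10 OFFSET 20;
```

Multiply that by every query in a codebase and “some work” starts to look like an understatement.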
Art mentioned that while Koha had been quite popular internationally for the past number of years, it had not been as popular in North America. Part of the reason may have been a severe scalability problem that kicked in somewhere around 450,000 records. Dan suggested that the problem could be traced directly to MySQL 3/4, but that it might have been alleviated in MySQL 5 (which Koha does not yet support). Art noted that Koha ZOOM, using Index Data's Zebra indexing engine, overcame that performance problem, but some extra care was required to commit updates to the index.
What about the dangers of someone forking the code?
In my opinion, we didn't really answer this question well. Art didn't think a fork was likely, as Evergreen had been built with best-of-breed components and plenty of input from the PINES library staff and community. What I should have added was that the ability to fork a project is actually a wonderful feature of open source: it enables communities to route around projects that become overly bureaucratic, closed to new developers, or uninterested in input or in exploring new directions.
You (Dan) talked a lot about the benefits of a system built on standards. Can you show us what the Web templating language looks like?
I fumbled this one badly. I quickly brought up footer.xml, but that doesn't contain any dynamic content, so it was a bad example. I then suffered from presentation brain and couldn't remember the word “introspect” to demonstrate srfsh's ability to introspect its objects. Finally, I (lamely) showed an example of a srfsh API request.
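For the record, here's roughly what I should have typed (reconstructed from memory, so treat the exact output as approximate; opensrf.math is the stock demonstration service that ships with OpenSRF):

```
srfsh# introspect opensrf.math
(... lists each method the service publishes, with its signature ...)

srfsh# request opensrf.math add 2,2
Received Data: 4
```

The same introspect command works against any running service, including the open-ils.* services, which is what makes srfsh such a handy way to explore the API.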
Summary
I believe that a solid business case needs to be developed on a library-by-library or consortial basis for migrations to Evergreen. I think that my presentation provided some useful input to those business cases, but it is not, in and of itself, enough. Certainly, as our own library considers its options in the coming years, we're going to need a much more solid set of criteria before we can make any decision. I encourage you to take what you can from the presentation and then improve, polish, and contribute your own analysis back to the Evergreen community so that you can help other libraries make an informed decision.