Reducing cached content pain after Evergreen upgrades

Posted on Mon 23 May 2011 in Libraries

If you have been through an Evergreen upgrade, you know that the days after the upgrade can be painful. Users complain that the catalogue doesn't work right, there are mysterious glitches that happen on some machines and not others (even though the browser and operating system are identical on each machine!), rebooting doesn't help... and then eventually the problem goes away.

The problem isn't all that mysterious, really; it's the result of the browser caching content. Normally, browser caching is a very positive experience: when a browser requests a file from a Web server, the Web server tells it how long it should hold onto the file via a Cache-Control directive. This means that if a page on your Web site includes dozens or hundreds of images, CSS, and JavaScript files, your browser doesn't have to download every one of those files on every page you visit; as long as a file hasn't expired, the browser can just serve it up from the local cache and only the fresh content needs to be fetched from the server. It's how the Web works, and it's really important for performance reasons.
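For instance, a response for a stylesheet that the server wants cached for a month might carry headers along these lines (a sketch; the exact headers depend on your server's configuration):

```http
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=2592000
```

The `max-age` value is in seconds, so 2592000 means "hold onto this file for 30 days before asking me for it again."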

However, if your Web server has told your browser to cache files for a month, and then during that month you upgrade your Web site so that there are new JavaScript and CSS files that your fresh content depends on, then you can run into trouble until those cached files expire. And that is exactly the case we run into with Evergreen upgrades - only the problem is amplified by how heavily the Evergreen catalogue (which is just a Web site) relies on JavaScript for basic operations.

On the user side, you can handle the problem a few ways:

  • Doing a hard refresh to force the browser to fetch fresh versions of all the files in its cache. You can force a hard refresh on most browsers by holding down the Shift key and clicking the Refresh or Reload button.
  • Emptying the browser cache.

Neither of these user-side approaches is particularly convenient. Doing a hard refresh may work for one page, but as the user navigates to a different page that uses different CSS and JavaScript, they will have to do another hard refresh... and so on, which in the case of Evergreen means users will have to refresh around a half-dozen different pages (home page, search results, record details, account, advanced search). Hard refreshes are also not reliable, as resources fetched by XHR are not actually refreshed (this is a long-standing bug in Chrome and Firefox). If you don't know what XHR is, just know that Evergreen uses a lot of them. And emptying the browser cache is both painful (every browser has a different way of emptying its cache) and overkill (you just want to discard the cache for one site, but most browsers will discard the cache for every site they have visited).

The "right" solution is to have the server tell the browser to fetch a new version of the resource. You could change the caching settings to be very short-lived - for example, change the cache time from one month down to one day for JavaScript and CSS - but unless you upgrade your site very frequently, that would mean that 99% of the time your users' browsers will be making unnecessary requests, and their experience of your catalogue will be that it is slower to load than other sites on the Web. Not so good.
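On an Apache server (Evergreen's usual front end), that kind of cache policy is typically expressed with mod_expires. The following is a hedged sketch, assuming the module is enabled; your actual configuration and content types will differ:

```apache
# Hypothetical mod_expires settings -- adjust types and lifetimes for your site
ExpiresActive On

# Month-long caching: great for day-to-day performance,
# painful in the days after an upgrade
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"

# The short-lived alternative discussed above -- fewer stale files,
# but many unnecessary requests the other 99% of the time:
# ExpiresByType text/css "access plus 1 day"
```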

The other approach is to change the pathname for the cached resources at upgrade time so that the browser doesn't find a match in its local cache and has to fetch the new version. There's some good news: some work has been going on in the Evergreen 2.1 release to tackle this problem, but it is not yet complete, and most sites are only looking at moving to 2.0 right now. As it happens, we made the jump from 1.6 to 2.0.6+ yesterday and boy howdy, the browser cache was a problem after the upgrade, as one would expect. I took a quick stab at identifying the most likely paths that needed to be refreshed and threw together some shell commands to "munge" our catalogue skins so that browsers would be forced to pick up the new versions of the content.

Once the post-upgrade panic subsided, I refactored those commands into a Perl script (well, more precisely, a Perl script that generates shell commands). The Perl script has two hardcoded variables: a datestamp (which is really any uniquely identifying string that can appear in a directory name and URL) and a list of catalogue skins to munge. When you run the script, it generates a set of shell commands that you should be able to run on your Evergreen 2.0 instance to force browsers to fetch the new version of your catalogue's JavaScript and CSS files.
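To give a flavour of the technique, here is a self-contained sketch of the kind of commands such a script might generate. The paths, file names, and datestamp are all hypothetical stand-ins (a toy skin under /tmp rather than a real Evergreen install), but the two steps - copy the resources to a datestamped path, then rewrite the references - are the heart of it:

```shell
#!/bin/sh
# Hypothetical stand-ins: a real script would target your Evergreen skin dirs
DATESTAMP=20110523
SKIN=/tmp/demo_skin

# Set up a toy skin: one CSS file and a template that references it
mkdir -p "$SKIN/css"
echo 'body { margin: 0; }' > "$SKIN/css/style.css"
printf '<link rel="stylesheet" href="/opac/css/style.css">\n' > "$SKIN/index.xml"

# Step 1: copy the cached resources to a datestamped path,
# so the resulting URLs have never been seen by any browser
cp -r "$SKIN/css" "$SKIN/css_$DATESTAMP"

# Step 2: rewrite the references in the templates; browsers will miss
# in their local cache and fetch the files fresh from the server
sed -i "s|/opac/css/|/opac/css_$DATESTAMP/|g" "$SKIN/index.xml"

grep "css_$DATESTAMP" "$SKIN/index.xml"
```

Because the old paths are left in place, any browser still holding cached copies of them simply never asks for them again; there is nothing to invalidate.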

Some limitations: I haven't written a script to convert your skins back to pristine mode (that's mostly a matter of updating the ack-grep commands and reversing the sed commands). And I haven't written a script to update a munged set of skins. And, I'm not 100% sure that I've hit every set of JavaScript and CSS that needs to be refreshed after an upgrade from 1.6 to 2.0. But it's a reasonable start, in my opinion, and hopefully it helps inform the Evergreen 2.1 effort so that we can have a standard, supported, painless means of telling browsers to fetch new resources as an automatic part of any upgrade in the future.