Our nginx caching proxy setup for Evergreen

Posted on Thu 24 August 2017 in Libraries

A long time ago, I experimented with using nginx as a caching proxy in front of Evergreen but never quite got it to work. Since then, a lot has changed in both nginx and Evergreen, and Bill Erickson figured out how to get nginx to proxy the websockets that Evergreen now needs for its web-based staff client. This spring, as part of my work towards building prototype offline support for the Evergreen catalogue's My Account section, I dug in and started figuring out some of the final pieces that are needed to enable nginx to proxy most of the static content that Apache (with its bloated processes) would otherwise have to serve up, and wrote a configuration generator script for the nginx and Apache pieces. And in July, we went live with the configuration.

This post documents what we currently (as of August 2017) are running on our Evergreen 2.12 server with Ubuntu 16.04. If you have any questions about this or our corresponding Apache configuration, please let me know and I'll attempt to answer them!


This is the core configuration for the nginx server:

proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=my_cache:10m max_size=1g
                 inactive=60m use_temp_path=off;
proxy_cache_key $scheme$http_host$request_uri;

server {
    listen 80;
    server_name clients.concat.ca;

    include /etc/nginx/concat_ssl.conf;
    include /etc/nginx/osrf_sockets.conf;

    location / {
        proxy_pass https://localhost:7443;

        rewrite ^/?$ /updates/manualupdate.html permanent;

        include /etc/nginx/concat_headers.conf;
    }
}

  • The proxy_cache_path directive tells nginx where to store the data it is caching, what kind of directory structure it should create (levels), the name of the shared memory zone to use (keys_zone), the maximum size of the disk cache (max_size), how long to retain a cached copy of the file (inactive), and whether to use the value of the proxy_temp_path directive as a parent directory for the cache.
  • The proxy_cache_key directive tells nginx to use a combination of the request scheme (typically HTTP or HTTPS), the hostname, and the full request URI (including GET arguments) to store and look up the cached data. Apache's response tells nginx how long the request should be cached: whether it should expire immediately, or, as of #1681095 "Extend browser cache-busting support", be cached for a full year for images, JavaScript, and CSS (at least until you run autogen.sh again).
  • We currently include one server directive per hostname that we support, which is quite repetitive. Looking at this with fresh eyes, we should probably simply use something like server_name *.concat.ca to cover all of our hostnames on our domain with a single directive.
  • In this block, we only listen to port 80, which seems odd given that we're an HTTPS-only site. Read on!
  • include /etc/nginx/concat_ssl.conf; keeps all of the TLS-related configuration in one place, including listening to port 443. We'll pry open this file later.
  • include /etc/nginx/osrf_sockets.conf; keeps all of the OpenSRF websockets translator proxy configuration in one place. We'll also pry open this file later.
  • The location / block handles the proxying. At first I was nervous and wanted to proxy the actual hostname instead of localhost to ensure we got the right templates, etc., but it turns out the proxy headers guide the request to the right host. So now I'm relaxed and we simply pass the request on to https://localhost:7443. Be very careful with those trailing slashes!
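
Putting the wildcard server_name idea above into practice, a consolidated server block might look something like this sketch (untested; the extra add_header line uses nginx's standard $upstream_cache_status variable to make cache hits and misses visible while debugging):

```nginx
# Hypothetical consolidated server block: one wildcard covers all of our
# hostnames, and a debugging header reports the cache status.
server {
    listen 80;
    server_name *.concat.ca;

    include /etc/nginx/concat_ssl.conf;
    include /etc/nginx/osrf_sockets.conf;

    location / {
        proxy_pass https://localhost:7443;

        # Reports HIT, MISS, EXPIRED, etc. for each proxied response
        add_header X-Cache-Status $upstream_cache_status;

        include /etc/nginx/concat_headers.conf;
    }
}
```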

concat_ssl.conf

listen 443 ssl http2;
ssl_certificate /etc/apache2/ssl/server.crt;
ssl_certificate_key /etc/apache2/ssl/server.key;

if ($scheme != "https") {
    return 301 https://$host$request_uri;
}

# generate with openssl dhparam -out dhparams.pem 2048
ssl_dhparam /etc/apache2/dhparams.pem;

# From https://mozilla.github.io/server-side-tls/ssl-config-generator/
ssl_prefer_server_ciphers on;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;

# OCSP Stapling ---
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;

There's a fair bit going on here, but it's almost entirely related to TLS support, and much of the content comes either from the Mozilla TLS configuration generator or from Certbot's configuration plugin for nginx. Perhaps most interesting is the listen 443 ssl http2; line, which enables listening on the standard HTTPS port and also supports HTTP/2 for browsers that can use it: effectively a way for a browser to issue many parallel requests for resources over a single connection to the server, amongst other performance enhancements.
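
One caveat about the stapling settings: per the nginx documentation, ssl_stapling_verify can only verify the stapled OCSP responses if the issuer chain is available, and nginx needs a resolver to fetch them. So if your server.crt does not bundle the intermediate certificates, you would add something like the following (the chain path and resolver address here are hypothetical; adjust for your own setup):

```nginx
# Hypothetical additions: the CA intermediate/root chain used to verify
# stapled OCSP responses, and a resolver so nginx can fetch them.
ssl_trusted_certificate /etc/apache2/ssl/chain.pem;
resolver 127.0.0.1;
```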

We also force any HTTP request to use an HTTPS connection using the if ($scheme != "https") { block.
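
Because nginx's if directive has some well-known pitfalls, another common way to express the same redirect is a dedicated port-80 server block, with the main server block then listening only on 443. A sketch of that alternative (not what we currently run) would be:

```nginx
# Alternative sketch: catch all plain-HTTP requests in their own server
# block and redirect them, instead of testing $scheme with "if".
server {
    listen 80;
    server_name *.concat.ca;
    return 301 https://$host$request_uri;
}
```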

osrf_sockets.conf

The osrf_sockets.conf file is extracted from the sample nginx configuration shipped with OpenSRF:

location /osrf-websocket-translator {
    proxy_pass https://localhost:7682;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # Needed for websockets proxying.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Raise the default nginx proxy timeout values to an arbitrarily
    # high value so that we can leverage osrf-websocket-translator's
    # timeout settings.
    proxy_connect_timeout 5m;
    proxy_send_timeout 1h;
    proxy_read_timeout 1h;
}

concat_headers.conf

The concat_headers.conf file is not perfectly named; while we do set up the proxy headers in this file, we also include some of the other statements we would otherwise have to repeat inside the server block. Here's what the contents look like:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

proxy_cache my_cache;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_cache_lock on;

rewrite ^/?$ /eg/opac/home permanent;
  • The proxy_set_header directive adds headers to the requests forwarded to Apache, so that Apache can figure out which host was actually requested, accurately log requests (instead of saying everything is coming from localhost), etc. These directives were copied directly from the sample nginx configuration shipped with OpenSRF.
  • proxy_cache tells this server to use the cache we previously named in our keys_zone parameter.
  • proxy_cache_use_stale tells this server to return stale data (if it has a cached copy) if Apache returns an error or a timeout or any of the specified HTTP status codes while trying to fetch a fresh copy.
  • proxy_cache_lock tells this server, when multiple identical requests arrive for data that needs to be cached or refreshed, to allow only a single request through to Apache while the other requests wait. This can be one way to avoid the "someone set a book down on a keyboard and caused 100 identical requests in one second" problem.
  • The rewrite simply directs the request for a bare hostname (with or without a trailing slash) to the catalogue home page.
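
On the Apache side, the cache lifetimes that nginx honours arrive as ordinary Cache-Control/Expires response headers. A minimal mod_expires sketch of the "cache images, JavaScript, and CSS for a full year" behaviour described earlier might look like the following; the exact configuration that #1681095 adds to Evergreen will differ, so treat this purely as an illustration:

```apache
# Hypothetical sketch: mod_expires emits Expires/Cache-Control headers
# that the nginx proxy_cache then honours.
<IfModule mod_expires.c>
    ExpiresActive On
    # Expire immediately by default...
    ExpiresDefault "access"
    # ...but let static assets be cached for a full year
    ExpiresByType image/png "access plus 1 year"
    ExpiresByType text/css "access plus 1 year"
    ExpiresByType application/javascript "access plus 1 year"
</IfModule>
```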