Or, if you like, the nginx-Varnish-nginx sandwich.
This is, admittedly, a bit unorthodox. But here’s my rationale:
Since time immemorial (i.e. more than a couple of years in Internet time), we at Reveal IT have been deploying Varnish in front of the web sites we build for our customers, with the dual purpose of a faster (and thus better) user experience and conservation of server resources.
In our previous Varnish setups, only standard HTTP was passed through Varnish, while HTTPS traffic was passed directly to nginx. However, we’re seeing increasing demand for TLS/SSL, and more sites are going HTTPS-only.
Varnish itself does not support TLS (and with good reason), so we need another program to provide the secure connection. I’ve tried a couple of commonly recommended TLS/SSL terminators, namely Pound and stud, but I’ve yet to succeed in getting either to work on my server setup. And since I didn’t feel like spending an entire workday getting to know either tool well enough to deploy it with confidence, I was reminded of the old quote
I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
In this case, I have an excellent hammer, nginx, so I decided to treat this problem like a nail. Here’s my current setup:
This is all running on a single machine, “rajka”, with Varnish listening on port 80 and passing uncached (or uncacheable) requests to nginx on port 8080.
Now to the interesting parts. I’ve set up an nginx virtual host for the TLS terminator work:
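A minimal sketch of what such a terminator vhost could look like; the certificate paths and server name are placeholders, and the port layout matches the setup described above:

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder -- use your own hostname

    # Placeholder certificate paths
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Hand the decrypted traffic to Varnish on port 80
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;

        # Tell the application the original request was HTTPS
        proxy_set_header X-Forwarded-Proto https;

        # The real client address; Varnish will trust this header
        # because the request arrives from 127.0.0.1
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```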
While the nginx virtual host should be pretty self-explanatory, the Varnish configuration is a bit more tricky. I suggest you take the time to review our entire varnishconf, but I have extracted the most relevant parts here:
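At minimum, the backend definition points Varnish at the “real” nginx web server; a sketch, assuming the port layout described above:

```vcl
# Varnish passes cache misses on to nginx on port 8080
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```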
The main issue here is the X-Forwarded-For header, which is used by the web application to determine the IP address of the actual client, not any intermediary proxies. Since the X-Forwarded-For header can be used for IP address spoofing, it is important to configure it carefully. For your web application to get the correct IP address for the client user, it needs to be configured to trust the X-Forwarded-For header for requests coming from the Varnish server. Most web applications have built-in support for this.
However, this means that we need to be completely sure that we do not pass on malicious X-Forwarded-For headers from the client. The easiest way to accomplish this is to simply set the header yourself. But in this case, we have two levels of proxying. So nginx always sets the X-Forwarded-For header to the client’s IP address. Varnish normally does the same, but if, and only if, the request is coming from 127.0.0.1 (the IP of the TLS terminator), we use the value it provided instead.
Now, this is not strictly compliant with how X-Forwarded-For is supposed to be used (we should actually append the IP address of the TLS terminator to its value), but since Drupal uses the right-most (i.e. last added) IP address, that would not actually work in our case.
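In VCL (Varnish 3 syntax), the logic described above could look roughly like this; the ACL name is illustrative:

```vcl
# Only the local TLS terminator is allowed to supply
# an X-Forwarded-For header that we trust.
acl terminator {
    "127.0.0.1";
}

sub vcl_recv {
    if (client.ip ~ terminator) {
        # The request came through the nginx TLS terminator,
        # which has already set X-Forwarded-For correctly,
        # so leave it alone.
    } else {
        # Plain-HTTP request straight from the client:
        # overwrite whatever header the client may have sent.
        set req.http.X-Forwarded-For = client.ip;
    }
}
```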
Lastly, by including X-Forwarded-Proto in the vcl_hash function, we ensure that HTTP and HTTPS requests are cached separately, so a user visiting the site via HTTPS will get pages where the links are also HTTPS. This does reduce the efficiency of the cache, since it keeps two copies of everything, including images and other static files that are not protocol-sensitive.
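A vcl_hash along these lines (the standard default hash plus the protocol) achieves this separation:

```vcl
sub vcl_hash {
    # The usual hash inputs: URL plus Host header (or server IP)
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    # Also hash on X-Forwarded-Proto, so the HTTP and HTTPS
    # variants of each page are cached as separate objects
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
    return (hash);
}
```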
Fixing that issue is left as an exercise to the reader.
Though I’ve had this idea for a while, I’ve only had a working, in-production implementation of it for about 8 hours. It may yet turn out that this was a horrible idea, but so far it’s working great.