In this tutorial, we will cover how to use Varnish Cache 4.0 to improve the performance of your existing web server. We will also show you a way to add HTTPS support to Varnish, with Nginx performing the SSL termination. We will assume that you already have a web application server set up, and we will use a generic LAMP (Linux, Apache, MySQL, PHP) server as our starting point.
Varnish Cache is a caching HTTP reverse proxy, or HTTP accelerator, which reduces the time it takes to serve content to a user. The main technique it uses is caching responses from a web or application server in memory, so future requests for the same content can be served without having to retrieve it from the web server. Performance can be improved greatly in a variety of environments, and it is especially useful when you have content-heavy dynamic web applications. Varnish was built with caching as its primary feature but it also has other uses, such as reverse proxy load balancing.
In many cases, Varnish works well with its defaults but keep in mind that it must be tuned to improve performance with certain applications, especially ones that use cookies. In depth tuning of Varnish is outside of the scope of this tutorial.
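Just to illustrate the kind of adjustment involved (this is only a sketch, not a recommended production config, and the extension list is an assumption), a common tweak is to strip cookies from static-asset requests so Varnish will cache them:

```
# Sketch only: remove cookies for static files so they become cacheable.
# Adjust the extension list to match your application.
sub vcl_recv {
    if (req.url ~ "\.(png|gif|jpg|css|js|woff)$") {
        unset req.http.Cookie;
    }
}
```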
In this tutorial, we assume that you already have a web application server that is listening on HTTP (port 80) on its private IP address. If you do not already have a web server set up, use the following link to set up your own LAMP stack: How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 14.04. We will refer to this server as LAMP_VPS.
You will need to create a new Ubuntu 14.04 VPS which will be used for your Varnish installation. Create a non-root user with sudo permissions by completing steps 1-4 in the initial server setup for Ubuntu 14.04 guide. We will refer to this server as Varnish_VPS.
Keep in mind that the Varnish server will be receiving user requests and should be adequately sized for the amount of traffic you expect to receive.
Our goal is to set up Varnish Cache in front of our web application server, so requests can be served quickly and efficiently. After the caching is set up, we will show you how to add HTTPS support to Varnish, by utilizing Nginx to handle incoming SSL requests. After your setup is complete, both your HTTP and HTTPS traffic will see the performance benefits of caching.
Now that you have the prerequisites set up, and you know what you are trying to build, let’s get started!
The recommended way to get the latest release of Varnish 4.0 is to install the package available through the official repository.
Ubuntu 14.04 comes with apt-transport-https, but run the following command on Varnish_VPS to be sure:
sudo apt-get install apt-transport-https
Now add the Varnish GPG key to apt:
curl https://repo.varnish-cache.org/ubuntu/GPG-key.txt | sudo apt-key add -
Then add the Varnish 4.0 repository to your list of apt sources:
sudo sh -c 'echo "deb https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0" >> /etc/apt/sources.list.d/varnish-cache.list'
Finally, update apt-get and install Varnish with the following commands:
sudo apt-get update
sudo apt-get install varnish
By default, Varnish is configured to listen on port 6081 and expects your web server to be on the same server, listening on port 8080. Open a browser and go to port 6081 of your server (replace the highlighted part with your public IP address or domain):
http://varnish_VPS_public_IP:6081
Because we installed Varnish on a new VPS, visiting port 6081 on your server’s public IP address or domain name will return a 503 error page.
This indicates that Varnish is installed and running, but it can’t find the web server that it is supposed to be caching. Let’s configure it to use our web server as a backend now.
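You can also confirm this state from the command line by looking at the status line of the response. The block below simulates the headers Varnish sends when it cannot reach a backend, so the check is self-contained; on your server the real command would be something like curl -sI http://varnish_VPS_public_IP:6081 (the file name response.txt is made up for illustration):

```shell
# Simulated response headers, as Varnish returns them with no working backend.
# A real check would be: curl -sI http://localhost:6081 -o response.txt
printf 'HTTP/1.1 503 Backend fetch failed\r\nServer: Varnish\r\n' > response.txt
head -n 1 response.txt
```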
First, we will configure Varnish to use our LAMP_VPS as a backend.
The Varnish configuration file is located at /etc/varnish/default.vcl. Let’s edit it now:
sudo vi /etc/varnish/default.vcl
Find the following lines:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
Change the values of host and port to match your LAMP server’s private IP address and listening port, respectively. Note that we are assuming your web application is listening on its private IP address and port 80. If this is not the case, modify the configuration to match your needs:
backend default {
    .host = "LAMP_VPS_private_IP";
    .port = "80";
}
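Optionally, Varnish can also actively check whether the backend is up by attaching a health probe to it, which pairs well with the grace mode configured next. The probe values below are illustrative defaults, not tutorial requirements:

```
backend default {
    .host = "LAMP_VPS_private_IP";
    .port = "80";
    .probe = {
        .url = "/";          # path Varnish polls for health checks
        .interval = 5s;      # poll every 5 seconds
        .timeout = 1s;
        .window = 5;         # consider the last 5 polls...
        .threshold = 3;      # ...healthy if at least 3 succeeded
    }
}
```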
Varnish has a feature called “grace mode” that, when enabled, instructs Varnish to serve a cached copy of requested pages if your web server backend goes down and becomes unavailable. Let’s enable that now. Find the sub vcl_backend_response block, and add the following highlighted lines to it:
sub vcl_backend_response {
    set beresp.ttl = 10s;
    set beresp.grace = 1h;
}
This sets the grace period of cached pages to one hour, meaning Varnish will continue to serve cached pages for up to an hour if it can’t reach your web server to look for a fresh copy. This can be handy if your application server goes down and you prefer that stale content is served to users instead of an error page (like the 503 error that we’ve seen previously), while you bring your web server back up.
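For reference, Varnish 4.0 also lets you make this behavior explicit in vcl_hit: serve only a short grace window while the backend is healthy, and the full grace period when it is down. This is a hedged sketch of a common pattern, not part of the required setup; it assumes import std; at the top of default.vcl and a 10-second healthy-grace window:

```
import std;

sub vcl_hit {
    if (obj.ttl >= 0s) {
        return (deliver);              # object is still fresh
    }
    if (std.healthy(req.backend_hint)) {
        if (obj.ttl + 10s > 0s) {
            return (deliver);          # short grace while backend is up
        }
    } else if (obj.ttl + obj.grace > 0s) {
        return (deliver);              # full grace while backend is down
    }
    return (fetch);                    # otherwise fetch a fresh copy
}
```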
Save and exit the default.vcl file.
We will want to set Varnish to listen on the default HTTP port (80), so your users will be able to access your site without adding an unusual port number to the URL. This can be set in the /etc/default/varnish file. Let’s edit it now:
sudo vi /etc/default/varnish
You will see a lot of lines, but most of them are commented out. Find the following DAEMON_OPTS line (it should be uncommented already):
DAEMON_OPTS="-a :6081 \
The -a option assigns the address and port that Varnish will listen on for requests. Let’s change it to listen on the default HTTP port, port 80. After your modification, it should look like this:
DAEMON_OPTS="-a :80 \
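For context, the surrounding lines of that DAEMON_OPTS block typically look like the following on Ubuntu 14.04. The management port, secret file, and 256m cache size shown here are the package defaults as we recall them; verify them against your own file rather than copying this verbatim:

```
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
```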
Save and exit.
Now restart Varnish to put the changes into effect:
sudo service varnish restart
Now test it out with a web browser, by visiting your Varnish server by its public IP address, on port 80 (HTTP) this time:
http://varnish_VPS_public_IP
You should see the same thing that is served from your LAMP_VPS. In our case, it’s just a plain Apache2 Ubuntu page.
At this point, Varnish is caching our application server, and hopefully you will see performance benefits in the form of decreased response time. If you had a domain name pointing to your existing application server, you may change its DNS entry to point to your Varnish_VPS_public_IP.
Now that we have the basic caching set up, let’s add SSL support with Nginx!
Varnish does not support SSL termination natively, so we will install Nginx for the sole purpose of handling HTTPS traffic. We will cover the steps to install and configure Nginx with a self-signed SSL certificate, and reverse proxy traffic from an HTTPS connection to Varnish over HTTP.
If you would like a more detailed explanation of setting up a self-signed SSL certificate with Nginx, refer to this link: SSL with Nginx for Ubuntu. If you want to try out a certificate from StartSSL, here is a tutorial that covers that.
Let’s install Nginx.
On Varnish_VPS, let’s install Nginx with the following apt command:
sudo apt-get install nginx
After the installation is complete, you will notice that Nginx is not running. This is because it is configured to listen on port 80 by default, but Varnish is already using that port. This is fine because we want to listen on the default HTTPS port, port 443.
Let’s generate the SSL certificate that we will use.
On Varnish_VPS, create a directory where the SSL certificate can be placed:
sudo mkdir /etc/nginx/ssl
Generate a self-signed, 2048-bit SSL key and certificate pair:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
Make sure that you set the common name to match your domain name. This particular certificate will expire in a year.
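To verify what you generated, you can inspect a certificate’s subject and validity dates with openssl x509. The block below is self-contained for illustration: it creates a throwaway pair in the current directory with a made-up common name (example.com), then inspects it; on your server you would point the second command at /etc/nginx/ssl/nginx.crt instead:

```shell
# Generate a throwaway key/certificate pair non-interactively (-subj skips
# the prompts), then print the certificate's subject and validity window.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout test.key -out test.crt \
    -subj "/CN=example.com" 2>/dev/null
openssl x509 -in test.crt -noout -subject -dates
```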
Now that we have our certificate in place, let’s configure Nginx to use it.
Open the default Nginx server block configuration for editing:
sudo vi /etc/nginx/sites-enabled/default
Delete everything in the file and replace it with the following (and change the server_name to match your domain name):
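The replacement contents can be sketched as follows. This matches the certificate paths created earlier and forwards decrypted traffic to Varnish on port 80 of localhost; treat the exact set of proxy_set_header lines as an assumption and adapt server_name to your domain:

```
server {
        listen 443 ssl;

        server_name example.com;

        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;

        location / {
            proxy_pass http://127.0.0.1:80;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $host;
        }
}
```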
Save and exit. The above configuration has a few important lines that we will explain in more detail:

The proxy_pass line tells Nginx to forward incoming requests to Varnish, which is listening on port 80 of the same server (localhost). The proxy_set_header lines tell Nginx to forward information, such as the original user’s IP address, along with any user requests.
Now let’s start Nginx so our server can handle HTTPS requests.
sudo service nginx start
Now test it out with a web browser, by visiting your Varnish server by its public IP address, on port 443 (HTTPS) this time:
https://varnish_VPS_public_IP
Note: If you used a self-signed certificate, you will see a warning saying something like “The site’s security certificate is not trusted”. Since you know you just created the certificate, it is safe to proceed.
Again, you should see the same application page as before. The difference is that you are actually visiting the Nginx server, which handles the SSL encryption and forwards the unencrypted request to Varnish, which treats the request like it normally does.
If your backend web server is binding to all of its network interfaces (i.e. public and private network interfaces), you will want to modify your web server configuration so it is only listening on its private interface. This is to prevent users from accessing your backend web server directly via its public IP address, which would bypass your Varnish Cache.
In Apache or Nginx, this involves setting the listen directives to bind to the private IP address of your backend server.
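As a sketch (LAMP_VPS_private_IP is a placeholder, and the file locations are typical Ubuntu defaults that may differ on your system), the change looks like this in each server:

```
# Apache: in /etc/apache2/ports.conf, bind only to the private interface
Listen LAMP_VPS_private_IP:80

# Nginx equivalent, inside the relevant server block
server {
    listen LAMP_VPS_private_IP:80;
    # ...
}
```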
If you are having trouble getting Varnish to serve your pages properly, here are a few commands that will help you see what Varnish is doing behind the scenes.
If you want to get an idea of how well your cache is performing, take a look at the varnishstat command. Run it like this:
varnishstat
You will see a screen that looks like the following:
A large variety of stats will come up, and using the up/down arrows to scroll will show you a short description of each item. The cache_hit stat shows how many requests were served with a cached result; you want this number to be as close to the total number of client requests (client_req) as possible.
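As a rough illustration of the ratio you are watching, here is a self-contained calculation over simulated varnishstat -1 output. The counter values are made up for the example; on a real server you would substitute the output of varnishstat -1 itself:

```shell
# Simulated output of `varnishstat -1`; on a live server, replace the
# variable with: varnishstat_output=$(varnishstat -1)
varnishstat_output='MAIN.client_req 2000
MAIN.cache_hit 1500'
hits=$(printf '%s\n' "$varnishstat_output" | awk '/MAIN.cache_hit/ {print $2}')
reqs=$(printf '%s\n' "$varnishstat_output" | awk '/MAIN.client_req/ {print $2}')
echo "hit rate: $((100 * hits / reqs))%"
```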
Press q to quit.
If you want a detailed view of how Varnish handles each individual request, in the form of a streaming log, use the varnishlog command. Run it like this:
varnishlog
Once it is running, try to access your Varnish server via a web browser. For each request you send to Varnish, you will see detailed output that can be used to help troubleshoot and tune your Varnish configuration.
Press CTRL + C to quit.
Now that your web server has a Varnish Cache server in front of it, you will see improved performance in most cases. Remember that Varnish is very powerful and tuneable, and it may require additional tweaks to get the full benefit from it.
Why do we need both Nginx and Varnish? Why not Nginx alone? (By the way, does it have grace mode?)
Just a note – if you’re upgrading from Varnish 3.x to Varnish 4.x, be prepared for problems. I ran into a variety of VCL issues and didn’t have time to troubleshoot further – just had to roll back for now.
Not related to the article, but just FYI.
@pedrogomes you easily could just use caching in Nginx also, no one is saying that’s not an option. :)
I am getting the below exception after installing and making the changes:
Take a look at the Varnish config file default.vcl and try this suggestion: http://serverfault.com/a/305259
The above issue is resolved; the issue was in this line of the post.

It should be like this:

But after making this change I am getting:
If you’re having a hard time trying to get Apache to listen on a different port, check to see if the following file exists:
In that file, you’ll probably find:
That’s where you’ll need to make your change.
@subodhcjoshi82 – sounds like you have something else listening on the port Varnish is trying to use.
Check the output of netstat -tplanet | grep :80 – ideally you’ll want to see something like this:
tcp    IPV4:80   0.0.0.0:*   LISTEN   0   1585/varnishd
tcp6   IPV6:80   :::*        LISTEN   0   1585/varnishd
I suspect you’ll see your web server is still listening on port 80 though – therefore Varnish can’t bind. So instead you’ll see something like:
tcp IPV4:80 0.0.0.0:* LISTEN 0 15878/nginx: worker
This might be Apache if you’re using that instead of Nginx.
@xxdesmus Yes, port 80 is used by my web server, Apache Tomcat, so I do not need to add a port number to the URL. By default it was used by Apache, but I made changes and now Apache is working on port 8079, and I made changes in /etc/varnish/default.vcl like below:
But still the same issue is coming when I am firing this command:

Getting the same message:
Right, but what port are you expecting Varnish to listen on? 80? If so, you’ll need to move Apache Tomcat to a different port. Otherwise you’ll need to specify a different port for Varnish to listen on via DAEMON_OPTS="-a :80 \ … it should be some other port if you don’t expect it to be listening on 80; by default Varnish would be listening on port 6081 as the guide indicates.
It’d help if you could provide more background about where you expect Varnish to be listening vs. how your Varnish config is set up right now.
Tomcat is running on port 80 because it is quite easy for me to set up a domain with my application that way: now when someone enters the domain name, my application opens by default. Previously it was not working as expected, so I made changes in the Tomcat config files and Tomcat started using port 80, and Apache is listening on 8079. Then I saw your post that we can boost our web application with this cache, so I installed Varnish in my droplet as you mentioned in these lines.

It looks like I got confused because of these lines: “so your users will be able to access your site without adding an unusual port number to your URL”.

So, as I already mentioned, port 80 is used by Tomcat and I want to use Varnish as well. What changes do I have to make so its cache will work, and so that when Tomcat is down, Varnish will serve?
If you want user -> Varnish -> Tomcat -> Apache, you’ll need to move Tomcat to some other port (let’s say 8080) and then move Varnish to port 80.

I believe you will need to have Varnish in front of everything to really get the benefit out of it – but someone can correct me if I’m wrong.
Hi, great article. Works perfectly! One big question: how do I modify my web server configuration so it is only listening on its private interface? Thank you!
You mean Nginx? You’d do something like this:
This way you’ll have nginx listening on port 8080 on localhost, and as long as you tell Varnish that the backend is listening on localhost:8080 you’ll be all set. So your Varnish VCL should look like (as the article mentions):
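Sketching that VCL (the host and port here assume the localhost:8080 arrangement described above):

```
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```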
Technically it works, but there are performance problems. These are my tests with Apache Bench:
HTTP: Nginx(8080) = 875 req/s
HTTP: Varnish(80) -> Nginx(8080) = 2484 req/s

HTTPS: Nginx(443) -> Varnish(80) -> Nginx(8080) = 154 req/s
HTTPS: Nginx(443) = 198 req/s
With Varnish there is no cache_miss, only cache_hit, but performance is very low with HTTPS.

Do you think the SSL termination is responsible for this performance drop?
The SSL handshake takes some time. Longer requests equal fewer requests per second. You can try tweaking the default SSL cipher as described in this blog post.
Many thanks, uncovering this trick allowed me to double the number of requests per second. I’ll have to accept that the SSL handshake will necessarily impact the overall performance of the server. I’ll continue to tweak this SSL layer, thanks ;)
I end up with this error:

Running VCC-compiler failed, exited with 2
VCL compilation failed

What should I do?
root@amiga:~# sudo service varnish restart
Hi
As I understand it, this tutorial advises how to set up two separate VPS servers. While VARNISH_VPS works on a separate IP as a dedicated server dealing with visitor requests, the other, LAMP_VPS, handles requests coming from VARNISH_VPS.

Can we set up a single server where the requests from visitors are handled as shown below?
Visitor > Nginx > SSL Termination > Varnish > Apache
– Regards Saurabh
[UPDATE] I created a test droplet and tried to figure out the configuration by myself :)
Thanks to @DigitalOcean for their product pricing.
If someone is looking for this type of config, #AskMe :-)
Hello,
This is precisely what I need, I have a single ubuntu 16.04 server hosting my websites.
Could you provide me with the instructions on how to do it?
Best regards, Gabriel
Hi, thank you very much for this tutorial! It’s very clear.
But I’ve got a problem with “vcl_backend_response”.

It did not exist in the default default.vcl file, so I added it at the end of the file.

Removing it, Varnish works fine.

I’m using Ubuntu Server 14.04, Varnish 4.
The error output when restarting varnish:
I don’t know why, but it’s not Varnish 4 that gets installed but Varnish 3.0.5, even though “deb https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0” is set properly in the sources.

Moreover, if I change the source to varnish-3.0, it’s updated to 3.0.5. Lol.
@manicas – What app did you use to make the diagrams in your tutorial (i.e. https://assets.digitalocean.com/articles/varnish/goal.png)? They’re pretty nifty looking.
Thanks! Adobe Illustrator
Hi @manicas
After successfully configuring the setup on Ubuntu 12.04.5 [Visitor > Nginx > SSL Termination > Varnish 3.0 > Apache], I installed WHMCS and encountered an error: “The page isn’t redirecting properly”.
After searching online I followed those steps.
So I added following line in my concerned server block:
And then I added following line in my .htaccess file:
It went well, stopped redirecting and I could access my admin panel over https.
Now while logging into the WHMCS admin panel, I found that the IP tracking shows my IP as 127.0.0.1 on the login page, even though, as per your tutorial, I have added the following lines to my server block:

Yet it seems the IP is not getting passed on, and WHMCS is picking up the proxy address 127.0.0.1 coming from port 80 through Varnish.

Can you please advise how to configure the server so that the real IP is passed on and understood by Apache/WHMCS?
PS: I have tried but failed to implement Real IP Module
– Regards Saurabh
[update] I checked PHP info in my WHMCS and found the following:
Apache Environment
Maybe this could help to investigate the matter…
After searching left and right, I ultimately landed on answers from WHMCS Support and the CloudFlare module.

Although my CloudFlare module is still not working, I have temporarily been able to resolve the real IP issue by adding all CloudFlare IPs to the trusted proxy list in the WHMCS security settings.

Still no idea why mod_cloudflare is not able to supply the correct IP to the Apache server :(

Can someone throw some light on mod_cloudflare or a known answer to this issue? I am afraid that my installation of WordPress (yet to be done) will also meet the same fate as WHMCS and I would be running here and there for answers :(
Thanks for this tutorial! I’m wondering how this would work when Apache is not in the picture. I’ve got a LEMP stack using SSL, so nginx is redirecting all port 80 traffic to port 443. I’m interested in putting Varnish on the server to increase my ability to handle additional traffic, but it looks like it’s a whole different proposition given that we’re using a full-time SSL setup…
If you’re hosting a WordPress site without SSL, or want to disable SSL now that you’re proxying the SSL request, don’t forget to check for HTTP_X_FORWARDED_PROTO in your wp-config.php file to prevent a redirect loop:

if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') $_SERVER['HTTPS']='on';

See Administration Over SSL on codex.wordpress.org for more information.
I am getting the following error… any clues?

I have a LAMP stack on Ubuntu 12.04, trying to speed up my WordPress :-|
If you want to skip Apache and use only Nginx with Varnish and SSL support, refer to my cheat sheet: http://dev.kafol.net/2015/05/ubuntu-nginx-php5-fpm-mariadb-varnish.html
Anyone have a suggestion as to how to get phpMyAdmin working with this setup?
I have a stock configuration following this tutorial, with phpmyadmin running on my apache backend content server.
On varnish_vps I added this line to my varnish.conf:
When I visit mysite.com/phpmyadmin, I get the login screen, but after I login I get the following error:
phpMyAdmin - Error
Cannot start session without errors, please check errors given in your PHP and/or webserver log file and configure your PHP installation properly. Also ensure that cookies are enabled in your browser.
I was unable to find any errors in php5-fpm.log
Great tutorial. I’m having issues with an infinite 302 loop when accessing wp-admin. I think the problem is that the WordPress backend (PHP) thinks the URL is HTTP, but it expects HTTPS. Of course, the URL I go to is HTTPS, which is terminated at Nginx. Any ideas?
I’m experiencing the same issue, WP-Admin redirects to wp-login on the wrong port.
I followed the tutorial, but when I try to reach my site over HTTPS I receive this error:

Error 503 Service Unavailable
Service Unavailable
Guru Meditation:
XID: 1599316674
How to solve this?
Cheers, Jaap
Hi, how can I set up redirection for non-SSL requests? I describe it here: http://stackoverflow.com/questions/37545873/apache-rewrite-http-to-https-for-non-local-connections

Here is an example .htaccess in Apache:

RewriteCond %{REMOTE_ADDR} !=127.0.0.1
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} ^(?:www.)?(.*)$ [NC]
RewriteRule (.*) https://%1%{REQUEST_URI} [L,R=301]

This does not work, because I have:

http -> Varnish -> Apache, or https -> Nginx -> Varnish -> Apache

so Apache always sees 127.0.0.1, and I need to know whether the request came in directly over HTTP through Varnish, or over HTTPS through Nginx and then Varnish. How can I do this?
How would the second part of this look if I have let’s encrypt? (following this tutorial https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-14-04 )
It’s working … mostly?
Page requests are coming through SSL just fine, but resources ON the page are not (absolute URLs). In some cases, resources aren’t loading (some images and a webfont).

Is there a way (without forcing everything to redirect to HTTPS) to have on-page references to the same server use https if and only if the original page request does?
How is the performance of this setup? It looks like the traffic is routed through a lot of big software, so I have doubts about how the performance turns out.
It seems a bit outdated. Will it work with the current Varnish 5.1 and PHP 7?
As good as this tutorial is (and I have to admit I do have servers with a very similar setup; I prefer to use the same Apache twice, on 443 and 8080), if you really want a good setup for this, use a load balancer so you don’t need the front Nginx at all.
The Age header is 0 in the response headers. Why? Please help me!
If anyone has trouble getting varnish to listen on port 80, check this out:
https://mail.queryxchange.com/q/1_824389/varnish-daemon-not-listening-on-configured-port/
How do I configure this for multiple domains? It only works for a single domain now.
Awesome guide! Just a couple of questions I hope can be cleared up…
I have a VPS with 3 IP addresses and many websites.
How do I configure server_name example.com; to work with all websites on the server?

How do I configure
Hi Mitchell,
I have the exact same configuration, and it had been working well with all the GET requests. But for POST I am getting the following error in Nginx:
recv() failed (104: Connection reset by peer) while reading response header from upstream
Did you happen to come across a similar issue? If yes, what’s the solution?
Is it possible to set up an SSL certificate from Let’s Encrypt with Varnish and Nginx?
Hi,
Total noob question: I already have a full wildcard SSL certificate installed on my site. Do I need a separate certificate for this?
Thanks!
How would this work if I have a LEMP stack? I am not using Apache. Please advise!