
How To Set Up Nginx Load Balancing

Published on August 27, 2012

About Load Balancing

Load balancing is a useful mechanism for distributing incoming traffic among several capable virtual private servers. By apportioning the processing across several machines, redundancy is provided to the application, ensuring fault tolerance and heightened stability. The round robin algorithm for load balancing sends visitors to one of a set of IPs. At its most basic level, round robin, which is fairly easy to implement, distributes server load without considering more nuanced factors like server response time or the visitors' geographic region.
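The round robin idea itself can be sketched in a few lines of Python (purely illustrative; nginx implements this internally):

```python
from itertools import cycle

# Hypothetical backend names, mirroring the examples later in this article.
backends = ["backend1.example.com", "backend2.example.com", "backend3.example.com"]
pool = cycle(backends)

# Each incoming request is handed to the next server in the rotation,
# so traffic spreads evenly across the pool.
first_six = [next(pool) for _ in range(6)]
```

After six requests, each backend has been visited exactly twice.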

Setup

The steps in this tutorial require root privileges on your VPS. You can see how to set that up in the Users Tutorial.

Prior to setting up nginx load balancing, you should have nginx installed on your VPS. You can install it quickly with apt-get:

sudo apt-get install nginx

Upstream Module

In order to set up a round robin load balancer, we will need to use the nginx upstream module. We will incorporate the configuration into the nginx settings.

Go ahead and open up your website’s configuration (in my examples I will just work off of the generic default virtual host):

sudo nano /etc/nginx/sites-available/default

We need to add the load balancing configuration to the file.

First, we need to include the upstream module, which looks like this:

upstream backend  {
  server backend1.example.com;
  server backend2.example.com;
  server backend3.example.com;
}

We should then reference the module further on in the configuration:

server {
  location / {
    proxy_pass http://backend;
  }
}
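Putting the two pieces together, the relevant portion of the file could look like the sketch below (the listen directive is an assumption for illustration; your actual default file will contain additional settings):

```nginx
upstream backend {
  server backend1.example.com;
  server backend2.example.com;
  server backend3.example.com;
}

server {
  # assumed here; adjust to match your existing server block
  listen 80;

  location / {
    proxy_pass http://backend;
  }
}
```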

Restart nginx:

sudo service nginx restart

As long as you have all of the virtual private servers in place, you should now find that the load balancer begins to distribute visitors to the linked servers equally.

Directives

The previous section covered how to equally distribute load across several virtual servers. However, there are many reasons why this may not be the most efficient way to work with data. There are several directives that we can use to direct site visitors more effectively.

Weight

One way to begin to allocate users to servers with more precision is to allocate specific weight to certain machines. Nginx allows us to assign a number specifying the proportion of traffic that should be directed to each server.

A load balanced setup that included server weight could look like this:

upstream backend  {
  server backend1.example.com weight=1;
  server backend2.example.com weight=2;
  server backend3.example.com weight=4;
}

The default weight is 1. With a weight of 2, backend2.example.com will be sent twice as much traffic as backend1; backend3, with a weight of 4, will deal with twice as much traffic as backend2 and four times as much as backend1.
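The resulting 1:2:4 split can be illustrated with a short Python sketch (a naive expansion of the weights; nginx itself uses a smoother weighted round robin, but the proportions come out the same):

```python
from collections import Counter

# Weights matching the example configuration above.
weights = {"backend1": 1, "backend2": 2, "backend3": 4}

# Naive weighted rotation: each server appears in the cycle
# as many times as its weight.
rotation = [name for name, w in weights.items() for _ in range(w)]

# Over one full cycle of 7 requests, traffic splits 1:2:4.
counts = Counter(rotation)
```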

Hash

IP hash allows servers to respond to clients according to their IP address, sending visitors back to the same VPS each time they visit (unless that server is down). If a server is known to be inactive, it should be marked as down. All IPs that were supposed to be routed to the down server are then directed to an alternate one.

The configuration below provides an example:

upstream backend {
  ip_hash;
  server   backend1.example.com;
  server   backend2.example.com;
  server   backend3.example.com  down;
 }
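As an illustration of the idea (not nginx's actual hash, which for IPv4 is based on the first three octets of the client address), a Python sketch might look like this:

```python
import hashlib

# backend3 is marked down, so only two servers remain in the active pool.
servers = ["backend1.example.com", "backend2.example.com"]

def pick_server(client_ip):
    # Hash the client address and map it onto the active pool.
    # The effect mirrors ip_hash: a given IP always lands on
    # the same server as long as the pool doesn't change.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Calling pick_server with the same address repeatedly always returns the same backend.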

Max Fails

According to the default round robin settings, nginx will continue to send data to a virtual private server even if the server is not responding. Max fails can automatically prevent this by taking unresponsive servers out of rotation for a set amount of time.

There are two factors associated with max fails: max_fails and fail_timeout. max_fails sets the maximum number of failed attempts to connect to a server that may occur before it is considered inactive. fail_timeout specifies the length of time that the server is considered inoperative. Once the timeout expires, new attempts to reach the server will begin again. The default timeout value is 10 seconds.

A sample configuration might look like this:

upstream backend {
  server backend1.example.com max_fails=3 fail_timeout=15s;
  server backend2.example.com weight=2;
  server backend3.example.com weight=4;
}

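The interplay between the two directives can be modeled in a few lines of Python (a toy sketch of the bookkeeping, not nginx's actual implementation; the parameter names mirror the directives above):

```python
class Backend:
    """Toy model of nginx's max_fails / fail_timeout bookkeeping."""

    def __init__(self, max_fails=3, fail_timeout=15.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def record_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            # Take the server out of rotation for fail_timeout seconds.
            self.down_until = now + self.fail_timeout
            self.fails = 0

    def available(self, now):
        # The server is retried once the timeout has expired.
        return now >= self.down_until

b = Backend(max_fails=3, fail_timeout=15.0)
for _ in range(3):
    b.record_failure(now=0.0)
```

After three failures at time 0, the server sits out for 15 seconds and then becomes eligible again.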
See More

This has been a short overview of simple round robin load balancing. Beyond this, there are other ways to speed up and optimize a server.

By Etel Sverdlov

45 Comments

Do I have to copy the same website files to the different backend servers? If yes, do I have to make all the apache servers read from the same mysql server?

it seems to only cover http://

proxy_pass http://backend;

how about dealing with https?

@marwan, this article is for nginx and not apache. To answer your questions: yes, you do have to keep the same set of files on each of the servers, and yes, have a database (only) server sitting behind them.

thanks, really helped me

Self answer

To deal with https that should probably look something like: proxy_pass $scheme://backend;

Not tested.

How do I manage user sessions across all servers?

anil.virtuali: memcache sessions.

contato: redis as a session keeper to get more persistent.

marwan: you can use NFS to keep them sync.

So simple, so useful. Thank you!

For those asking about sessions, storing them in a central location is ideal: mysql, mongodb, memcache, etc. If (for whatever reason) you need to store them locally, I suspect the “ip_hash” directive mentioned above should keep them working (as it keeps your visitors tied to one machine).

How do we make sure the load balancer never fails? For example in case of an outage on DigitalOcean, or any problem with the droplet nginx runs on.

For me the main use case of a load balancer is to avoid downtime when a droplet with my web server fails. More droplets, more redundancy. But now nginx will be the single point of failure, am I wrong?

Kamal Nasser (DigitalOcean), November 27, 2013

@Andrei: Since we do not support floating IPs, that is correct. However you can make use of the round robin DNS feature so that if one load balancer fails, visitors will get redirected to the one that is still up.

@Kamal: I think if one of the servers went down RR DNS would still route traffic to it, so half of the http/s requests would still fail.

Given that the load balancer will generally be a smaller droplet than the application servers, will it inherit their transfer allowance?

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Queue-Start "t=${msec}000";

These should be added so that all of the data is sent forward for proxy_pass.

Kamal Nasser (DigitalOcean), January 20, 2014

@John: Unfortunately, no. We do not support bandwidth pooling.

@Kamal, too bad. I know you know this already, but this is a reminder that DigitalOcean really needs a load balancer droplet type.

Kamal Nasser (DigitalOcean), January 22, 2014

@John: Load balancers are on the roadmap :) You can follow the progress on that here: http://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/2670745-load-balancer There's no ETA currently, though.

This tutorial was far easier than I thought! Best feeling in the world when I refreshed and I saw which droplet was serving my page. Thanks!

I have followed every step of this tutorial but hit the error: "The page isn't redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete. This problem can sometimes be caused by disabling or refusing to accept cookies." In my server block, under the location section, I have: location / { try_files $uri $uri/ /index.php?q=$uri&$args; }. Should I remove this line? I have tried both, but the result is the same. My server configuration is nginx, php5-fpm, mysql, wordpress, and varnish. I also tried stopping varnish, but no change.

Kamal Nasser (DigitalOcean), January 31, 2014

@diegodacm: That line shouldn’t cause that. Do you have any redirect/rewrite rules? Can you pastebin all of your virtualhosts?

Excellent tutorial.

How big does this machine have to be? Or how do you size it?

I don't understand it very well… How would my backend machines keep their data synchronized? I mean, there would be a load balancer and it would redirect the traffic to the machines, but if one machine gets some files modified, how would the other one get the same modification?

Andrew SB (DigitalOcean), June 10, 2014

@lucasmx: The actual example here is extremely basic. It’s simply showing how to use Nginx as a load balancer. Unless you were just serving static content, you’d need to set up database replication. For a more realistic example, you might want to check out this article showing how to scale out a Wordpress installation with multiple front end and database servers:

https://www.digitalocean.com/community/tutorials/how-to-optimize-wordpress-performance-with-mysql-replication-on-ubuntu-14-04

nice tut. Thanks.

Here is an awkward question: if I have several server block configs, do I have to add the load balancer to each, or will only one do using this code: <pre> upstream backend { } </pre> Should I just include that in nginx.conf instead?

Andrew SB (DigitalOcean), June 19, 2014

@mshq20022001: What are you using the different server blocks for? Are they subdomains? Most likely, you’ll need separate upstream directives for each server block.

After setting up a load balancer with Nginx and three backend hosts (also running Nginx), my website runs much slower than using 1 VPS :(. Do you have any idea why that is? I tested my load balancer on tools.pingdom.com and realized that the wait time for each request has increased dramatically.

What do you think could be wrong here?

Thanks in advance,

Kamal Nasser (DigitalOcean), August 15, 2014

@chien.study: Are all droplets located in the same datacenter? Can you post the results of a ping from the load balancer to the web servers/backend hosts?

Etel Sverdlov – Good Tutorial, but you should give credit to http://nginx.org/en/docs/http/load_balancing.html since 99% of your material is from the original documentation.

Does this mean we should have more than one server? And what is the purpose of 'backend' and 'http://backend'? Is it meant to be the domain/subdomain?

So when we have this configuration, does this mean that we have to have other droplets that have nginx servers on it?

Hey guys, help me to understand better…if I set up a load balancer with Nginx such as in this tutorial, can I have the other backend hosts in apache?

I followed this tutorial but a blank page appears. Do I have to configure the backend servers or make any other configurations? I tested with only one backend server. Here is my nginx config in the http context:

upstream backend {
  server s1.my_domain.us:80;
}

server {
  listen 80;
  server_name my_demo_domain.com;
  location / {
    proxy_pass http://backend;
  }
}

Please give me some advice,

Thanks.

I absolutely love your tutorials! Thank you very much. Your technical writing skills are by far the best of the best.

Thank you for the article. Just a minor note: the article mentions “fall_timeout” twice. The code instead correctly uses “fail_timeout”.

Take care, Martin

How do I set this up using EasyEngine?

Hello, really simple, short and sweet on load balancing with Nginx. Does Nginx default to the round robin algorithm when the backend module is referenced in the lines below?

location / {
  proxy_pass http://backend;
}


Hello @kamaln7, I've followed this tutorial thoroughly but am having trouble.

In short, when I pass my IP address directly into "proxy_pass", the proxy works:

server {
  location / {
    proxy_pass http://01.02.03.04;
  }
}

When I visit my proxy computer, I can see the content from the proxy IP… but when I use an upstream directive, it doesn't:

upstream backend {
  server 01.02.03.04;
}

server {
  location / {
    proxy_pass http://backend;
  }
}

When I visit my proxy computer, I am greeted with the default Nginx server page.

Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors. It just doesn't proxy.

Okay, looks like I found the answer…

Two things about the backend servers, at least for the above scenario when using IP addresses:

1. a port must be specified
2. the port cannot be :80

Backend server block(s) should be configured as follows:

server {
  # for your reverse_proxy, *do not* listen to port 80
  listen 8080;
  listen [::]:8080;

  server_name 01.02.03.04;

  # your other statements below
  ...
}

and your reverse proxy server block should be configured like below:

upstream backend {
  server 01.02.03.04:8080;
}

server {
  location / {
    proxy_pass http://backend;
  }
}

It looks like if a backend server is listening on :80, the reverse proxy server doesn't render its content. I guess that makes sense, since the server is in fact using the default port 80 for the general public.

Great tutorial, nonetheless. I know this is some sophisticated technology and our use-cases will be different. Some things you have to learn by doing.

"There are two factors associated with the max fails: max_fails and fall_timeout."

Found a typo: fall_timeout should be fail_timeout.

Excellent tutorial.

I had my doubts about whether Nginx was going to be helpful… and I was having difficulties setting it up. You solved my problem setting it up, and another tutorial helped me realize the benefits of using Nginx for load balancing.

Do we need to do the same NGINX config on the rest of the slave servers?

Is it possible to combine least_conn and max fails?

For example:

upstream backend {
  least_conn;
  server backend1.example.com max_fails=3 fail_timeout=15s;
  server backend2.example.com;
  server backend3.example.com;
}
