
How To Scale Web Applications on Ubuntu 12.10

Published on April 11, 2013

Jason Kurtz


Scaling a Web Application on Ubuntu

Scaling web applications is one of the most exciting challenges a web administrator gets to tackle. Scaling is the process by which a system administrator uses multiple servers to serve a single web application.

Most scaling involves separating your web server and your database, and adding redundant systems to each aspect.

This article will walk you through the steps to take an application from a single server to two, by adding a redundant web-server.

The two servers (which will be referred to as "Server A" and "Server B") will be your web servers and will use nginx for load balancing.

In all of the examples in this tutorial, the following server to IP map will apply:

Server A: 1.1.1.1

Server B: 2.2.2.2

Server A and Server B will be load balanced using a program called nginx. Nginx can function as a web server itself, but in our case we will use it as a load balancer in front of two backend servers, each running nginx with PHP-FPM to serve the site.

Step 1 - Configure Nginx on Server A

The following steps will result in Server A and Server B sharing the load from website traffic.

The first thing we are going to do is install nginx on Server A, to do our load balancing, along with PHP-FPM to serve our PHP pages:

sudo apt-get install nginx php5-fpm

Once it is installed, we need to configure it a bit. We need to edit /etc/nginx/sites-enabled/default and tell nginx the IP addresses and port numbers where our website will actually be hosted.

Go ahead and open that file:

sudo nano /etc/nginx/sites-enabled/default

We can do that with an upstream block. An example upstream block is shown here and explained line by line below.

upstream nodes {
        ip_hash;
        server 1.1.1.1:8080 max_fails=3 fail_timeout=30s;
        server 2.2.2.2:8080 max_fails=3 fail_timeout=30s;
}

The first line defines an upstream block and names it "nodes"; the last line closes that block.

You can create as many upstream blocks as you like, but they must be uniquely named.

The "ip_hash" directive tells nginx to route each client to the same backend server based on the client's IP address, which keeps things like PHP sessions working without shared session storage.

The two "server" lines are the important ones; they define the IP addresses and port numbers that our actual web servers are listening on. The "max_fails" and "fail_timeout" parameters tell nginx to stop sending traffic to a server for 30 seconds after 3 failed connection attempts.

Keep in mind that this IP address can be that of the same server that we are running nginx on.

Whether or not that is the case, it is recommended that you use a port other than 80.

A port other than the default HTTP port makes it harder for end-users to accidentally stumble upon any of the individual servers sitting behind the load balancer.

A firewall may also be used as a preventative measure, since all web connections to any of the servers in your upstream will originate from the IP address of the server running nginx.
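As a sketch of that preventative measure (assuming you use ufw, Ubuntu's default firewall front end; the IP address and port are this tutorial's placeholders), you could allow backend connections only from the load balancer:

```shell
# On each backend server, allow the load balancer's IP (1.1.1.1 in this
# tutorial) to reach the backend port 8080, and deny everyone else.
sudo ufw allow from 1.1.1.1 to any port 8080 proto tcp
sudo ufw deny 8080/tcp
```

Remember to keep your SSH port open before enabling ufw, or you can lock yourself out of the server.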

The next thing we have to do is configure nginx so it will respond to and forward requests for a particular hostname. We can accomplish both of these with a virtualhost block that includes a proxy_pass line.

See below for an example and an explanation.

server {
        listen   1.1.1.1:80;

        root /path/to/document/root/;
        index index.html index.htm;

        server_name domain.tld www.domain.tld;

        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }

        location / {
                proxy_pass http://nodes;
        }
}

There are a few key pieces to this configuration, including the "listen" line, the "server_name" line, and the "location" blocks.

The first two are standard configuration elements, specifying the IP address and port our web server listens on and the hostnames it responds to, respectively; it is the "location /" block that load balances our servers, by proxying requests to the "nodes" upstream we defined earlier.

Make sure to edit the document root to point to your site.

Since Server A is going to serve as both the endpoint that users will connect to and as one of the load-balanced servers, we need to make a second virtualhost block, listening on a non-standard port for incoming connections. Note that the address and port here must match the "server 1.1.1.1:8080" entry in the upstream block, or nginx will not be able to reach this backend.

server {
        listen   1.1.1.1:8080;

        root /path/to/document/root/;
        index index.html index.htm index.php;

        server_name domain.tld www.domain.tld;

        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}
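Before reloading, you can have nginx check the new configuration for syntax errors; if the test fails, the error message points at the offending file and line:

```shell
sudo nginx -t
```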

Once you're done, reload nginx:

sudo service nginx reload
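To sanity-check the setup, you can fire a few requests at the load balancer and confirm they come back with status 200. The check_lb helper below is a hypothetical sketch, not part of nginx; domain.tld and 1.1.1.1 are this tutorial's placeholder values.

```shell
# check_lb URL [N]: request URL N times (default 5) through the load
# balancer, printing the HTTP status code of each response.
check_lb() {
  url="$1"
  n="${2:-5}"
  i=0
  while [ "$i" -lt "$n" ]; do
    # -s silences progress output; -w prints only the status code.
    curl -s -o /dev/null -w "%{http_code}\n" -H "Host: domain.tld" "$url"
    i=$((i + 1))
  done
}

# Example: check_lb http://1.1.1.1/ 5
```

Because of ip_hash, repeated requests from a single client should land on the same backend; traffic from different client IPs will be spread across both servers.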

Step 2 - Configure Nginx on Server B

We need to set up a similar virtualhost block on Server B so it will also respond to requests for our domain. If you have not already, install nginx and php5-fpm on Server B as in Step 1. The block will look very similar to the second server block we have on Server A.

server {
        listen   2.2.2.2:8080; 

        root /path/to/document/root/;
        index index.html index.htm index.php;

        server_name domain.tld www.domain.tld;

        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}

Reload nginx on the second server as well:

sudo service nginx reload

That is the only configuration that we need to do on this server!

One of the drawbacks when dealing with load-balanced web servers is the possibility of data being out of sync between the servers.

One solution to this problem is to use a git repository to sync files to each server, which will be the subject of a future tutorial.
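In the meantime, here is a minimal sketch of one approach (an assumption, not this article's prescribed method): push the document root from Server A to Server B with rsync over SSH. The "deployer" user name is a placeholder, and the paths are the same placeholders used in the configs above.

```shell
# Push Server A's document root to Server B (2.2.2.2 in this tutorial).
# --delete removes files on B that no longer exist on A; run this from
# Server A after each deploy, or from a cron job.
rsync -az --delete /path/to/document/root/ deployer@2.2.2.2:/path/to/document/root/
```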

You should now have a working load-balanced configuration. As always, any feedback in the comments is welcome!


10 Comments
In Step 1, the first server block (for the load-balancer) includes a location block for php scripts. Why?

I would think you wouldn’t want the LB virtual host to run any scripts, just to pass them to one of the web server nodes.

It seems to me that load balancing would cause issues with differing PHP sessions unless they're stored in a replicating database or on a different network share. Does load balancing reduce the usefulness of PHP sessions?

So will this same process work for Ubuntu 14.04?

I had to remove the php portion to get this to work. Mine simply has

server {
        listen 80;
        server_name www.domain.com;

        return 301 https://$server_name$request_uri;
}

server {
        listen 443;

        ssl on;
        ssl_certificate /etc/ssl/nginx/server.crt;
        ssl_certificate_key /etc/ssl/nginx/server.key;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
        ssl_prefer_server_ciphers on;

        root /var/www/default;
        index index.html index.htm;

        location / {
                proxy_pass http://nodes;
        }
}

Kamal Nasser
DigitalOcean Employee
December 14, 2013

@robert:

> 1) webservers: I would like to horizontally scale them (add more droplets when resource usage exceeds 80% and remove a droplet when resource usage drops below 40%). Additionally these droplets could be located over a variety of locations, and users would be connected based on both load and location. (scaling could possibly be done per location, depending on the size of the app)

Check out https://www.digitalocean.com/community/articles/how-to-scale-your-infrastructure-with-digitalocean. You can also use Amazon Route 53 or any other DNS service that supports GeoDNS.

> 2) database server: I would like to vertically scale this one (increase resources when usage exceeds 80% and decrease resources when usage drops below 40%). Additionally I would consider a master-master sync of my database server for the sake of uptime guarantee. I would then have one Amsterdam droplet, and another USA-location droplet.

Unfortunately you won't be able to do that automatically, since we require the droplet to be powered off in order to resize it; however, you can add more masters if need be.

Hello, I am building a realtime application, for the purpose of this question think of it as a chat application.

I would like to separate the webservers from the database and "autoscale" the infrastructure:

  1. webservers: I would like to horizontally scale them (add more droplets when resource usage exceeds 80% and remove a droplet when resource usage drops below 40%)

Additionally these droplets could be located over a variety of locations, and users would be connected based on both load and location. (scaling could possibly be done per location, depending on the size of the app)

  2. database server: I would like to vertically scale this one (increase resources when usage exceeds 80% and decrease resources when usage drops below 40%)

Additionally I would consider a master-master sync of my database server for the sake of uptime guarantee. I would then have one Amsterdam droplet, and another USA-location droplet.

Any ideas or experience with autoscaling?

Kamal Nasser
DigitalOcean Employee
August 27, 2013

@ken.thul: It depends on how you set up your cluster. You should usually install apc/ufw/fail2ban on both servers and mysql/memcache on a separate server accessible by both servers.

Great tutorial, will try it out on my php/mysql application.

Right now I am running on a 32GB server, and will try to add another 8GB droplet for my database.

Right now I have Varnish/memcache/APC/MySQL/phpMyAdmin/Postfix installed.

If I add a second server, should I install Varnish/memcache/APC/MySQL/UFW/fail2ban on it as well? I will also follow this tutorial on master-master replication: https://www.digitalocean.com/community/articles/how-to-set-up-mysql-master-master-replication

Also, is there a tutorial on a cluster install for Ubuntu?

Kamal Nasser
DigitalOcean Employee
May 16, 2013

@info+digitalocean: Here is an article on setting up GlusterFS to share data between droplets: https://www.serverstack.com/blog/2013/01/25/using-glusterfs-on-your-managed-server/

This is great. Can you now show us to sync or replicate the data (and what data to sync) on both servers?
