
How To Optimize Nginx Configuration

Updated on March 27, 2020

Alex Kavon

Introduction

Nginx is a fast and lightweight alternative to the sometimes overbearing Apache 2. However, Nginx, just like any other server or piece of software, must be tuned to help attain optimal performance.

Requirements

Worker Processes and Worker Connections

The first two variables we need to tune are the worker processes and worker connections. Before we jump into each setting, we need to understand what each of these directives controls. The worker_processes directive is the sturdy spine of life for Nginx. It is responsible for letting our virtual server know how many workers to spawn once it has become bound to the proper IP and port(s). It is common practice to run 1 worker process per core. Anything above this won’t hurt your system, but it will leave idle processes just lying about.

To figure out what number you’ll need to set worker_processes to, simply take a look at the number of cores on your setup. If you’re using the DigitalOcean 512MB setup, then it’ll probably be one core. If you end up fast resizing to a larger setup, then you’ll need to check your cores again and adjust this number accordingly. We can accomplish this by grepping the cpuinfo:

grep processor /proc/cpuinfo | wc -l

Let’s say this returns a value of 1. Then that is the number of cores on our machine!
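Alternatively, the nproc utility (part of GNU coreutils on most Linux systems) prints the same count directly:

nproc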

The worker_connections directive tells our worker processes how many clients can be served simultaneously by Nginx. The default value is 768; however, considering that every browser usually opens up at least 2 connections per server, the number of clients actually served is half that. This is why we need to raise our worker connections to its full potential. We can check what our system will allow by issuing the ulimit command:

ulimit -n

On a smaller machine (512MB droplet) this number will probably read 1024, which is a good starting number.

Let’s update our config:

sudo nano /etc/nginx/nginx.conf

worker_processes 1;
worker_connections 1024;

Remember, the maximum number of clients that can be served is the worker_connections value multiplied by the number of cores (worker processes). In this case, with a single worker, we can serve 1024 simultaneous clients. This is, however, further mitigated by the keepalive_timeout directive.
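For reference, the two directives live in different contexts of nginx.conf: worker_processes sits at the top (main) level, while worker_connections belongs inside the events block. A minimal sketch of how that portion of the file might look with the values above:

worker_processes 1;

events {
    # max simultaneous clients is roughly worker_processes * worker_connections
    worker_connections 1024;
}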

Buffers

Another incredibly important tweak we can make is to the buffer size. If the buffer sizes are too low, then Nginx will have to write to a temporary file, causing the disk to read and write constantly. There are a few directives we’ll need to understand before making any decisions.

client_body_buffer_size: This handles the client buffer size, meaning any POST actions sent to Nginx. POST actions are typically form submissions.

client_header_buffer_size: Similar to the previous directive, only instead it handles the client header size. For all intents and purposes, 1K is usually a decent size for this directive.

client_max_body_size: The maximum allowed size for a client request. If the maximum size is exceeded, then Nginx will spit out a 413 (Request Entity Too Large) error.

large_client_header_buffers: The maximum number and size of buffers for large client headers.

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
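As a quick sanity check of the client_max_body_size limit, you can POST a body larger than 8m and confirm that Nginx answers with a 413; the host name below is a placeholder for your own server:

# create a ~9 MB dummy file and POST it; Nginx should reply 413 Request Entity Too Large
dd if=/dev/zero of=/tmp/body-test.bin bs=1M count=9
curl -i --data-binary @/tmp/body-test.bin http://example.com/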

Timeouts

Timeouts can also drastically improve performance.

The client_body_timeout and client_header_timeout directives are responsible for the time a server will wait for a client body or client header to be sent after a request. If a body or header is not sent within this time, the server will issue a 408 (Request Timeout) error.

The keepalive_timeout assigns the timeout for keep-alive connections with the client. Simply put, Nginx will close connections with the client after this period of time.

Finally, the send_timeout is established not for the transfer of the entire response, but only between two successive write operations; if the client takes nothing within this time, Nginx shuts down the connection.

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;

Gzip Compression

Gzip can help reduce the amount of data Nginx transfers over the network. However, be careful not to increase gzip_comp_level too high, or the server will begin wasting CPU cycles.

gzip             on;
gzip_comp_level  2;
gzip_min_length  1000;
gzip_proxied     expired no-cache no-store private auth;
gzip_types       text/plain application/x-javascript text/xml text/css application/xml;
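To confirm that compression is actually being applied, request a text asset larger than gzip_min_length with an Accept-Encoding header and look for Content-Encoding: gzip in the response headers; the URL below is a placeholder:

curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://example.com/style.css | grep -i content-encoding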

Static File Caching

It’s possible to set Expires headers for files that don’t change and are served regularly. This directive should be added inside the actual Nginx server block.

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}

Add and remove any of the file types in the list above to match the types of files your Nginx server serves.
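Note that this location block has to sit inside a server block; placing it at the top level of nginx.conf will stop Nginx with a “location” directive is not allowed here error. A minimal sketch of the placement (the server_name and root below are placeholders):

server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 365d;
    }
}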

Logging

Nginx logs every request that hits the VPS to a log file. If you use an analytics tool to monitor this instead, you may want to turn this functionality off. Simply edit the access_log directive:

access_log off;
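If you would still like access logs for dynamic pages, one possible compromise (a sketch, not from the original text) is to disable logging only for static assets by placing the directive inside a matching location block:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    access_log off;
}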

Save and close the file, then run:

sudo service nginx restart
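Whenever you change the configuration, you can also run Nginx’s built-in syntax check before restarting; it reads the config files and reports typos without touching the running server:

sudo nginx -t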

Conclusion

At the end of the day, a properly configured server is one that is monitored and tweaked accordingly. None of the variables above are set in stone, and they will need to be adjusted to each unique case. Even further down the road, you may want to improve your machine’s performance by looking into load balancing and horizontal scaling. These are just a few of the many enhancements a good sysadmin can make to a server.

Submitted by: Alex Kavon (https://twitter.com/alexkavon)

10 Comments


Great tutorial! Useful and well-written.

One thing: it’s good practice to test your configuration changes after making them, before restarting nginx, so perhaps add one final step before “service nginx restart” recommending the user run “nginx -t” to make sure there are no typos etc. in the configuration changes.

Here is a common.conf I have that you can include in your server blocks to make adding servers easier.

listen 80;
index index.php index.html;

# protect dotfiles
location ~ /\. { deny all; error_log off; log_not_found off; }

# ignore common 404s
location = /robots.txt  { access_log off; log_not_found off; }
location = /favicon.ico { access_log off; log_not_found off; }

# case insensitive browser cache for static files
# this is the same list as cloudflare plus extras
location ~* \.(7z|ai|bmp|bz2|class|css|csv|docx?|ejs|eot|eps|flv|gif|gz|html?|ico|jar|jpe?g|js|json|lzh|m4a|m4v|midi?|mov|mp3|mp4|pdf|pict|pls|png|pptx?|ps|psd|rar|rss|rtf|svgz?|swf|tar|tiff?|ttf|txt|wav|webp|woff|xlsx?|zip)$ {
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}

# static folders cache
location ~ /(static|files|wp-content|images)/ {
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}

Also make sure that if you’re using regular expressions in server_name, you give the original host name to PHP; otherwise $_SERVER['SERVER_NAME'] will be the regular expression!

In the fastcgi_params file:

fastcgi_param   SERVER_NAME     $host;

Adding the following

location ~* .(jpg|jpeg|png|gif|ico|css|js)$ {
expires 365d;
}

gives me the following error: “location” directive is not allowed here in /etc/nginx/nginx.conf:60.

Am I missing something?

Thanks!

Kamal Nasser
DigitalOcean Employee
July 3, 2014

@gustavojimenez.folta: The paths to some configuration files might differ from Ubuntu to Debian but since Ubuntu is based on Debian you should be fine following this tutorial on an Ubuntu system.

GREAT write-up, however I would love a new article with an updated OS. Many of the config values have changed and will not work in a recent version of Nginx.

“This directive is responsible for letting our virtual server know many workers to spawn once it has become bound to the proper IP and port(s).”

should read: “…know how many workers to spawn once it has become bound to the proper IP and port(s).”

Do you make all these changes in the nginx.conf file? I can’t follow.

Why don’t you use nproc instead of grep processor /proc/cpuinfo | wc -l?

If I set in php.ini:

upload_max_filesize = 2M
max_file_uploads = 20

Then to what size do I set client_body_buffer_size and client_max_body_size? Can the buffer/body size be 2M, or must it be 20x2=40M or more? And will timeouts also be affected by this?

How do I correctly adjust the maximum number of open files allowed by the system? It’s related to worker_connections.
