How To Serve Flask Applications with Gunicorn and Nginx on CentOS 7

Introduction

In this guide, we will be setting up a simple Python application using the Flask micro-framework on CentOS 7. The bulk of this article will be about how to set up the Gunicorn application server to launch the application and Nginx to act as a front end reverse proxy.

Prerequisites

Before starting on this guide, you should have a non-root user configured on your server. This user needs to have sudo privileges so that it can perform administrative functions. To learn how to set this up, follow our initial server setup guide.

To learn more about the WSGI specification that our application server will use to communicate with our Flask app, you can read the linked section of this guide. Understanding these concepts will make this guide easier to follow.
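
As a quick illustration of that convention (this snippet is only an example, not part of the project we will build), a WSGI application is simply a callable that accepts the request environment and a start_response function and returns an iterable of byte strings:

def application(environ, start_response):
    # environ is a dictionary of CGI-style request variables supplied by the server
    start_response('200 OK', [('Content-Type', 'text/plain')])
    # the response body must be an iterable of byte strings
    return [b'Hello from a bare WSGI callable\n']

Gunicorn speaks HTTP on one side and this interface on the other, which is what lets it sit between Nginx and our Flask code.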

When you are ready to continue, read on.

Install the Components from the CentOS and EPEL Repositories

Our first step will be to install all of the pieces that we need from the repositories. We will need to add the EPEL repository, which contains some extra packages, in order to install some of the components we need.

You can enable the EPEL repo by typing:

sudo yum install epel-release

Once access to the EPEL repository is configured on our system, we can begin installing the packages we need. We will install pip, the Python package manager, in order to install and manage our Python components. We will also get a compiler and the Python development files needed by Gunicorn. We’ll install Nginx now as well.

You can install all of these components by typing:

sudo yum install python-pip python-devel gcc nginx

Create a Python Virtual Environment

Next, we’ll set up a virtual environment in order to isolate our Flask application from the other Python files on the system.

Start by installing the virtualenv package using pip:

sudo pip install virtualenv

Now, we can make a parent directory for our Flask project. Move into the directory after you create it:

mkdir ~/myproject
cd ~/myproject

We can create a virtual environment to store our Flask project’s Python requirements by typing:

virtualenv myprojectenv

This will install a local copy of Python and pip into a directory called myprojectenv within your project directory.

Before we install applications within the virtual environment, we need to activate it. You can do so by typing:

source myprojectenv/bin/activate

Your prompt will change to indicate that you are now operating within the virtual environment. It will look something like this: (myprojectenv)user@host:~/myproject$.
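
If you'd like to confirm that the environment is active (an optional check, not part of the original steps), you can ask the shell which Python interpreter it now resolves:

which python

The path it prints should end in myprojectenv/bin/python.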

Set Up a Flask Application

Now that you are in your virtual environment, we can install Flask and Gunicorn and get started on designing our application.

Install Flask and Gunicorn

We can use the local instance of pip to install Flask and Gunicorn. Type the following commands to get these two components:

pip install gunicorn flask

Create a Sample App

Now that we have Flask available, we can create a simple application. Flask is a micro-framework. It does not include many of the tools that more full-featured frameworks might, and exists mainly as a module that you can import into your projects to assist you in initializing a web application.

While your application might be more complex, we’ll create our Flask app in a single file, which we will call myproject.py:

nano ~/myproject/myproject.py

Within this file, we'll place our application code. We need to import flask and instantiate a Flask object, which we can then use to define the functions that should run when a specific route is requested. We'll name our Flask application object application in the code to mirror the examples you'd find in the WSGI specification:

from flask import Flask
application = Flask(__name__)

@application.route("/")
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"

if __name__ == "__main__":
    application.run(host='0.0.0.0')

This defines what content to present when the root route is requested. Save and close the file when you're finished.

You can test your Flask app by typing:

python myproject.py

Visit your server’s domain name or IP address followed by the port number specified in the terminal output (most likely :5000) in your web browser. You should see something like this:

Flask sample app
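
If you'd rather test from the server itself, a quick request from a second terminal session (using curl, for example) should return the same HTML. This assumes the development server is listening on its default port of 5000:

curl http://localhost:5000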

When you are finished, hit CTRL-C in your terminal window a few times to stop the Flask development server.

Create the WSGI Entry Point

Next, we’ll create a file that will serve as the entry point for our application. This will tell our Gunicorn server how to interact with the application.

We will call the file wsgi.py:

nano ~/myproject/wsgi.py

The file is very simple: we import the Flask instance from our application and then run it:

from myproject import application

if __name__ == "__main__":
    application.run()

Save and close the file when you are finished.

Testing Gunicorn’s Ability to Serve the Project

Before moving on, we should check that Gunicorn can serve the application correctly.

We can do this by simply passing it the name of our entry point. We’ll also specify the interface and port to bind to so that it will be started on a publicly available interface:

cd ~/myproject
gunicorn --bind 0.0.0.0:8000 wsgi
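
Gunicorn looks for a callable named application inside the module you pass it, which is why the command above works with just the module name. If you ever name the Flask object something else, you can point Gunicorn at it explicitly using the module:callable form:

gunicorn --bind 0.0.0.0:8000 wsgi:application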

If you visit your server’s domain name or IP address with :8000 appended to the end in your web browser, you should see a page that looks like this:

Flask sample app

When you have confirmed that it’s functioning properly, press CTRL-C in your terminal window.

We’re now done with our virtual environment, so we can deactivate it:

deactivate

Any Python operations we perform now will use the system's Python environment.

Create a Systemd Unit File

The next piece we need to take care of is the Systemd service unit file. Creating a Systemd unit file will allow CentOS’s init system to automatically start Gunicorn and serve our Flask application whenever the server boots.

Create a file ending with .service within the /etc/systemd/system directory to begin:

sudo nano /etc/systemd/system/myproject.service

Inside, we’ll start with a [Unit] section, which is used to specify metadata and dependencies. We’ll put a description of our service here and tell the init system to only start this after the networking target has been reached:

[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

Next, we’ll open up the [Service] section. We’ll specify the user and group that we want the process to run under. We will give our regular user account ownership of the process since it owns all of the relevant files. We’ll give the Nginx user group ownership so that it can communicate easily with the Gunicorn processes.

We'll then map out the working directory and set the PATH environment variable so that the init system knows where the executables for the process are located (within our virtual environment). We'll then specify the command used to start the service. Systemd requires that we give the full path to the Gunicorn executable, which is installed within our virtual environment.

We will tell it to start 3 worker processes (adjust this as necessary). We will also tell it to create and bind to a Unix socket file within our project directory called myproject.sock. We’ll set a umask value of 007 so that the socket file is created giving access to the owner and group, while restricting other access. Finally, we need to pass in the WSGI entry point file name:

[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=user
Group=nginx
WorkingDirectory=/home/user/myproject
Environment="PATH=/home/user/myproject/myprojectenv/bin"
ExecStart=/home/user/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi

The last piece we need to add to the file is an [Install] section. This will tell Systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=user
Group=nginx
WorkingDirectory=/home/user/myproject
Environment="PATH=/home/user/myproject/myprojectenv/bin"
ExecStart=/home/user/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi

[Install]
WantedBy=multi-user.target

With that, our Systemd service file is complete. Save and close it now.

We can now start the Gunicorn service we created and enable it so that it starts at boot:

sudo systemctl start myproject
sudo systemctl enable myproject
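
As an optional check (not part of the original steps), you can confirm that the service started without errors:

sudo systemctl status myproject

The output should show the service as active (running), and a myproject.sock file should now exist in your project directory.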

Configuring Nginx to Proxy Requests

Our Gunicorn application server should now be up and running, waiting for requests on the socket file in the project directory. We need to configure Nginx to pass web requests to that socket by making some small additions to its configuration file.

Begin by opening up Nginx’s default configuration file:

sudo nano /etc/nginx/nginx.conf

Open up a server block just above the other server {} block that is already in the file:

http {
    . . .

    include /etc/nginx/conf.d/*.conf;

    server {
    }

    server {
        listen 80 default_server;

        . . .

We will put all of the configuration for our Flask application inside of this new block. We will start by specifying that this block should listen on the default port 80 and that it should respond to our server’s domain name or IP address:

server {
    listen 80;
    server_name server_domain_or_IP;
}

The only other thing that we need to add is a location block that matches every request. Within this block, we’ll set some standard proxying HTTP headers so that Gunicorn can have some information about the remote client connection. We will then pass the traffic to the socket we specified in our Systemd unit file:

server {
    listen 80;
    server_name server_domain_or_IP;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/home/user/myproject/myproject.sock;
    }
}

Save and close the file when you are finished.
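
Because the http block already includes /etc/nginx/conf.d/*.conf (you can see the include line in the snippet above), an equivalent approach, if you'd rather leave nginx.conf itself untouched, is to place the same server block in its own file under that directory, for example:

sudo nano /etc/nginx/conf.d/myproject.conf

Whichever location you choose, the contents of the server block stay the same.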

The nginx user must have access to our application directory in order to access the socket file there. By default, CentOS locks down each user’s home directory very restrictively, so we will add the nginx user to our user’s group so that we can then open up the minimum permissions necessary to grant access.

You can add the nginx user to your user group with the following command. Substitute your own username for the user in the command:

sudo usermod -a -G user nginx

Now, we can give our user group execute permissions on our home directory. This will allow the Nginx process to enter and access content within:

chmod 710 /home/user
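
If you'd like to verify the result (an optional check, assuming your username and project path match the examples of user and /home/user/myproject with default directory permissions), you can confirm that the nginx user is now able to traverse your home directory and reach the project:

ls -ld /home/user
sudo -u nginx ls /home/user/myproject

The first command should show drwx--x--- permissions on your home directory, and the second should list the project files, including the myproject.sock socket if the Gunicorn service is running.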

With the permissions set up, we can test our Nginx configuration file for syntax errors:

sudo nginx -t

If this returns without indicating any issues, we can start and enable the Nginx process so that it starts automatically at boot:

sudo systemctl start nginx
sudo systemctl enable nginx

You should now be able to go to your server’s domain name or IP address in your web browser and see your application:

Flask sample app

Conclusion

In this guide, we've created a simple Flask application within a Python virtual environment. We created a WSGI entry point so that any WSGI-capable application server can interface with it, and then configured the Gunicorn app server to provide this function. Afterwards, we created a Systemd unit file to launch the application server automatically at boot. Finally, we created an Nginx server block that relays external web client traffic to the application server.

Flask is a very simple but extremely flexible framework meant to provide your applications with functionality without being too restrictive about structure and design. You can use the general stack described in this guide to serve the Flask applications that you design.

10 Comments


Thanks everyone for all the info. I found this helped for the SELinux/nginx issues:

sudo yum install policycoreutils-python 
sudo semanage permissive -a httpd_t 

After following this tutorial exactly as above, I found two additional steps required to overcome a ‘502 Bad Gateway Error’.

  1. In the file opened with sudo nano /etc/systemd/system/myproject.service, PYTHONPATH must replace PATH, and your user id should be set in the User field.

  2. Following another comment here, one must add SELinux policy with sudo:

sudo audit2allow -a -M nginx
sudo semodule -i nginx.pp

Following all of the steps in the tutorial and these two additional steps gave me a working Hello There application.

Are all the commands for generating the virtualenv supposed to be run with sudo? virtualenv myprojectenv does not work unless run with sudo, and source myprojectenv/bin/activate also won't work with it. I tried running through this guide as root when that happened, but I'm getting the distinct impression (based on my 502 error) that I'm not supposed to do that.

Hi, I am using CentOS 8 and my systemd unit file is unable to run, giving me a bad gateway status.

When running systemctl status el.service I get the following:

el.service - Gunicorn instance for El loan decision algorithm.
   Loaded: loaded (/etc/systemd/system/el.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2020-09-25 14:09:59 UTC; 21s ago
 Main PID: 109173 (code=exited, status=203/EXEC)

Sep 25 14:09:59 ABI-Data systemd[1]: Started Gunicorn instance for El loan decision algorithm…
Sep 25 14:09:59 ABI-Data systemd[1]: el.service: Main process exited, code=exited, status=203/EXEC
Sep 25 14:09:59 ABI-Data systemd[1]: el.service: Failed with result 'exit-code'.

not working…

502 bad gateway…

In the nginx error log it says: *16 connect() to unix:/home/lukas/myproject/myproject.sock failed (2: No such file or directory)

These instructions are wonderful. Thank you very much. But… before

sudo systemctl start nginx

I had to shut down my regular httpd service with

systemctl stop httpd
systemctl disable httpd

Followed this and in the end got an nginx permission denied error. Trace in /var/log/nginx/error.log:

" *63 connect() to unix:/run/appserver/appserver.sock failed (13: Permission denied) while connecting to upstream,"

This is because nginx cannot connect to the socket due to a SELinux policy. To fix you need to add SELinux policy:

audit2allow -a -M nginx
semodule -i nginx.pp

Where is the appropriate place to set persistent environment variables? I'd like to store some API keys outside of my Python files. When I run python myproject.py, os.environ['keyname'] returns the keyname correctly.

When I try running it as part of the completed guide the python file throws a KeyError because it can’t return the env variable. Where does this env variable need to be set so that it can be accessed?

I’ve tried having it in my .bash_profile, wsgi.py, virtualenv/bin/activate and virtualenv/bin/gunicorn, but none of those have worked :/

Hi, how can I see the real-time output of the requests my application receives? When I used Upstart, I could view the output with the following command:

sudo tail -f /var/log/upstart/myapp.log

In CentOS, I can't monitor the log; I only get access to the status of Gunicorn with this command: sudo journalctl --unit=myapp

But I need to see the requests and output from Flask.

I try this in the Unit File:

ExecStart=/home/user/myapp/myappenv/bin/gunicorn --workers 8 --bind unix:myapp.sock -m 007 wsgi --debug --log-file /tmp/myapp.log --log-level debug --error-logfile /tmp/myapp_error.log

But I do not get results

I followed this step by step guide and almost struck gold. I am getting 502 Bad Gateway error.

I was able to successfully get gunicorn to serve my app in the browser. After that step, I had a couple questions on the “Create a Systemd Unit File”, which I believe may be the source of my problem.

Under this section:

[Service]
User=user
Group=nginx

What user must be entered? Does User="current user we are logged in as"?

I did this under root, I know bad practice, but I entered a different username with sudo privileges.

I also didn’t store my project in the home directory, I created a www dir under /var. Do permissions need to be something else other than 710?

Thanks.
