Configuring Logstash on Droplets to Forward Nginx Logs to Managed OpenSearch

Introduction

Keeping track of web server logs is essential for running your website smoothly, solving problems, and understanding user behavior. If you’re using Nginx, it produces access and error logs full of valuable information. To manage and analyze these logs, you can use Logstash to process and forward them and DigitalOcean’s Managed OpenSearch to index and visualize the data.

In this tutorial, we will walk you through installing Logstash on a Droplet, setting it up to collect your Nginx logs, and sending them to DigitalOcean Managed OpenSearch.

Prerequisites

To follow this tutorial, you will need:

  • A DigitalOcean Droplet running a Debian-based (Ubuntu) or Red Hat-based (CentOS/RHEL) distribution, with a sudo-enabled user.
  • Nginx installed and writing logs to /var/log/nginx/.
  • A DigitalOcean Managed OpenSearch cluster, along with its hostname and doadmin credentials.

Use Case

You might need this setup if you want to:

  • Monitor and Troubleshoot: Track web server performance and errors by analyzing real-time logs.
  • Analyze Performance: Gain insights into web traffic patterns and server metrics.
  • Centralize Logging: Aggregate logs from multiple Nginx servers into a single OpenSearch instance for easier management.

Note: The setup time should be around 30 minutes.

Step 1 - Installing Logstash on Droplets

Logstash can be installed using binary files from the Elastic downloads page or package repositories tailored to your operating system. For easier management and updates, using package repositories is generally recommended. You can use the APT package manager on Debian-based systems such as Ubuntu, while on Red Hat-based systems such as CentOS or RHEL, you can use yum. Both methods ensure Logstash is properly integrated into your system’s package management infrastructure, simplifying installation and maintenance.

In this section, we will walk you through the installation of Logstash using both apt and yum package managers, ensuring that you can configure Logstash on your Droplet regardless of your Linux distribution.

To identify your operating system, run the following command:

cat /etc/os-release
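The ID and ID_LIKE fields in that file tell you which package-manager family applies. As a convenience, the check can be sketched in shell (assuming /etc/os-release exists, as it does on modern Ubuntu, Debian, CentOS, and RHEL systems):

```shell
# Print which package-manager family this host belongs to,
# based on the ID and ID_LIKE fields of /etc/os-release.
. /etc/os-release
case "${ID_LIKE:-} ${ID:-}" in
  *debian*|*ubuntu*)        echo "apt" ;;
  *rhel*|*centos*|*fedora*) echo "yum" ;;
  *)                        echo "unknown: check /etc/os-release manually" ;;
esac
```

If the output is apt, follow the APT-based instructions below; if it is yum, skip ahead to the YUM-based section.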

For APT-Based Systems (Ubuntu/Debian)

1. Download and install the Public Signing Key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg

2. Install apt-transport-https if it is not already installed:

sudo apt-get install apt-transport-https

3. Add and save the Logstash repository definition to your apt sources list:

echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list

Note: Do not use the add-apt-repository command, as it may add a deb-src entry, which is not supported. If a deb-src entry was added, you will see an error like the following:

Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file)

Delete the deb-src entry from the /etc/apt/sources.list file and the installation will work as expected.

4. Update the package index to include the new repository:

sudo apt-get update

5. Install Logstash using the apt package manager:

sudo apt-get install logstash

6. Start Logstash and enable it to start automatically on boot:

sudo systemctl start logstash
sudo systemctl enable logstash

Logstash is now installed and running on your system.
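To confirm the service actually came up, you can query systemd; is-active and is-enabled are standard systemctl subcommands. The fallback messages below are only so the check degrades gracefully on hosts without systemd:

```shell
# Confirm Logstash is running and enabled at boot.
# The || branches print a message instead of failing if systemctl
# is unavailable (e.g. inside a container) or the service is down.
systemctl is-active logstash 2>/dev/null || echo "logstash service is not active"
systemctl is-enabled logstash 2>/dev/null || echo "logstash service is not enabled"
```

On a healthy install, the two commands print active and enabled respectively.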

For YUM-Based Systems (CentOS/RHEL)

1. Download and install the Public Signing Key for the Logstash repository:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

2. Create a repository file for Logstash in /etc/yum.repos.d/. For example, create a file named logstash.repo by copying and pasting the following:

sudo tee /etc/yum.repos.d/logstash.repo > /dev/null <<EOF
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

The repository is ready to use.

3. Install Logstash using the YUM package manager:

sudo yum install logstash

4. Start Logstash and enable it to start automatically on boot:

sudo systemctl start logstash
sudo systemctl enable logstash

Logstash is now installed and running on your system.

Step 2 - Installing the OpenSearch Output Plugin

You can install the OpenSearch output plugin by running the following command:

/usr/share/logstash/bin/logstash-plugin install logstash-output-opensearch
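To verify the plugin was installed, you can list the installed plugins and filter for it. This assumes the default package installation path from Step 1; the guard is only so the command degrades gracefully on a machine without Logstash:

```shell
# List installed Logstash plugins and check for the OpenSearch output plugin.
if [ -x /usr/share/logstash/bin/logstash-plugin ]; then
  /usr/share/logstash/bin/logstash-plugin list | grep opensearch
else
  echo "logstash-plugin not found at /usr/share/logstash/bin"
fi
```

A successful install prints logstash-output-opensearch in the filtered list.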

You can find more information in the logstash-output-opensearch plugin repository.

Step 3 - Configuring Logstash to Send Nginx Logs to OpenSearch

A Logstash pipeline consists of three main stages: input, filter, and output. Logstash pipelines use plugins at each stage; you can use community plugins or create your own.

  • Input: This stage collects data from various sources. Logstash supports numerous input plugins to handle data sources like log files, databases, message queues, and cloud services.
  • Filter: This stage processes and transforms the data collected in the input stage. Filters can modify, enrich, and structure the data to make it more useful and easier to analyze.
  • Output: This stage sends the processed data to a destination. Destinations can include databases, files, and data stores like OpenSearch.

Now let’s create a pipeline.

1. Create a Logstash configuration file at /etc/logstash/conf.d/nginx-to-opensearch.conf with the following contents:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    tags => ["nginx_access"]
  }
  file {
    path => "/var/log/nginx/error.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    tags => ["nginx_error"]
  }
}
filter {
  if "nginx_access" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:client_ip} - %{USER:ident} \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}\" %{NUMBER:response} %{NUMBER:bytes} \"%{DATA:referrer}\" \"%{DATA:user_agent}\"" }
    }
    mutate {
      remove_field => ["message", "[log][file][path]", "[event][original]"]
    }
  } else if "nginx_error" in [tags] {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] \[%{DATA:pid}\] \[%{DATA:tid}\] %{GREEDYDATA:error_message}" }
    }
    mutate {
      remove_field => ["message", "[log][file][path]", "[event][original]"]
    }
  }
}
output {
  if "nginx_access" in [tags] {
    opensearch {
      hosts => ["https://<OpenSearch-Hostname>:25060"]
      user => "doadmin"
      password => "<your_password>"
      index => "nginx_access-%{+YYYY.MM.dd}"
      ssl => true
      ssl_certificate_verification => true
    }
  } else if "nginx_error" in [tags] {
    opensearch {
      hosts => ["https://<OpenSearch-Hostname>:25060"]
      user => "doadmin"
      password => "<your_password>"
      index => "nginx_error-%{+YYYY.MM.dd}"
      ssl => true
      ssl_certificate_verification => true
    }
  }
}

Replace:

  • <OpenSearch-Hostname> with your OpenSearch server’s hostname.
  • <your_password> with your OpenSearch password.
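Before restarting, it is worth validating the pipeline syntax. Logstash’s --config.test_and_exit flag parses the configuration and reports errors without starting the pipeline; the paths below assume the package installation from Step 1:

```shell
# Parse the pipeline configuration and exit without starting it;
# prints "Configuration OK" when the file is valid.
if [ -x /usr/share/logstash/bin/logstash ]; then
  sudo /usr/share/logstash/bin/logstash \
    --path.settings /etc/logstash \
    --config.test_and_exit \
    -f /etc/logstash/conf.d/nginx-to-opensearch.conf
else
  echo "logstash not found at /usr/share/logstash/bin"
fi
```

Fix any reported errors before moving on to the restart step.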

2. Apply the new configuration by restarting Logstash:

sudo systemctl restart logstash

3. Check the Logstash logs to ensure it is processing and forwarding data correctly:

sudo tail -f /var/log/logstash/logstash-plain.log

Breakdown of the nginx-to-opensearch.conf configuration

INPUT

The input block configures two file inputs to read logs:

  • Paths: /var/log/nginx/access.log (access logs) and /var/log/nginx/error.log (error logs).
  • Start Position: beginning – reads each file from the start.
  • Sincedb Path: /dev/null – disables read-position tracking, so the files are re-read from the beginning on every restart.
  • Tags: ["nginx_access"] for access logs and ["nginx_error"] for error logs.

Note: Ensure the Logstash service has access to the input paths.
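One way to check is to test readability as the logstash user. This assumes the default log paths from the configuration above; on Debian/Ubuntu, where the files are typically owned root:adm, adding the logstash user to the adm group is one common fix:

```shell
# Report whether the logstash user can read each Nginx log file.
for f in /var/log/nginx/access.log /var/log/nginx/error.log; do
  if sudo -u logstash test -r "$f" 2>/dev/null; then
    echo "readable: $f"
  else
    echo "NOT readable (or missing): $f"
  fi
done
# If not readable on Debian/Ubuntu, one option is:
#   sudo usermod -aG adm logstash && sudo systemctl restart logstash
```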

FILTER

The filter block processes logs based on their tags:

  • Access Logs: checks for the nginx_access tag and applies a grok filter to parse the access log format, extracting fields such as client_ip, timestamp, method, request, http_version, response, bytes, referrer, and user_agent. The original message and certain metadata fields are then removed.
  • Error Logs: checks for the nginx_error tag and applies a grok filter to extract fields such as timestamp, level, pid, tid, and error_message. It also removes the message and metadata fields.

OUTPUT

The output block routes events to OpenSearch based on their tags:

For both access and error logs, it specifies:

  • Hosts: URL of the OpenSearch instance.
  • User: doadmin for authentication.
  • Password: your OpenSearch password.
  • Index: nginx_access-%{+YYYY.MM.dd} for access logs and nginx_error-%{+YYYY.MM.dd} for error logs, creating one index per day.
  • SSL Settings: enables SSL and certificate verification.

Step 4 - Configure OpenSearch

1. Open your web browser and go to the OpenSearch Dashboard URL:

https://<OpenSearch-Hostname>

Replace <OpenSearch-Hostname> with your OpenSearch server’s hostname.

2. Create an index pattern.

   a. On the left sidebar, navigate to Management > Dashboard Management > Index Patterns.
   b. Click Create index pattern at the top right.
   c. Enter nginx_access-* or nginx_error-* as the index pattern to match the indices created by Logstash, then click Next step.
   d. Click Create index pattern.

3. Ensure the index pattern is successfully created and visible in the Index Patterns list.

4. On the left sidebar, go to Discover and select the index pattern you created (nginx_access-* or nginx_error-*). Verify that log entries are visible and correctly indexed.

5. Create visualizations and dashboards. Visit How to Create a Dashboard in OpenSearch for more details.

Troubleshooting

Check Connectivity

You can verify that Logstash can connect to OpenSearch by testing connectivity:

curl -u doadmin:<your_password> -X GET "https://<OpenSearch-Hostname>:25060/_cat/indices?v"

Replace:

  • <OpenSearch-Hostname> with your OpenSearch server’s hostname.
  • <your_password> with your OpenSearch password.

Data Ingestion

You can ensure that the data is properly indexed in OpenSearch using the following curl command:

curl -u doadmin:<your_password> -X GET "https://<OpenSearch-Hostname>:25060/nginx_access-*/_search?pretty"

Use nginx_error-* in place of nginx_access-* to query the error log indices.

Replace:

  • <OpenSearch-Hostname> with your OpenSearch server’s hostname.
  • <your_password> with your OpenSearch password.

Firewall and Network Configuration

Ensure firewall rules and network settings allow traffic between Logstash and OpenSearch on port 25060.
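A quick way to test reachability from the Droplet is a TCP probe with netcat. Replace the <OpenSearch-Hostname> placeholder with your cluster’s hostname before running:

```shell
# Probe TCP port 25060 on the OpenSearch host: -z scans without sending
# data, -v prints the result, and -w 5 gives up after five seconds.
HOST="<OpenSearch-Hostname>"   # placeholder: substitute your cluster hostname
nc -zv -w 5 "$HOST" 25060 || echo "port 25060 not reachable from this host"
```

If the probe fails but the curl checks above work from another machine, review your Droplet’s outbound firewall rules and the cluster’s trusted sources.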

Conclusion

In this guide, you learned to set up Logstash to collect and forward Nginx logs to OpenSearch.

You reviewed how to use the apt or yum package manager, depending on your Linux distribution, to get Logstash up and running on your Droplet. You also created and adjusted the Logstash configuration file to make sure Nginx logs are correctly parsed and sent to OpenSearch. Then you set up an index pattern in OpenSearch Dashboards to check that the logs are being indexed properly and are visible for analysis.

With these steps completed, you should now have a working setup where Logstash collects Nginx logs and sends them to OpenSearch, letting you use OpenSearch’s powerful search and visualization tools to analyze your server logs.

If you run into any issues, check out the troubleshooting tips we’ve provided and refer to the Logstash and OpenSearch documentation for more help. Regular monitoring will keep your logging system running smoothly and effectively.
