
How To Use Logstash and Kibana To Centralize Logs On CentOS 7

Published on July 15, 2014

Introduction

In this tutorial, we will go over the installation of Logstash 1.4.2 and Kibana 3 on CentOS 7, and how to configure them to gather and visualize the syslogs of our systems in a centralized location. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana 3 is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools rely on Elasticsearch. Elasticsearch, Logstash, and Kibana, when used together, are known as an ELK stack.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.

Note: An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on CentOS 7.

Our Goal

The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.

Our Logstash / Kibana setup has four main components:

  • Logstash: The server component of Logstash that processes incoming logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs
  • Logstash Forwarder: Installed on servers that will send their logs to Logstash, Logstash Forwarder serves as a log forwarding agent that utilizes the lumberjack networking protocol to communicate with Logstash

We will install the first three components on a single server, which we will refer to as our Logstash Server. The Logstash Forwarder will be installed on all of the servers that we want to gather logs for, which we will refer to collectively as our Servers.

Prerequisites

To complete this tutorial, you will require root access to a CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with CentOS 6.

The amount of CPU, RAM, and storage that your Logstash Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our Logstash Server:

  • OS: CentOS 7
  • RAM: 2GB
  • CPU: 2

In addition to your Logstash Server, you will want to have a few other servers that you will gather logs from.

Let’s get started on setting up our Logstash Server!

Install Java 7

Elasticsearch and Logstash require Java 7, so we will install OpenJDK 7 now.

Install the latest stable version of OpenJDK 7 with this command:

sudo yum -y install java-1.7.0-openjdk

Now that Java 7 is installed, let’s install Elasticsearch.

Install Elasticsearch

Note: Logstash 1.4.2 recommends Elasticsearch 1.1.1.

Run the following command to import the Elasticsearch public GPG key into rpm:

sudo rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

Create and edit a new yum repository file for Elasticsearch:

sudo vi /etc/yum.repos.d/elasticsearch.repo

Add the following repository configuration:

[elasticsearch-1.1]
name=Elasticsearch repository for 1.1.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.1/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Save and exit.

Install Elasticsearch 1.1.1 with this command:

sudo yum -y install elasticsearch-1.1.1

Elasticsearch is now installed. Let’s edit the configuration:

sudo vi /etc/elasticsearch/elasticsearch.yml

Add the following line somewhere in the file, to disable dynamic scripts:

script.disable_dynamic: true

You will also want to restrict outside access to your Elasticsearch instance, so outsiders can’t read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host and uncomment it so it looks like this:

network.host: localhost

Then disable multicast by finding the discovery.zen.ping.multicast.enabled item and uncommenting it so it looks like this:

discovery.zen.ping.multicast.enabled: false

Save and exit elasticsearch.yml.

Now start Elasticsearch:

sudo systemctl start elasticsearch.service

Then run the following command to start Elasticsearch on boot up:

sudo systemctl enable elasticsearch.service

Now that Elasticsearch is up and running, let’s install Kibana.

Install Kibana

Note: Logstash 1.4.2 recommends Kibana 3.0.1.

Download Kibana to your home directory with the following command:

cd ~; curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz

Extract the Kibana archive with tar:

tar xvf kibana-3.0.1.tar.gz

Open the Kibana configuration file for editing:

vi ~/kibana-3.0.1/config.js

In the Kibana configuration file, find the line that specifies the elasticsearch server URL, and replace the port number (9200 by default) with 80:

   elasticsearch: "http://"+window.location.hostname+":80",

This is necessary because we are planning on accessing Kibana on port 80 (i.e. http://logstash_server_public_ip/).

We will be using Apache to serve our Kibana installation, so let’s move the files into an appropriate location. Create a directory with the following command:

sudo mkdir -p /var/www/kibana3

Now copy the Kibana files into your newly-created directory:

sudo cp -R ~/kibana-3.0.1/* /var/www/kibana3/

Before we can use the Kibana web interface, we have to install Apache. Let’s do that now.

Install Apache HTTP

Use Yum to install Apache HTTP:

sudo yum -y install httpd

Because of the way that Kibana connects the user’s browser to Elasticsearch (the browser needs to be able to reach Elasticsearch directly), we need to configure Apache to proxy port 80 requests to port 9200 (the port that Elasticsearch listens on by default). We will provide a sample VirtualHost file to start with.
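The core of that proxying is Apache’s mod_proxy forwarding the Elasticsearch API paths that Kibana calls to the local Elasticsearch instance. The following is an orientation sketch only, not the contents of the sample file; the downloaded kibana3.conf is the authoritative version, and the exact paths and directives there may differ:

```
<VirtualHost FQDN:80>
  ServerName FQDN
  DocumentRoot /var/www/kibana3

  # Forward the Elasticsearch HTTP API endpoints that the Kibana frontend
  # queries from the browser to Elasticsearch on localhost:9200
  ProxyPass /_nodes http://127.0.0.1:9200/_nodes
  ProxyPassMatch ^/(_search|_aliases)$ http://127.0.0.1:9200/$1
</VirtualHost>
```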

Download the sample VirtualHost configuration:

cd ~; wget https://assets.digitalocean.com/articles/logstash/kibana3.conf

Open the sample configuration file for editing:

vi kibana3.conf

Find and change the values of VirtualHost and ServerName to your FQDN (or localhost if you aren’t using a domain name), and change the document root to where we installed Kibana, so they look like the following entries:

<VirtualHost FQDN:80>
  ServerName FQDN

Save and exit. Now copy it into your Apache configuration directory:

sudo cp ~/kibana3.conf /etc/httpd/conf.d/

Then generate a login that will be used to access Kibana (substitute your own username):

sudo htpasswd -c /etc/httpd/conf.d/kibana-htpasswd user

Then enter a password and verify it. The htpasswd file just created is referenced in the Apache configuration that you recently configured.
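That reference typically takes the form of Apache’s basic authentication directives, roughly like the following (a sketch using the path from the htpasswd command above; check the actual kibana3.conf for the exact block):

```
<Directory /var/www/kibana3>
  AuthType Basic
  AuthName "Restricted Access"
  AuthUserFile /etc/httpd/conf.d/kibana-htpasswd
  Require valid-user
</Directory>
```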

Now start Apache to put our changes into effect:

sudo systemctl start httpd.service

Also, configure Apache to start on boot:

sudo systemctl enable httpd.service

Kibana is now accessible via your FQDN or the public IP address of your Logstash Server, i.e. http://logstash_server_public_ip/. If you go there in a web browser, you should see a Kibana welcome page that will allow you to view dashboards, but there will be no logs to view because Logstash has not been set up yet. Let’s do that now.

Install Logstash

The Logstash package shares the same GPG Key as Elasticsearch, and we already installed that public key, so let’s create and edit a new Yum repository file for Logstash:

sudo vi /etc/yum.repos.d/logstash.repo

Add the following repository configuration:

[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Save and exit.

Install Logstash 1.4.2 with this command:

sudo yum -y install logstash-1.4.2

Logstash is installed, but it is not configured yet.

Generate SSL Certificates

Since we are going to use Logstash Forwarder to ship logs from our Servers to our Logstash Server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash Forwarder to verify the identity of Logstash Server.

Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the Logstash Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.

Option 1: IP Address

If you don’t have a DNS setup that would allow the servers you will gather logs from to resolve the IP address of your Logstash Server, you will have to add your Logstash Server’s private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:

sudo vi /etc/pki/tls/openssl.cnf

Find the [ v3_ca ] section in the file, and add this line under it (substituting in the Logstash Server’s private IP address):

subjectAltName = IP: logstash_server_private_ip

Save and exit.

Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

cd /etc/pki/tls
sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let’s complete our Logstash configuration. If you went with this option, skip Option 2 and move on to Configure Logstash.

Option 2: FQDN (DNS)

If you have a DNS setup with your private networking, you should create an A record that contains the Logstash Server’s private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server’s public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your Logstash Server.

Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/…), with the following command (substitute in the FQDN of the Logstash Server):

cd /etc/pki/tls
sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let’s complete our Logstash configuration.

Configure Logstash

Logstash configuration files use a JSON-like format, and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

Let’s create a configuration file called 01-lumberjack-input.conf and set up our “lumberjack” input (the protocol that Logstash Forwarder uses):

sudo vi /etc/logstash/conf.d/01-lumberjack-input.conf

Insert the following input configuration:

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Save and quit. This specifies a lumberjack input that will listen on TCP port 5000, and it will use the SSL certificate and private key that we created earlier.

Now let’s create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

sudo vi /etc/logstash/conf.d/10-syslog.conf

Insert the following syslog filter configuration:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Save and quit. This filter looks for logs that are labeled as “syslog” type (by a Logstash Forwarder), and it will try to use grok to parse incoming syslog logs to make them structured and queryable.
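To see what grok pulls out of a typical line, here is a rough Python translation of the pattern above. This is an illustration only: grok’s bundled SYSLOGTIMESTAMP, SYSLOGHOST, and similar patterns are more permissive than these simplified regexes, and the sample log line is made up.

```python
import re

# Simplified stand-ins for the grok patterns used in 10-syslog.conf:
# SYSLOGTIMESTAMP, SYSLOGHOST, DATA, POSINT, GREEDYDATA (illustrative only).
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Feb  3 12:04:01 webserver sshd[1234]: Failed password for root from 203.0.113.5"
match = SYSLOG_RE.match(line)
print(match.groupdict())
```

Each named capture becomes a field on the event, which is what makes queries like syslog_program:sshd possible in Kibana later on.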

Lastly, we will create a configuration file called 30-lumberjack-output.conf:

sudo vi /etc/logstash/conf.d/30-lumberjack-output.conf

Insert the following output configuration:

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

Save and exit. This output configures Logstash to store the logs in Elasticsearch.

With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).

If you want to add filters for other applications that use the Logstash Forwarder input, be sure to name the files so they sort between the input and the output configuration (i.e. between 01 and 30).
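Logstash loads the files in /etc/logstash/conf.d in lexical order, which is what the numeric prefixes control. A quick illustration (11-nginx.conf is a hypothetical filter file, not one created in this tutorial):

```python
# Filenames in /etc/logstash/conf.d are read in lexical (sorted) order, so a
# hypothetical 11-nginx.conf filter lands between the 01 input and 30 output.
files = ["30-lumberjack-output.conf", "01-lumberjack-input.conf", "11-nginx.conf"]
print(sorted(files))
# ['01-lumberjack-input.conf', '11-nginx.conf', '30-lumberjack-output.conf']
```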

Restart Logstash to put our configuration changes into effect:

sudo service logstash restart

Now that our Logstash Server is ready, let’s move onto setting up Logstash Forwarder.

Set Up Logstash Forwarder

Note: Do these steps for each server that you want to send logs to your Logstash Server. For instructions on installing Logstash Forwarder on Debian-based Linux distributions (e.g. Ubuntu, Debian, etc.), refer to the Build and Package Logstash Forwarder section of the Ubuntu variation of this tutorial.

Copy SSL Certificate and Logstash Forwarder Package

On the Logstash Server, copy the SSL certificate to your Server (substitute with your own login):

scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp

Install Logstash Forwarder Package

On Server, download the Logstash Forwarder RPM to your home directory:

cd ~; curl -O http://download.elasticsearch.org/logstash-forwarder/packages/logstash-forwarder-0.3.1-1.x86_64.rpm

Then install the Logstash Forwarder Package:

sudo rpm -ivh ~/logstash-forwarder-0.3.1-1.x86_64.rpm

Next, you will want to install the Logstash Forwarder init script, so it starts on boot. We will use the init script provided by logstashbook.com:

cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
sudo chmod +x logstash-forwarder

The init script depends on a file called /etc/sysconfig/logstash-forwarder. A sample file is available to download:

sudo curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

Open it for editing:

sudo vi /etc/sysconfig/logstash-forwarder

And modify the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:

LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"

Save and quit.

Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configure Logstash Forwarder

On your Server, create and edit the Logstash Forwarder configuration file, which is in JSON format:

sudo vi /etc/logstash-forwarder

Now add the following lines into the file, substituting in your Logstash Server’s private IP address for logstash_server_private_IP:

{
  "network": {
    "servers": [ "logstash_server_private_IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}

Save and quit. This configures Logstash Forwarder to connect to your Logstash Server on port 5000 (the port that we specified an input for earlier), and uses the SSL certificate that we created earlier. The paths section specifies which log files to send (here we specify messages and secure), and the type field specifies that these logs are of type "syslog" (which is the type that our filter is looking for).
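Because this file must be valid JSON, a stray trailing comma or an unquoted value will keep the forwarder from starting. One quick sanity check is sketched here in Python, with the configuration inlined for illustration (in practice you would load /etc/logstash-forwarder itself):

```python
import json

# json.loads raises an error on malformed JSON, so a clean parse is a
# reasonable smoke test before starting the forwarder. The server entry
# below is the placeholder used throughout this guide.
config = json.loads("""
{
  "network": {
    "servers": [ "logstash_server_private_IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}
""")
print(config["files"][0]["fields"]["type"])
# syslog
```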

Note that this is where you would add more files/types to configure Logstash Forwarder to send other log files to Logstash on port 5000.

Now we will want to add the Logstash Forwarder service with chkconfig:

sudo chkconfig --add logstash-forwarder

Now start Logstash Forwarder to put our changes into place:

sudo service logstash-forwarder start

Now Logstash Forwarder is sending the messages and secure logs to your Logstash Server! Repeat this process for all of the other servers that you wish to gather logs for.

Connect to Kibana

Once you have finished setting up Logstash Forwarder on all of the servers that you want to gather logs from, let’s look at Kibana, the web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your Logstash Server. You will need to enter the login you created (during Apache setup), then you should see a Kibana welcome page.

Click on Logstash Dashboard to go to the premade dashboard. You should see a histogram with log events, with log messages below (if you don’t see any events or messages, one of your four Logstash components is not configured properly).

Here, you can search and browse through your logs. You can also customize your dashboard. This is a sample of what your Kibana instance might look like:

[Image: Kibana 3 example dashboard]

Try the following things:

  • Search for “root” to see if anyone is trying to log into your servers as root
  • Search for a particular hostname
  • Change the time frame by selecting an area on the histogram or from the menu above
  • Click on messages below the histogram to see how the data is being filtered

Kibana has many other features, such as graphing and filtering, so feel free to poke around!

Conclusion

Now that your syslogs are centralized via Logstash, and you are able to visualize them with Kibana, you should be off to a good start with centralizing all of your important logs. Remember that you can send pretty much any type of log to Logstash, but the data becomes even more useful if it is parsed and structured with grok.

Note that your Kibana dashboard is accessible to anyone who can access your server, so you will want to secure it with something like htaccess.

