How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on Ubuntu 14.04


Introduction

In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on Ubuntu 14.04—that is, Elasticsearch 2.2.x, Logstash 2.2.x, and Kibana 4.5.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.1.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.

Our Goal

The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.

Our ELK stack setup has four main components:

  • Logstash: The server component of Logstash that processes incoming logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
  • Filebeat: Installed on the client servers that will send their logs to Logstash. Filebeat serves as a log-shipping agent that uses the lumberjack networking protocol to communicate with Logstash

ELK Infrastructure

We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.

Prerequisites

To complete this tutorial, you will require root access to an Ubuntu 14.04 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with Ubuntu 14.04.

If you would prefer to use CentOS instead, check out this tutorial: How To Install ELK on CentOS 7.

The amount of CPU, RAM, and storage that your ELK Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our ELK Server:

  • OS: Ubuntu 14.04
  • RAM: 4GB
  • CPU: 2

In addition to your ELK Server, you will want to have a few other servers that you will gather logs from.

Let’s get started on setting up our ELK Server!

Install Java 8

Elasticsearch and Logstash require Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route.

Add the Oracle Java PPA to apt:

  1. sudo add-apt-repository -y ppa:webupd8team/java

Update your apt package database:

  1. sudo apt-get update

Install the latest stable version of Oracle Java 8 with this command (and accept the license agreement that pops up):

  1. sudo apt-get -y install oracle-java8-installer
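To confirm that Java installed correctly, you can optionally check the version (the exact version string will vary):

  1. java -version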

Now that Java 8 is installed, let’s install Elasticsearch.

Install Elasticsearch

Elasticsearch can be installed with a package manager by adding Elastic’s package source list.

Run the following command to import the Elasticsearch public GPG key into apt:

  1. wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

If your prompt is just hanging there, it is probably waiting for your user’s password (to authorize the sudo command). If this is the case, enter your password.

Create the Elasticsearch source list:

  1. echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list

Update your apt package database:

  1. sudo apt-get update

Install Elasticsearch with this command:

  1. sudo apt-get -y install elasticsearch

Elasticsearch is now installed. Let’s edit the configuration:

  1. sudo vi /etc/elasticsearch/elasticsearch.yml

You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can’t read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with “localhost” so it looks like this:

elasticsearch.yml excerpt (updated)
network.host: localhost

Save and exit elasticsearch.yml.

Now start Elasticsearch:

  1. sudo service elasticsearch restart

Then run the following command to start Elasticsearch on boot up:

  1. sudo update-rc.d elasticsearch defaults 95 10
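If you want to verify that Elasticsearch is running and reachable on the localhost, you can optionally query its HTTP API after giving it a few seconds to start (the version and name fields in the response will differ on your system):

  1. curl -XGET 'http://localhost:9200'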

Now that Elasticsearch is up and running, let’s install Kibana.

Install Kibana

Kibana can be installed with a package manager by adding Elastic’s package source list.

Create the Kibana source list:

  1. echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.5.x.list

Update your apt package database:

  1. sudo apt-get update

Install Kibana with this command:

  1. sudo apt-get -y install kibana

Kibana is now installed.

Open the Kibana configuration file for editing:

  1. sudo vi /opt/kibana/config/kibana.yml

In the Kibana configuration file, find the line that specifies server.host, and replace the IP address (“0.0.0.0” by default) with “localhost”:

kibana.yml excerpt (updated)
server.host: "localhost"

Save and exit. This setting makes it so Kibana will only be accessible from the localhost. This is fine because we will use an Nginx reverse proxy to allow external access.

Now enable the Kibana service, and start it:

  1. sudo update-rc.d kibana defaults 96 9
  2. sudo service kibana start
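Kibana should now be listening on port 5601 of the localhost. If you want to verify this, you can optionally check with netstat (the process may show up as node):

  1. sudo netstat -plnt | grep 5601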

Before we can use the Kibana web interface, we have to set up a reverse proxy. Let’s do that now, with Nginx.

Install Nginx

Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.

Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server’s private IP address or hostname). Also, it is recommended that you enable SSL/TLS.

Use apt to install Nginx and Apache2-utils:

  1. sudo apt-get install nginx apache2-utils

Use htpasswd to create an admin user, called “kibanaadmin” (you should use another name), that can access the Kibana web interface:

  1. sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.

Now open the Nginx default server block in your favorite editor. We will use vi:

  1. sudo vi /etc/nginx/sites-available/default

Delete the file’s contents, and paste the following code block into the file. Be sure to update the server_name to match your server’s name:

/etc/nginx/sites-available/default
server {
    listen 80;
    server_name example.com;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Save and exit. This configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. Also, Nginx will use the htpasswd.users file that we created earlier, and will require basic authentication.
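Before restarting, you can optionally check the configuration for syntax errors:

  1. sudo nginx -t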

Now restart Nginx to put our changes into effect:

  1. sudo service nginx restart

Kibana is now accessible via your FQDN or the public IP address of your ELK Server i.e. http://elk-server-public-ip/. If you go there in a web browser, after entering the “kibanaadmin” credentials, you should see a Kibana welcome page which will ask you to configure an index pattern. Let’s get back to that later, after we install all of the other components.

Install Logstash

The Logstash package is available from the same repository as Elasticsearch, and we already installed that public key, so let’s create the Logstash source list:

  1. echo 'deb http://packages.elastic.co/logstash/2.2/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash-2.2.x.list

Update your apt package database:

  1. sudo apt-get update

Install Logstash with this command:

  1. sudo apt-get install logstash

Logstash is installed but it is not configured yet.

Generate SSL Certificates

Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of the ELK Server. Create the directories that will store the certificate and private key with the following commands:

  1. sudo mkdir -p /etc/pki/tls/certs
  2. sudo mkdir /etc/pki/tls/private

Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.

Option 1: IP Address

If you don’t have a DNS setup—that would allow your servers, that you will gather logs from, to resolve the IP address of your ELK Server—you will have to add your ELK Server’s private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:

  1. sudo vi /etc/ssl/openssl.cnf

Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server’s private IP address):

openssl.cnf excerpt (updated)
subjectAltName = IP: ELK_server_private_IP

Save and exit.

Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

  1. cd /etc/pki/tls
  2. sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
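If you want to confirm that the subjectAltName you added made it into the certificate, you can optionally inspect it with OpenSSL (the IP shown should match your ELK Server's private IP address):

  1. sudo openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 "Subject Alternative Name"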

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let’s complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.

Option 2: FQDN (DNS)

If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server’s private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server’s public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your ELK Server.

Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/…), with the following command (substitute in the FQDN of the ELK Server):

  1. cd /etc/pki/tls; sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let’s complete our Logstash configuration.

Configure Logstash

Logstash configuration files are written in a JSON-like format, and they reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

Let’s create a configuration file called 02-beats-input.conf and set up our “filebeat” input:

  1. sudo vi /etc/logstash/conf.d/02-beats-input.conf

Insert the following input configuration:

02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Save and quit. This specifies a beats input that will listen on tcp port 5044, and it will use the SSL certificate and private key that we created earlier.

Now let’s create a configuration file called 10-syslog-filter.conf, where we will add a filter for syslog messages:

  1. sudo vi /etc/logstash/conf.d/10-syslog-filter.conf

Insert the following syslog filter configuration:

10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Save and quit. This filter looks for logs that are labeled as “syslog” type (by Filebeat), and it will try to use grok to parse incoming syslog logs to make them structured and queryable.

Lastly, we will create a configuration file called 30-elasticsearch-output.conf:

  1. sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

Insert the following output configuration:

/etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Save and exit. This output basically configures Logstash to store the beats data in Elasticsearch which is running at localhost:9200, in an index named after the beat used (filebeat, in our case).

If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).

Test your Logstash configuration with this command:

  1. sudo service logstash configtest

It should display Configuration OK if there are no syntax errors. Otherwise, try and read the error output to see what’s wrong with your Logstash configuration.

Restart Logstash, and enable it, to put our configuration changes into effect:

  1. sudo service logstash restart
  2. sudo update-rc.d logstash defaults 96 9
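Once Logstash has restarted, you can optionally confirm that it is listening on port 5044 (the Java process may take a few seconds to start up):

  1. sudo netstat -plnt | grep 5044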

Next, we’ll load the sample Kibana dashboards.

Load Kibana Dashboards

Elastic provides several sample Kibana dashboards and Beats index patterns that can help you get started with Kibana. Although we won’t use the dashboards in this tutorial, we’ll load them anyway so we can use the Filebeat index pattern that it includes.

First, download the sample dashboards archive to your home directory:

  1. cd ~
  2. curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip

Install the unzip package with this command:

  1. sudo apt-get -y install unzip

Next, extract the contents of the archive:

  1. unzip beats-dashboards-*.zip

And load the sample dashboards, visualizations and Beats index patterns into Elasticsearch with these commands:

  1. cd beats-dashboards-*
  2. ./load.sh

These are the index patterns that we just loaded:

  • [packetbeat-]YYYY.MM.DD
  • [topbeat-]YYYY.MM.DD
  • [filebeat-]YYYY.MM.DD
  • [winlogbeat-]YYYY.MM.DD

When we start using Kibana, we will select the Filebeat index pattern as our default.

Load Filebeat Index Template in Elasticsearch

Because we are planning on using Filebeat to ship logs to Elasticsearch, we should load a Filebeat index template. The index template will configure Elasticsearch to analyze incoming Filebeat fields in an intelligent way.

First, download the Filebeat index template to your home directory:

  1. cd ~
  2. curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

Then load the template with this command:

  1. curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json

If the template loaded properly, you should see a message like this:

Output:
{ "acknowledged" : true }

Now that our ELK Server is ready to receive Filebeat data, let’s move onto setting up Filebeat on each client server.

Set Up Filebeat (Add Client Servers)

Do these steps for each Ubuntu or Debian server that you want to send logs to Logstash on your ELK Server. For instructions on installing Filebeat on Red Hat-based Linux distributions (e.g. RHEL, CentOS, etc.), refer to the Set Up Filebeat (Add Client Servers) section of the CentOS variation of this tutorial.

Copy SSL Certificate

On your ELK Server, copy the SSL certificate that we created earlier in this tutorial to your Client Server (substitute the client server’s address, and your own login):

  1. scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_server_private_address:/tmp

After providing your login’s credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the ELK Server.

Now, on your Client Server, copy the ELK Server’s SSL certificate into the appropriate location (/etc/pki/tls/certs):

  1. sudo mkdir -p /etc/pki/tls/certs
  2. sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
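If you want to make sure the certificate was not altered in transit, you can optionally compare its checksum on the ELK Server and on the Client Server; the two values should match:

  1. md5sum /etc/pki/tls/certs/logstash-forwarder.crt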

Now we will install the Filebeat package.

Install Filebeat Package

On the Client Server, create the Beats source list:

  1. echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list

The Beats repository uses the same GPG key as Elasticsearch, which can be installed with this command:

  1. wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Then install the Filebeat package:

  1. sudo apt-get update
  2. sudo apt-get install filebeat

Filebeat is installed but it is not configured yet.

Configure Filebeat

Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat. When you complete the steps, you should have a file that looks something like this.

On the Client Server, open the Filebeat configuration file for editing:

  1. sudo vi /etc/filebeat/filebeat.yml

Note: Filebeat’s configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.

Near the top of the file, you will see the prospectors section, which is where you can define prospectors that specify which log files should be shipped and how they should be handled. Each prospector is indicated by the - character.

We’ll modify the existing prospector to send syslog and auth.log to Logstash. Under paths, comment out the - /var/log/*.log entry. This will prevent Filebeat from sending every .log file in that directory to Logstash. Then add new entries for syslog and auth.log. It should look something like this when you’re done:

filebeat.yml excerpt 1 of 5
...
      paths:
        - /var/log/auth.log
        - /var/log/syslog
#        - /var/log/*.log
...

Then find the line that specifies document_type:, uncomment it and change its value to “syslog”. It should look like this after the modification:

filebeat.yml excerpt 2 of 5
...
      document_type: syslog
...

This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).

If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.
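For example, a hypothetical extra prospector that ships Nginx access logs under its own document type might look something like this (nginx-access is an arbitrary type name, and you would need to add a matching Logstash filter for those logs to be parsed):

filebeat.yml additional prospector (example)
...
    -
      paths:
        - /var/log/nginx/access.log
      input_type: log
      document_type: nginx-access
...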

Next, under the output section, find the line that says elasticsearch:, which indicates the Elasticsearch output section (which we are not going to use). Delete or comment out the entire Elasticsearch output section (up to the line that says #logstash:).

Find the commented out Logstash output section, indicated by the line that says #logstash:, and uncomment it by deleting the preceding #. In this section, uncomment the hosts: ["localhost:5044"] line. Change localhost to the private IP address (or hostname, if you went with that option) of your ELK server:

filebeat.yml excerpt 3 of 5
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["ELK_server_private_IP:5044"]

This configures Filebeat to connect to Logstash on your ELK Server at port 5044 (the port that we specified a Logstash input for earlier).

Directly under the hosts entry, and with the same indentation, add this line:

filebeat.yml excerpt 4 of 5
    bulk_max_size: 1024

Next, find the tls section, and uncomment it. Then uncomment the line that specifies certificate_authorities, and change its value to ["/etc/pki/tls/certs/logstash-forwarder.crt"]. It should look something like this:

filebeat.yml excerpt 5 of 5
...
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

This configures Filebeat to use the SSL certificate that we created on the ELK Server.

Save and quit.

Now restart Filebeat to put our changes into place:

  1. sudo service filebeat restart
  2. sudo update-rc.d filebeat defaults 95 10

Again, if you’re not sure if your Filebeat configuration is correct, compare it against this example Filebeat configuration.
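If you want to verify from the Client Server that a TLS connection to Logstash can be established with the certificate you copied over, you can optionally test the handshake with OpenSSL (substitute your ELK Server's private IP address, and look for Verify return code: 0 (ok) near the end of the output):

  1. echo | openssl s_client -connect ELK_server_private_IP:5044 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt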

Now Filebeat is sending syslog and auth.log to Logstash on your ELK server! Repeat this section for all of the other servers that you wish to gather logs for.

Test Filebeat Installation

If your ELK stack is set up properly, Filebeat (on your client server) should be shipping your logs to Logstash on your ELK server. Logstash should be loading the Filebeat data into Elasticsearch in a date-stamped index, filebeat-YYYY.MM.DD.

On your ELK Server, verify that Elasticsearch is indeed receiving the data by querying for the Filebeat index with this command:

  1. curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

You should see a bunch of output that looks like this:

Sample Output:
... { "_index" : "filebeat-2016.01.29", "_type" : "log", "_id" : "AVKO98yuaHvsHQLa53HE", "_score" : 1.0, "_source":{"message":"Feb 3 14:34:00 rails sshd[963]: Server listening on :: port 22.","@version":"1","@timestamp":"2016-01-29T19:59:09.145Z","beat":{"hostname":"topbeat-u-03","name":"topbeat-u-03"},"count":1,"fields":null,"input_type":"log","offset":70,"source":"/var/log/auth.log","type":"log","host":"topbeat-u-03"} } ...

If your output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you should review your setup for errors. If you received the expected output, continue to the next step.

Connect to Kibana

When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let’s look at Kibana, the web interface that we installed earlier.

In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the “kibanaadmin” credentials, you should see a page prompting you to configure a default index pattern:

Create index

Go ahead and select [filebeat-]YYYY.MM.DD from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the Filebeat index as the default.

Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram with log events, with log messages below:

Discover page

Right now, there won’t be much in there because you are only gathering syslogs from your client servers. Here, you can search and browse through your logs. You can also customize your dashboard.

Try the following things:

  • Search for “root” to see if anyone is trying to log into your servers as root
  • Search for a particular hostname (search for host: "hostname")
  • Change the time frame by selecting an area on the histogram or from the menu above
  • Click on messages below the histogram to see how the data is being filtered

Kibana has many other features, such as graphing and filtering, so feel free to poke around!

Conclusion

Now that your syslogs are centralized via Elasticsearch and Logstash, and you are able to visualize them with Kibana, you should be off to a good start with centralizing all of your important logs. Remember that you can send pretty much any type of log or indexed data to Logstash, but the data becomes even more useful if it is parsed and structured with grok.

To improve your new ELK stack, you should look into gathering and filtering your other logs with Logstash, and creating Kibana dashboards. You may also want to gather system metrics by using Topbeat with your ELK stack. All of these topics are covered in the other tutorials in this series.

Good luck!



Tutorial Series: Centralized Logging with ELK Stack (Elasticsearch, Logstash, and Kibana) On Ubuntu 14.04


This series will teach you how to install Logstash and Kibana on Ubuntu, then how to add more filters to structure your log data. Then it will teach you how to use Kibana.

About the author(s)

Mitchell Anicas

Comments

Use htpasswd to create an admin user, called “kibanaadmin” (you should use another name), that can access the Nagios web interface:

Suddenly wild Nagios appears?-)

Mitchell Anicas (DigitalOcean), March 11, 2015

Haha. Yeah, I was writing a Nagios tutorial at the same time. Fixed!

I did this tutorial, every commands are working fine, excepting that at the end of server(client) part, when i execute the “sudo service logstash-forwarder restart” it tells me that the service is started but when i look at “service logstash-forwarder status” it tells me that logstash-forwarder is not running. Any ideas of how to make it run. All 3 services are started on the logstash server. I can “telnet logstash_server_IP 5000” from the client. I had the same problem with the last version of this tutorial. When I log in into the web interface of kibana I don’t get any log into it. Because of that I can’t configure the configure pattern into kibana.

Mitchell Anicas (DigitalOcean), March 11, 2015

Run this command on the client server:

tail -f /var/log/logstash-forwarder/logstash-forwarder.err

It is likely due to your SSL certificate or a problem with the logstash forwarder configuration file.

the result of the command " tail -f /var/log/logstash-forwarder/logstash-forwarder.err" is:

}

2015/03/11 09:47:17.212855 Failed unmarshalling json: invalid character ‘"’ after object key:value pair 2015/03/11 09:47:17.212871 Could not load config file /etc/logstash-forwarder.conf: invalid character ‘"’ after object key:value pair

This is effectively a problem from the config file! Thank you

I know that is not covered by the tutorial but is it possible to send a directory of log instead of just a single file at a time? and How long the log are kept by the logstash server?

Mitchell Anicas (DigitalOcean), March 11, 2015

You can use wildcards to ship multiple files with Logstash Forwarder, e.g. "/var/log/*.log".

By default, there is no data retention policy (i.e. everything is kept forever, until you delete it). There are a few options for dealing with this:

  • Elasticsearch indices are stored in separate directories by day. See /var/lib/elasticsearch/elasticsearch/nodes/0/indices. You can set up a process to rotate out (delete) the indices that you don’t want anymore.
  • Curator
  • Setting TTLs on your data in Elasticsearch

about the Curator

I open the link and it opened github website. I have no idea what to do with all these files and folder. I mean, how can I get to install Curator or make it work. I am clueless. Help please.

Thanks

Mitchell Anicas (DigitalOcean), October 12, 2015

I believe there are installation instructions in the readme.

I had the same error as jplamoureux (…/etc/logstash-forwarder.conf: invalid character ‘"’ after object key:value pair) and resolved it with the help of http://www.logstashbook.com/TheLogstashBook_sample.pdf#34

Hi Mitchell, Thanks for putting this together. Any idea how you send the output from Logstash to a remote Elasticsearch server? Your example just has localhost, but we’ve got three separate server, one with Logstash, one with Elasticsearch and another with Kibana on it. However I can’t work out how to send the output from Logstash over to the Elasticsearch setup on a different server.

Cheers!

Mitchell Anicas (DigitalOcean), March 11, 2015

Not sure of your exact setup, but basically you will need to configure the following:

  • Elasticsearch (elasticsearch.yml) to listen on an address (network.host: ES_private_IP) that is accessible to Logstash and Kibana
  • Kibana configuration (kibana.yml) elasticsearch_url should be set to Elasticsearch’s listening IP/port (elasticsearch_url: "http://ES_private_IP:9200")
  • Obviously, you need to update your proxy configuration to point to Kibana (unless you’re running Nginx on the same server as Kibana)
  • Logstash needs its output configured to point to the Elasticsearch server (30-lumberjack-output.conf)

Right you are, I’ll have a nosey and let you know how it goes. Thanks for that!

EDIT : Mmmmm, still not sure. If I explain what we’ve got, it might help. We’ve got hundreds of physical servers all monitored by Nagios & Zenoss in a private setup, no public access at all. The logs we’re wanting to monitor/search are normal syslog outputs, so no web traffic and therefore no need for a proxy.

I’m investigating using an ELK stack, and so I’ve got three physical servers, each dedicated to the E, L and K (ignoring virtualisation, failover, etc for the moment as this is a prototype and we’ve got the spare kit). If I change the ‘localhost’ entry in the logstash.conf file to be either an IP address or a DNS entry, then logstash complains and says ‘did I know that --configtest was available’. So really, the question is how I get logstash to log ‘stuff’ to elasicsearch that’s on a totally different server. (It’s fine if I bung everything on one box and stick with localhost, but that defeats the point about separating things to different machines - which will be pertinent further down the line…)

Sorry to be a pain.


    EDIT 2 : (Can’t seem to reply to the bottom bit, only this reply - oh well…)

    SOLVED : I just put the hostname in quotes and ‘stuff’ started appearing on the elasticsearch host from the syslogs perfectly fine - all good now.

    Nice Tutorial !!! Being new comer in software field, find this easy to work with. And my concept got clear… Thank you for proper formatting it. The only issue i was facing is my server’s and client’s IP getting changed after some time interval. Can you please guide me to make it static in proper way. Because my /etc/network/interfaces file has only following lines: auto lo iface lo inet loopback

    Mitchell Anicas (DigitalOcean), March 13, 2015

    In the interfaces file, static IPs are defined like this (these IP addresses/masks are examples):

    auto eth0
    iface eth0 inet static
            address 100.200.100.50
            netmask 255.255.255.0
            gateway 100.200.100.1
            dns-nameservers 8.8.8.8 8.8.4.4
    

    I’ve been looking for a tutorial like this, so I was really glad to find it.

    The problem I have is the forwarder can’t connect. From what I understand I should find that once I’ve completed the steps regarding the files in /etc/logstash/conf.d I should find kibana listening for connection attempts on port 5000. This doesn’t happen and the error log on the forwarder confirms this saying the connection was refused. A netstat -aon confirms nothing listening on 5000 but I do see port 5601 which is the backend port in the kibana yml file.

    Any ideas?

    Mitchell Anicas (DigitalOcean), March 13, 2015

    It sounds like Logstash is misconfigured (and not running).

    Check if it’s running with ps -ef | grep [l]ogstash.

    Also, check the logs (/var/log/logstash).

    Hi Thanks for the reply.

    Yes logstash had stopped somewhere along the line but even once it was started I had the same problem. However after removing the init.d script and readding (i’m not sure why I even done this in the first place) it now is working as expected.

    I hate it when the resolution makes no sense, feels more like luck.

    Mitchell Anicas (DigitalOcean), March 13, 2015

    Maybe the init script didn’t have execute permissions.

    Thanks for the tutorial.

    Unfortunately I have a problem with Kibana. I can’t create any index because the button is gray and says “Unable to fetch mapping. Do you have indices matching the pattern?”.

    Any idea?

    Mitchell Anicas (DigitalOcean), March 14, 2015

    You Logstash Forwarder is likely misconfigured, or there is a problem with the certificate. That is, your logs are not being shipped to Logstash.

    Check the Logstash Forwarder error logs: tail -f /var/log/logstash-forwarder/logstash-forwarder.err

    Press Control-C to quit the tail.


      Thanks to that, I was able to understand that for some reasons, logstash was not started. But now when I try to access Kibana via my browser I get this:

      Kibana: Unable to connect to Elasticsearch

      Error: unknown error at respond (http://myDomainName.com/index.js?b=5930:81568:15) at checkRespForFailure (http://myDomainName.com/index.js?b=5930:81534:7) at http://myDomainName.com/index.js?b=5930:80203:7 at wrappedErrback (http://myDomainName.com/index.js?b=5930:20882:78) at wrappedErrback (http://myDomainName.com/index.js?b=5930:20882:78) at wrappedErrback (http://myDomainName.com/index.js?b=5930:20882:78) at http://myDomainName.com/index.js?b=5930:21015:76 at Scope.$eval (http://myDomainName.com/index.js?b=5930:22002:28) at Scope.$digest (http://myDomainName.com/index.js?b=5930:21814:31) at Scope.$apply (http://myDomainName.com/index.js?b=5930:22106:24)

      What can I do?

      Thanks

      EDIT: ElasticSearch was not running either.

      Thanks for the great tutorial!

      Mitchell Anicas (DigitalOcean), March 16, 2015

      Did you resolve the issue then?

      Yes, I just launched both logstash and elasticsearch and everything works great now :)

      I was stuck here. Logstash was crashing due to the Java/ruby/oracle bug. if on Ubuntu and using Java/oracle instead of openJDK this is the likely fix. https://github.com/elastic/logstash/issues/3127

      I’m getting the same “Kibana: Unable to connect to Elasticsearch” after spinning up the ELK stack from the DO image… Is there anything that needs to be done with that fresh image? If so, that should be added to the MOTD at login, or covered here… Tried restarting each service, and still the same error.

      Mitchell Anicas (DigitalOcean), June 30, 2015

      Hi,

      I am getting an warning

      "

      Index Patterns

      Warning No default index pattern. You must select or create one to continue.
      

      "

      Thanks Pradeep

      Mitchell Anicas (DigitalOcean), March 16, 2015

      There should be a default index pattern logstash-*.

      Hi and thanks for this guide, In the kibana index configuration page i get “unable to fetch mapping. Do you have indices matching this pattern?” , check the screenshot http://postimg.org/image/i5fs9yiz9/

      Since i find configuring logstash forwarder difficult and more difficult for windows, i was wondering if there is a more simple alternative to send the logs to the server? Thanks

      Mitchell Anicas (DigitalOcean), March 16, 2015

      Hi. It looks like your Logstash server isn’t receiving logs from your log shipper. If you’re on Linux, and using Logstash Forwarder, you should run tail -f /var/log/logstash-forwarder/logstash-forwarder.err to find out what is going wrong. (Press Control-C to quit the tail.)

      I haven’t set up a Windows log shipper, but I heard that NXlog is what a lot of people use.

      On the CentOS 7 version of this article, a user, @david76, was able to get his Windows logs into his ELK stack. Perhaps you can ask him how he did it.

      Mitchell Anicas (DigitalOcean), March 16, 2015


        For Windows Servers, I use eventlog-to-syslog to send the login/logouts logs to my ELK server, 5000 port. https://code.google.com/p/eventlog-to-syslog/downloads/list

        In 64 bits Windows Servers, you can get the RDP logins/logouts with: Security Registry + 4624 Event (login) and 4634 Event (logout) + Categoy 10 (keyboard interaction)

        The neccessary line in the config file is:

        XPath:Security:<Select Path="Security">*[EventData[Data[@Name='LogonType']='10'] and (System[(EventID='4624')] or System[(EventID='4634')])]</Select>
        

        During the instalation of the service, you put the IP and Port (5000) of the remote ELK server:

        if exist "c:\windows\SysWOW64" (
                        "c:\windows\system32\xcopy.exe" "64bits\evtsys.exe" "c:\windows\system32" /y
                        "c:\windows\system32\xcopy.exe" evtsys.cfg "c:\windows\system32" /y
                        c:\windows\system32\evtsys.exe -i -h 10.XXX.XXX.XXX -p 5000 -l 0 
                        sc start evtsys
        )
        

        (again, sorry for my English)

        Hi Mitchell Thank you for sharing this with us, it’s really helpful.

        I have a question. Can i create multiple input, filter and output files in conf.d ? I want to “parse” the same datas with different filter files, (filter-1.conf and filter-2.conf) i can’t do it in the same filter config file cause it’s the same datas in input. I can’t differenciate them with a type field or something. But if i create 2 filter files in conf.d, and multiple input files also, how to indicate to Logstash which filter matches with input1 or input2 etc… ? How can i do that please ?

        Thanks. Fares

        Mitchell Anicas (DigitalOcean), March 17, 2015

        I’m not sure of the best way to filter the same logs twice.

        The input/filter/output conf files in /etc/logstash/conf.d get concatenated together, so it doesn’t matter if you separate them.

        One way would be to ship the same logs twice, as different types, and add another filter for the duplicate log.

        Ok thank you.

        I try to ‘run’ multiple nodes under my elasticsearch cluster, two data nodes and one master node. On Windows i create 3 different elasticsearch.yml files and run 3 services, it works.

        I don’t know how to do it on Linux, if you have some it would be so helpful.

        Thanks a lot.

        Hello,

        I’m not sure if that’s the right place to ask but here is what I would like to do. I followed your tutorial and everything works fine. I have a tomcat server running a web app, I’d like my web app to run an “instance” of logstash-forwarder and send my catalina log file to the logstash server. Is that possible? If yes, then how?

        Thanks.

        Mitchell Anicas (DigitalOcean), March 17, 2015

        Follow the steps in the Set Up Logstash Forwarder section, which will get your Tomcat server sending syslogs to the Logstash server. Then you can add the Tomcat log files to the Logstash Forwarder configuration (and ship them as “tomcat” type). Then, on your Logstash server, add a filter for “tomcat” type messages (you will need to write a grok filter that matches your tomcat logs).

        Check out the 2nd tutorial in this series for help with adding grok filters to Logstash. It gives examples of gathering Nginx and Apache logs.


          Mitchell Anicas (DigitalOcean), March 17, 2015

          You need to include the Logstash server’s SSL certificate and IP address. It can be done, but I’m not sure if that’s the best approach.

          Hi Mitchell, I am not able to spot the location where elasticsearch stores parsed logs. As per reading the only pointer i got is that we can specify it in elasticsearch.yml in path.data field, but apart from that what is the default storage location for elasticsearch data

          Mitchell Anicas (DigitalOcean), March 18, 2015

          /var/lib/elasticsearch/elasticsearch/nodes/0/indices

          I followed the previous article which had Kibana 3. I’ve since upgraded Elasticsearch and Logstash, but apparently it’s not so easy upgrading to Kibana 4. Where can I get instructions on how to upgrade?

          Mitchell Anicas (DigitalOcean), March 18, 2015

          I think Kibana 4 is a complete rewrite, so you just have to install it and re-create any visualizations and dashboards that you may have had.


            Having some trouble configuring nginx to use with https and Kibana:

            Using curl: (7) Failed to connect to 127.0.0.1 port 5601: Connection refused

            Also changed in kibana.yml: elasticsearch_url: “https://localhost:9200

            server {
                listen 443 ssl spdy default_server;
                listen [::]:443 ssl spdy ipv6only=on default_server;
            
                root /var/www/html;
                index index.php index.html index.htm;
            
                server_name www.site.com;
            
                ssl_stuff
            
                location / {
                    proxy_pass http://localhost:5601;
                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection 'upgrade';
                    proxy_set_header Host $host;
                    proxy_cache_bypass $http_upgrade;
                    proxy_set_header X-Forwarded-Proto https;
                    proxy_set_header X-Forwarded-Port 443;
                    proxy_set_header X-Secure on; 
                    auth_basic "Restricted Access";
                    auth_basic_user_file /etc/nginx/htpasswd.users;
                }
            
            Mitchell Anicas (DigitalOcean), March 20, 2015

            I haven’t set up https yet, but I’ll update the guide when I do. Sorry.

            Hi Mitchell, I’m having trouble with running Logstash with the exact configuration you wrote here, specifically with input (lumberjack) part. When I start logstash agent using command:

            /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log

            (init.d script didn’t give enough output) I get an output like this:

            The error reported is: 
              setting default path failed: null```
            
            I did generate SSL certs using Option 2. I don't know where to look for more details :/

            Should logstash’s input, filter, output config separate into three conf files? And

            Each kind of log should be in pairs of input and filter config ?

            Mitchell Anicas (DigitalOcean), March 23, 2015

            The conf files in the /etc/logstash/conf.d directory are concatenated together (ordered by filename), so you can put them all in the same file if you want. They are separated in the tutorial for better organization.

            Each kind of log should have a new filter (with a different “type” and pattern), and you need to configure Logstash Forwarder to ship the additional log “files”, as a different “type”. The second tutorial in this series covers how to add Apache/Nginx logs, which should help you figure out adding other types of logs.

            I followed this guide and logstash server and forwarder worked fine on local servers. Then, I installed logstash forward on a remote website. Of course, the server ip is pointed to my public ip address of my outgoing internet (a firewall is in front) I opened and forwarded port 5000 to a local logstatash server.

            2015/03/23 13:57:31.809056 Connecting to [x.x.x.x]:5000 (x.x.x.x) 2015/03/23 13:57:32.281942 Failed to tls handshake with x.x.x.x x509: certificate is valid for 192.168.103.105, not x.x.x.x.53

            My initial setup is, the cert ip is using local ip. How logstash server cert serve both local and remote server ?

            Mitchell Anicas (DigitalOcean), March 23, 2015

            You can generate a new SSL certificate, on your Logstash server, that has two subjectAltName fields with both the private and public IP addresses of your server. If you do this, you’ll have to restart Logstash, replace the public certs on all of your servers (i.e. even on the ones that are currently working), and restart your Logstash Forwarders.

            Hi Guys,

            I have followed the howto and on some of my server it works great though on others I get a “Failed to tls handshake with x.x.x.x x509: certificate signed by unknown authority” error. I used the FQDN method and also added an entry to the /etc/hosts file with the servers IP and FQDN IS there anything else I am missing?

            logstash-forcarder config: http://pastebin.com/NtacPjVc

            Mitchell Anicas (DigitalOcean), March 23, 2015

            Remove the underscore from ssl ca.

            Thanks, It was really Helpful…

            Hi Mitchell,

            youre tutorial is very nice and working nearly perfect in my case. I configured another file for example “example.log”

            Now is my Problem, when this file are updated. logstash-forwarder is always sending the complete file to logstash. Logstash is than sending all data to EL. So I got all informations from the file replicated and than they are doubled in EL. This happen again and again when the file is changed/updated.

            My request: Is there a way for logstash-forwarder send only the updated information (Last new lines from file) to logstash ?

            Or do you have another idea to fix this? thx for help Paris0

            Mitchell Anicas (DigitalOcean), March 23, 2015

            Try checking the Logstash Forwarder error logs to see if there are connection errors between the Logstash server: /var/log/logstash-forwarder/logstash-forwarder.err


              I have checked it, and can’t find any special. It is a restart of the service and 3 imports.

              2015/03/23 20:29:45.682073 	--- options -------
              2015/03/23 20:29:45.682150 	config-arg:          /etc/logstash-forwarder.conf
              2015/03/23 20:29:45.682158 	idle-timeout:        5s
              2015/03/23 20:29:45.682160 	spool-size:          1024
              2015/03/23 20:29:45.682163 	harvester-buff-size: 16384
              2015/03/23 20:29:45.682165 	--- flags ---------
              2015/03/23 20:29:45.682167 	tail (on-rotation):  false
              2015/03/23 20:29:45.682169 	log-to-syslog:          false
              2015/03/23 20:29:45.682171 	quiet:             false
              2015/03/23 20:29:45.682256 {
                "network": {
                  "servers": [ "192.168.222.33:5000" ],
                  "timeout": 15,
                  "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
                },
              
                "files": [
                  {
                    "paths": [
                      "/develop/data/ipinfo.log"
                     ],
                    "fields": { "type": "ipinfo" }
                  }
                ]
              }
              
              
              2015/03/23 20:29:45.683213 Loading registrar data from /var/lib/logstash-forwarder/.logstash-forwarder
              2015/03/23 20:29:45.683290 Waiting for 1 prospectors to initialise
              2015/03/23 20:29:45.683342 Resuming harvester on a previously harvested file: /develop/data/ipinfo.log
              2015/03/23 20:29:45.683348 Registrar will re-save state for /develop/data/ipinfo.log
              2015/03/23 20:29:45.683351 All prospectors initialised with 1 states to persist
              2015/03/23 20:29:45.683373 harvest: "/develop/data/ipinfo.log" position:5983 (offset snapshot:5983)
              2015/03/23 20:29:45.683516 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
              2015/03/23 20:29:45.683710 Connecting to [192.168.222.33]:5000 (192.168.222.33) 
              2015/03/23 20:29:45.745467 Connected to 192.168.222.33
              2015/03/23 20:32:35.694230 Launching harvester on rotated file: /develop/data/ipinfo.log
              2015/03/23 20:32:35.694342 harvest: "/develop/data/ipinfo.log" (offset snapshot:0)
              2015/03/23 20:32:38.215972 Registrar: processing 44 events
              2015/03/23 20:34:15.700768 Launching harvester on rotated file: /develop/data/ipinfo.log
              2015/03/23 20:34:15.700921 harvest: "/develop/data/ipinfo.log" (offset snapshot:0)
              2015/03/23 20:34:18.237222 Registrar: processing 44 events
              2015/03/23 20:45:55.770540 Launching harvester on rotated file: /develop/data/ipinfo.log
              2015/03/23 20:45:55.770682 harvest: "/develop/data/ipinfo.log" (offset snapshot:0)
              2015/03/23 20:45:58.229821 Registrar: processing 45 events
              
              Mitchell Anicas (DigitalOcean), March 23, 2015

              Are you rotating your logs frequently?


                  The log get from time to time some new informations added. I want that only this new lines will be processed and not everytime the whole file, like it is at the moment. Or I understood something wrong? Or explain me how to handle the log file please.

                  The new Informations in the file are lines like this:

                  2014-07-06 13:04:23,789+01:00 123.51.18.70 {"register":"07-013", "tag":["Server6", "Proxy", "Web", "picture"], "comment":"texttext"}
                  

                  and they are added always at the end, after the last entry.

                  Hi, the date filter is not working for me… I tried several things, it never works.

                  Here is what I have:

                  grok { match => [ “message”, ‘%{HTTPDATE:date}’ ] } date { match => [ “date”, “dd/MMM/YYYY:HH:mm:ss Z” ] locale => “en_US.UTF-8” }

                  The log is: 23/Mar/2015:18:26:24 +0000

                  In the JSON, I get 2 @timestamps! The first one is in “fields”, it is a long number corresponding to the time the log was received and the second one is in “_source”, it is my date in a string format like this “2015-03-23T18:26:24.562Z”.

                  I just want to be able to sort my logs according to the date I parsed, not the date the log was received… Anything I do wrong?

                  Thanks!

                  Mitchell Anicas (DigitalOcean), March 23, 2015

                  _source should contain the entire parsed log message.

                  When you configured the Kibana index pattern, did you select @timestamp or received_at as the time-field name?

                  _source does contains the entire parsed log message.

                  I selected @timestamp since that the only option proposed. (I removed the line add_field => [ "received_at", "%{@timestamp}" ] from my grok file)

                  After installation of elasticsearch, the status shows “elastic service is not running”. Followed the instructions mentioned here

                  Using command like below

                  sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch
                  

                  shows the following output.

                  • BindTransportException[Failed to bind to [9300-9400]] ChannelException[Failed to bind to: /192.168.0.1:9400] BindException[Cannot assign requested address]

                  After elasticsearch installation, the service does not start.

                  The following output is shown when I run command like

                  sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch

                  {0.20.6}: Initialization Failed ...
                  - ElasticSearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/usr/share/elasticsearch/data/elasticsearch]]
                      IOException[failed to obtain lock on /usr/share/elasticsearch/data/elasticsearch/nodes/49]
                          IOException[Cannot create directory: /usr/share/elasticsearch/data/elasticsearch/nodes/49]
                  

                  How can we correct such errors?

                  Mitchell Anicas (DigitalOcean), March 24, 2015

                  Verify that the network.host setting (Elasticsearch configuration) is set properly, then try and restart it.

                  I understand the fact that logstash is running as a service. But do I do if I want to create a new config, and load in some csv files?

                  Do I restart logstash or run command line /bin/logstash/logstash…?

                  Mitchell Anicas (DigitalOcean), March 24, 2015

                  You can reload your Logstash configuration by restarting it:

                  sudo service logstash restart
                  

                  Hi Mitchell, I’m having trouble with running Logstash with the exact configuration you wrote here, specifically with input (lumberjack) part. When I start logstash agent using command:

                  /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log

                  (init.d script didn’t give enough output) I get an output like this:

                  The error reported is: 
                    setting default path failed: null```
                  
                  I did generate SSL certs using Option 2. I don't know where to look for more details :/

Should Logstash’s input, filter, and output configs be split into three separate conf files? And

should each kind of log have its own pair of input and filter configs?

                  Mitchell Anicas
                  DigitalOcean Employee
                  DigitalOcean Employee badge
                  March 23, 2015

                  The conf files in the /etc/logstash/conf.d directory are concatenated together (ordered by filename), so you can put them all in the same file if you want. They are separated in the tutorial for better organization.

                  Each kind of log should have a new filter (with a different “type” and pattern), and you need to configure Logstash Forwarder to ship the additional log “files”, as a different “type”. The second tutorial in this series covers how to add Apache/Nginx logs, which should help you figure out adding other types of logs.

I followed this guide, and the logstash server and forwarder worked fine on local servers. Then I installed logstash-forwarder on a remote website. Of course, the server IP points to the public IP address of my outgoing internet connection (a firewall sits in front), and I opened and forwarded port 5000 to the local logstash server.

                  2015/03/23 13:57:31.809056 Connecting to [x.x.x.x]:5000 (x.x.x.x) 2015/03/23 13:57:32.281942 Failed to tls handshake with x.x.x.x x509: certificate is valid for 192.168.103.105, not x.x.x.x.53

My initial setup uses the local IP in the cert. How can the Logstash server cert serve both the local and remote servers?

                  Mitchell Anicas
                  DigitalOcean Employee
                  DigitalOcean Employee badge
                  March 23, 2015

                  You can generate a new SSL certificate, on your Logstash server, that has two subjectAltName fields with both the private and public IP addresses of your server. If you do this, you’ll have to restart Logstash, replace the public certs on all of your servers (i.e. even on the ones that are currently working), and restart your Logstash Forwarders.
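
For the record, a rough sketch of what the [ v3_ca ] section of openssl.cnf could look like with both addresses (the IPs below are placeholders); after editing it, regenerate the certificate with the same openssl req command used earlier in the tutorial and redistribute the new .crt to every forwarder:

    [ v3_ca ]
    # substitute your ELK server's private and public IP addresses
    subjectAltName = IP:10.0.0.5, IP:203.0.113.10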

                  Hi Guys,

I have followed the how-to, and on some of my servers it works great, though on others I get a “Failed to tls handshake with x.x.x.x x509: certificate signed by unknown authority” error. I used the FQDN method and also added an entry to the /etc/hosts file with the server’s IP and FQDN. Is there anything else I am missing?

logstash-forwarder config: http://pastebin.com/NtacPjVc

                  Mitchell Anicas
                  DigitalOcean Employee
                  DigitalOcean Employee badge
                  March 23, 2015

                  Remove the underscore from ssl ca.
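
In other words, in the forwarder config the key is written with a space, as in the working examples elsewhere in these comments:

    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"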

                  Thanks, It was really Helpful…

                  Hi Mitchell,

your tutorial is very nice and works nearly perfectly in my case. I configured another file, for example “example.log”.

Now my problem is: when this file is updated, logstash-forwarder always sends the complete file to Logstash, and Logstash then sends all of the data on to Elasticsearch. So all of the information from the file is replicated, and it ends up duplicated in Elasticsearch. This happens again and again whenever the file is changed/updated.

My question: is there a way for logstash-forwarder to send only the new information (the last new lines of the file) to Logstash?

Or do you have another idea to fix this? Thanks for the help, Paris0

                  Mitchell Anicas
                  DigitalOcean Employee
                  DigitalOcean Employee badge
                  March 23, 2015

Try checking the Logstash Forwarder error logs to see if there are connection errors with the Logstash server: /var/log/logstash-forwarder/logstash-forwarder.err

                  This comment has been deleted

I have checked it, and can’t find anything special. It is a restart of the service and 3 imports.

                    2015/03/23 20:29:45.682073 	--- options -------
                    2015/03/23 20:29:45.682150 	config-arg:          /etc/logstash-forwarder.conf
                    2015/03/23 20:29:45.682158 	idle-timeout:        5s
                    2015/03/23 20:29:45.682160 	spool-size:          1024
                    2015/03/23 20:29:45.682163 	harvester-buff-size: 16384
                    2015/03/23 20:29:45.682165 	--- flags ---------
                    2015/03/23 20:29:45.682167 	tail (on-rotation):  false
                    2015/03/23 20:29:45.682169 	log-to-syslog:          false
                    2015/03/23 20:29:45.682171 	quiet:             false
                    2015/03/23 20:29:45.682256 {
                      "network": {
                        "servers": [ "192.168.222.33:5000" ],
                        "timeout": 15,
                        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
                      },
                    
                      "files": [
                        {
                          "paths": [
                            "/develop/data/ipinfo.log"
                           ],
                          "fields": { "type": "ipinfo" }
                        }
                      ]
                    }
                    
                    
                    2015/03/23 20:29:45.683213 Loading registrar data from /var/lib/logstash-forwarder/.logstash-forwarder
                    2015/03/23 20:29:45.683290 Waiting for 1 prospectors to initialise
                    2015/03/23 20:29:45.683342 Resuming harvester on a previously harvested file: /develop/data/ipinfo.log
                    2015/03/23 20:29:45.683348 Registrar will re-save state for /develop/data/ipinfo.log
                    2015/03/23 20:29:45.683351 All prospectors initialised with 1 states to persist
                    2015/03/23 20:29:45.683373 harvest: "/develop/data/ipinfo.log" position:5983 (offset snapshot:5983)
                    2015/03/23 20:29:45.683516 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
                    2015/03/23 20:29:45.683710 Connecting to [192.168.222.33]:5000 (192.168.222.33) 
                    2015/03/23 20:29:45.745467 Connected to 192.168.222.33
                    2015/03/23 20:32:35.694230 Launching harvester on rotated file: /develop/data/ipinfo.log
                    2015/03/23 20:32:35.694342 harvest: "/develop/data/ipinfo.log" (offset snapshot:0)
                    2015/03/23 20:32:38.215972 Registrar: processing 44 events
                    2015/03/23 20:34:15.700768 Launching harvester on rotated file: /develop/data/ipinfo.log
                    2015/03/23 20:34:15.700921 harvest: "/develop/data/ipinfo.log" (offset snapshot:0)
                    2015/03/23 20:34:18.237222 Registrar: processing 44 events
                    2015/03/23 20:45:55.770540 Launching harvester on rotated file: /develop/data/ipinfo.log
                    2015/03/23 20:45:55.770682 harvest: "/develop/data/ipinfo.log" (offset snapshot:0)
                    2015/03/23 20:45:58.229821 Registrar: processing 45 events
                    
                    Mitchell Anicas
                    DigitalOcean Employee
                    DigitalOcean Employee badge
                    March 23, 2015

                    Are you rotating your logs frequently?

                    This comment has been deleted

                      This comment has been deleted

The log gets new information added to it from time to time. I want only these new lines to be processed, not the whole file every time, as is happening at the moment. Or have I misunderstood something? Please explain how I should handle the log file.

The new entries in the file are lines like this:

                        2014-07-06 13:04:23,789+01:00 123.51.18.70 {"register":"07-013", "tag":["Server6", "Proxy", "Web", "picture"], "comment":"texttext"}
                        

and they are always added at the end, after the last entry.


                        A bit off the beaten path,

                        I am having problems with S3 input since upgrading to the latest versions of ELK.

                        I get the following warning:

                        ubuntu@ip-10-0-0-38:~$ ./test_logstash_config.sh elb_logs.conf
                        You are using a deprecated config setting "credentials" set in s3. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. This only exists to be backwards compatible. This plugin now uses the AwsConfig from PluginMixins If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"credentials", :plugin=><LogStash::Inputs::S3 --->, :level=>:warn}
                        

                        Do you know anything about this? My S3 input was working fine, now I get nothing.

                        Here is my config:

input {
  s3 {
    bucket       => "thoracic.org"
    credentials  => ["dfsdfsdfd","sfdsfdfsd"]
    type         => "elb"
    prefix       => "thoracic_lb_logs/"
    sincedb_path => "/home/ubuntu/logstash_configs/s3/s3.db"
  }
}

filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} %{IP:backend_ip}:%{NUMBER:backend_port:int} %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_tim$
  }
  date {
    match => [ "timestamp", "ISO8601" ]
  }
  geoip {
    source => "client_ip"
    target => "geoip"
    # database => "/opt/geodb/GeoLiteCity.dat"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
  }
}

output {
  elasticsearch {
    host => localhost
    # Setting 'embedded' will run a real elasticsearch server inside logstash.
    # This option below saves you from having to run a separate process just
    # for ElasticSearch, so you can get started quicker!
    embedded => false
    protocol => "transport"
    index    => "elb"
  }
  stdout {
    codec => rubydebug{}
  }
}
                        
                        Mitchell Anicas
                        DigitalOcean Employee
                        DigitalOcean Employee badge
                        March 24, 2015

                        Sorry, I haven’t used the s3 input. Have you asked in the IRC channel?
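
For what it’s worth, the warning is only about the deprecated credentials array. If the plugin follows the usual AWS mixin option names (an assumption here; check the logstash-input-s3 docs for your exact version), the input would look roughly like this:

    input {
      s3 {
        bucket            => "thoracic.org"
        # assumed AwsConfig-style option names; verify against your plugin version
        access_key_id     => "YOUR_ACCESS_KEY"
        secret_access_key => "YOUR_SECRET_KEY"
        type              => "elb"
        prefix            => "thoracic_lb_logs/"
        sincedb_path      => "/home/ubuntu/logstash_configs/s3/s3.db"
      }
    }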

                        Great Tutorial!

                        Hi Mitchell,

                        I followed this tutorial in order to create a custom cookbook to deploy a complete ELK stack with Chef. Everything worked fine until I reached the logstash-forwarder part.

                        In the /var/log/logstash-forwarder/logstash-forwarder.err, I keep getting the following :

                        2015/03/25 20:49:56.614924 Connecting to [xx.xxx.xx.xxx]:5000 (<domain name>)
                        2015/03/25 20:50:11.615530 Failed to tls handshake with xx.xxx.xx.xxx read tcp xx.xxx.xx.xxx:5000: i/o timeout
                        

I tried generating my cert files again, but I kept getting the same error. Maybe I missed something?

                        Thanks a lot for your great work by the way !

                        Mitchell Anicas
                        DigitalOcean Employee
                        DigitalOcean Employee badge
                        March 26, 2015

                        I’ve read that it can be a few different things, ranging from time synchronization to Logstash/Elasticsearch misconfiguration.
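
Two generic checks that may help narrow it down (neither is specific to this setup):

    # compare the clocks on the client and the ELK server -- a large skew can break TLS
    date -u

    # on the ELK server, confirm the lumberjack input is actually listening on port 5000
    sudo netstat -plnt | grep 5000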

                        This comment has been deleted

                          Great work on these tutorials, Mitchell. I really appreciate them!

I used this particular tutorial as my guide to update my existing ELK stack created from your earlier tutorial. Everything seemed to go OK, except logstash started consuming all the CPU on my server. Some spelunking revealed that I needed to disable the logstash-web service, since it was conflicting with nginx on port 80.

                          ELK was still functional while this was going on, but the server was sluggish.

                          I needed to do these steps:

                          echo manual | sudo tee /etc/init/logstash-web.override
                          sudo stop logstash-web
                          

                          The first configured logstash-web for manual startup (for the next time I reboot), while the second just stopped the logstash-web service right away.

Maybe not everyone will be affected by this conflict, but I suspect many will be. You might need to tweak the steps to avoid the port conflict.

                          I had ElasticSearch 1.4.4 running with Kibana3. I now installed Kibana4, following your instructions, but it’s still loading Kibana3. Is there anything from Kibana3 I need to remove or change?

Update: Nginx was still pointing to Kibana3. I changed that, but now I’m getting a 403 Forbidden error. From the nginx error.log, I get: directory index of “/opt/kibana/” is forbidden. What else do I need to change?

                          Ok, got Kibana 4 to load, but only to the “configure an index pattern” page. There I get an error: Unable to fetch mapping. Do you have indices matching the pattern.

I followed your instructions to a tee! I didn’t receive any errors, but when I go to the IP for the Elasticsearch instance, the only thing I get after I enter my creds is this message:

                          “Kibana is loading. Give me a moment here. I’m loading a whole bunch of code. Don’t worry, all this good stuff will be cached up for next time!”

                          I’ve gone over the instructions again and again and everything looks good. I’m clueless on why this is happening.

                          Mitchell Anicas
                          DigitalOcean Employee
                          DigitalOcean Employee badge
                          March 31, 2015

                          How much CPU and RAM does your server have?

I just fixed the issue; it was a misconfiguration in nginx!

While loading the Kibana interface for the first time, it gives a warning “No default index pattern. You must select or create one to continue.” even though the default index pattern logstash-* is present.

                          Hello Mitchell,

Currently, I am doing my diploma thesis on distributed systems and I am studying the ELK stack. It’s my firm belief that your tutorial is the best that exists on the internet. I managed to have the ELK stack up and running within a day on my VMs running on an OpenStack cluster! Thanks a lot, and please keep uploading interesting stuff!

Currently I have 1 VM acting as the Logstash server and 2 clients shipping their logs.

I was wondering if there is a way to expand this infrastructure so that Elasticsearch is distributed across more nodes, in order to evaluate the response time as a function of the number of nodes, the scalability of the system, etc. Does that make sense, or have I misunderstood the whole thing?

                          Chris.

                          Mitchell Anicas
                          DigitalOcean Employee
                          DigitalOcean Employee badge
                          April 2, 2015

                          Thanks Chris!

                          It is possible (and actually recommended) to scale the Elasticsearch portion of this setup to separate nodes. After you set up Elasticsearch in a configuration that you want, you need to configure the Logstash output and Kibana elasticsearch_url with the location of your ES cluster.
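
As a rough sketch of those two settings (host names below are placeholders; Logstash 1.x releases use host => instead of hosts =>, and newer Kibana versions name the setting elasticsearch.url):

    # e.g. 30-elasticsearch-output.conf on the Logstash server
    output {
      elasticsearch {
        hosts => ["es-node-1:9200", "es-node-2:9200"]
      }
    }

In kibana.yml, point elasticsearch_url (or elasticsearch.url, depending on your Kibana version) at one of the cluster nodes, for example http://es-node-1:9200.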

                          Hi! It’s an amazing post!

I’m trying to install logstash-forwarder on the same server as the logstash service, so the server is also the client, and I’m having trouble making the forwarder connect to the logstash service on localhost.

Could it be a proxy problem blocking the connection? What am I missing?

                          My logstash input conf:

                          input {
                            lumberjack {
                              port => 5000
                              type => "logs"
                              ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
                              ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
                            }
                          }
                          

                          My logstash-forwarder conf:

                          "network": {
                          	"servers": [ "localhost:5000" ],
                          	"timeout": 15,
                          	"ssl key": "/etc/pki/tls/private/logstash-forwarder.key", 
                          	"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
                          },
                          "files": [
                              {
                                "paths": [
                                  "/var/log/syslog",
                                  "/var/log/auth.log"
                                ],
                                "fields": { "type": "syslog" }
                              }
                              ..
                          ]
                          

                          My logstash-forwarder log:

                          2015/04/02 07:36:40.550388 Waiting for 1 prospectors to initialise
                          2015/04/02 07:36:40.550591 Launching harvester on new file: /var/log/syslog
                          2015/04/02 07:36:40.550654 Launching harvester on new file: /var/log/auth.log
                          2015/04/02 07:36:40.550963 harvest: "/var/log/syslog" (offset snapshot:0)
                          2015/04/02 07:36:40.551269 harvest: "/var/log/auth.log" (offset snapshot:0)
                          2015/04/02 07:36:40.551416 All prospectors initialised with 0 states to persist
                          2015/04/02 07:36:40.551632 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
                          2015/04/02 07:36:40.557010 Connecting to [127.0.0.1]:5000 (localhost)
                          2015/04/02 07:36:40.564956 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
                          2015/04/02 07:36:41.565537 Connecting to [127.0.0.1]:5000 (localhost)
                          2015/04/02 07:36:41.565746 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
                          2015/04/02 07:36:42.566138 Connecting to [127.0.0.1]:5000 (localhost)
                          2015/04/02 07:36:42.566363 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
                          2015/04/02 07:36:43.567351 Connecting to [127.0.0.1]:5000 (localhost)
                          2015/04/02 07:36:43.567770 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
                          2015/04/02 07:36:44.568582 Connecting to [127.0.0.1]:5000 (localhost)
                          
                          Mitchell Anicas
                          DigitalOcean Employee
                          DigitalOcean Employee badge
                          April 2, 2015

                          If you are trying to gather local logs, you don’t need to use logstash-forwarder. Instead, use an input that specifies a file. e.g.:

                          input {
                            file {
                              path => "/var/log/yourlog.log"
                              type => "logs"
                            }
                          }
                          

                          Thank you @manicas for the answer!

                          Two comments:

1. Where can I find documentation about this range thing (between 00 and 30, out of this range, etc…)? Where is this explained?

2. I’ve tried your answer:

With this configuration in 00-local.conf:

                          input {
                             file {
                               path => "/var/log/auth.log"
                               type => "logs"
                            }
                          }
                          

It’s still not working; I get this in /var/log/logstash/logstash.err:

                          root@elk-loyalguru:/var/log# cat logstash/logstash.err
                          NotImplementedError: stat.st_dev unsupported or native support failed to load
                                 dev_major at org/jruby/RubyFileStat.java:188
                            _discover_file at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:150
                                      each at org/jruby/RubyArray.java:1613
                            _discover_file at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:132
                                     watch at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:38
                                      tail at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:68
                                       run at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:133
                                      each at org/jruby/RubyArray.java:1613
                                       run at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:133
                               inputworker at /opt/logstash/lib/logstash/pipeline.rb:174
                               start_input at /opt/logstash/lib/logstash/pipeline.rb:168
                          
                          Mitchell Anicas
                          DigitalOcean Employee
                          DigitalOcean Employee badge
                          April 2, 2015

                          Hi. Sorry about the confusion.

                          You should actually be able to include the file input into the input section of the 01-logstash-input.conf file.

                          So it might look something like this:

                          input {
                            file {
                              path => "/var/log/syslog"
                              type => "syslog"
                            }
                            lumberjack {
                              port => 5000
                              type => "logs"
                              ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
                              ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
                            }
                          }
                          

                          Thanks again @manicas

I’m still getting the same error in logstash.err.

After pushing the changes you suggested (file input in the same input block as lumberjack, etc…):

                          root@elk-loyalguru:~# vi /etc/logstash/conf.d/01-lumberjack-input.conf
                          

Restarting the logstash service; after a few seconds it stops again:

                          root@elk-loyalguru:~# service logstash restart
                          Killing logstash (pid 11303) with SIGTERM
                          Waiting logstash (pid 11303) to die...
                          Waiting logstash (pid 11303) to die...
                          logstash stopped.
                          logstash started.
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is running
                          root@elk-loyalguru:~# service logstash status
                          logstash is not running
                          

Checking the logs, I get the same error I mentioned above:

                          root@elk-loyalguru:~# tail -f /var/log/logstash
                          logstash/           logstash-forwarder/
                          root@elk-loyalguru:~# tail -f /var/log/logstash/logstash.err
                            _discover_file at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:150
                                      each at org/jruby/RubyArray.java:1613
                            _discover_file at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:132
                                     watch at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:38
                                      tail at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:68
                                       run at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:133
                                      each at org/jruby/RubyArray.java:1613
                                       run at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:133
                               inputworker at /opt/logstash/lib/logstash/pipeline.rb:174
                               start_input at /opt/logstash/lib/logstash/pipeline.rb:168
                          

                          I really don’t know how to proceed :(


I tried to incorporate Shield via this tutorial http://www.elastic.co/guide/en/shield/current/_shield_with_kibana_4.html but I get a 502 from nginx because Kibana 4 doesn’t seem to be running (it ran perfectly without Shield). I wonder where the Kibana 4 logs are, so I can check what’s going on?

Hello, I want to use NetFlow with Logstash.

                          I have created a file called “11-netflow.conf” under /etc/logstash/conf.d with the content:

                          input {
                              udp {
                                port => 9996
                                codec => netflow {}
                              }
                            }
                          output {
                            elasticsearch { host => localhost }
                          }
                          

But when I then restart Logstash, I get this error message in /var/log/logstash/logstash.log:

                          {:timestamp=>"2015-04-08T10:17:22.838000+0200", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
                          {:timestamp=>"2015-04-08T10:18:55.215000+0200", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
                          

How can I fix this problem and receive the NetFlow data in Logstash/Elasticsearch/Kibana?

                          Best Regards

                          Daniel
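
Since every file in /etc/logstash/conf.d is loaded together, it is worth confirming both that the new file parses and that the netflow codec is installed. A quick sketch (paths assume the package install used in this guide, and bin/plugin list assumes Logstash 1.5 or newer):

    # syntax-check the new config file
    sudo /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/11-netflow.conf

    # confirm the netflow codec plugin is present
    /opt/logstash/bin/plugin list | grep netflow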

                          To collect the logs from the ELK-server itself, would I need to install a logstash-forwarder and point it to localhost or is there a shortcut?

It’s been suggested to me to insert Redis in front of Logstash. I’m wondering if you could comment on that, and whether you have a tutorial on setting that up using SSL?
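
For context, Redis in that role is just a buffer: shippers push events into a Redis list and the indexer pulls them back out. A rough sketch using Logstash’s redis output and input (the host name is a placeholder; note that Redis itself does not speak TLS natively, so “using SSL” usually means wrapping the connection with something like stunnel):

    # On each shipper: push events into a Redis list named "logstash"
    output {
      redis {
        host      => "redis.example.com"
        data_type => "list"
        key       => "logstash"
      }
    }

    # On the central indexer: read events back off the same list
    input {
      redis {
        host      => "redis.example.com"
        data_type => "list"
        key       => "logstash"
      }
    }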

@Mitchell, I don’t use lumberjack, so I don’t need a file called 01-lumberjack-input.conf. I only want to use syslog and NetFlow, both directly with Logstash and without any other service like lumberjack.

I’m a newbie with ELK, so please bear with me if I haven’t understood your answer correctly.

                          Best Regards

                          Dany

Hi Mitchell, this is a great post, and the way you described everything definitely makes it a lot easier to understand.

One of the things I tried doing with ELK is more for incident response purposes, for example using it for timelining an evidence file (EnCase E01). There are some steps posted at the link below regarding Elasticsearch v3; do you know of anyone who has used v4 for timelining an EnCase evidence file?

                          http://blog.kiddaland.net/2013/11/visualize-output.html

                          AB

                          Hello everyone,

I have some trouble starting the Logstash server from the init script. When I start it with the command service logstash start, this is the output in /var/log/logstash/logstash.log:

                          {:timestamp=>"2015-04-13T13:34:13.431000+0200", :message=>"syslog listener died", :protocol=>:tcp, :address=>"0.0.0.0:514", :exception=>#<SocketError: initialize: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyTCPServer.java:126:in `initialize'", "org/jruby/RubyIO.java:851:in `new'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-0.1.3/lib/logstash/inputs/syslog.rb:152:in `tcp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-0.1.3/lib/logstash/inputs/syslog.rb:117:in `server'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-0.1.3/lib/logstash/inputs/syslog.rb:101:in `run'"], :level=>:warn}
                          

Logstash config (truncated):

                          input {
                            syslog {
                              type => "syslog"
                              port => 514
                            }
                          }
                          
                          filter {
                            if [type] == "syslog" {
                                [grok Filter cutted out]
                            }
                          }
                          
                          output {
                            elasticsearch { host => "127.0.0.1" }
                            stdout { codec => rubydebug }
                          }
                          

The truncated logstash init script:

                          pidfile="/var/run/$name.pid"
                          
                          LS_USER=logstash
                          LS_GROUP=logstash
                          LS_HOME=/var/lib/logstash
                          LS_HEAP_SIZE="500m"
                          LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
                          LS_LOG_DIR=/var/log/logstash
                          LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
                          LS_CONF_DIR=/etc/logstash/conf.d
                          LS_OPEN_FILES=16384
                          LS_NICE=19
                          LS_OPTS=""
                          
                          [ -r /etc/default/$name ] && . /etc/default/$name
                          [ -r /etc/sysconfig/$name ] && . /etc/sysconfig/$name
                          
                          program=/opt/logstash/bin/logstash
                          args="agent -f ${LS_CONF_DIR} -l ${LS_LOG_FILE} ${LS_OPTS}"
                          
                          start() {
                          
                          
                            JAVA_OPTS=${LS_JAVA_OPTS}
                            HOME=${LS_HOME}
                            export PATH HOME JAVA_OPTS LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING
                          
                            # set ulimit as (root, presumably) first, before we drop privileges
                            ulimit -n ${LS_OPEN_FILES}
                          
                            # Run the program!
                            nice -n ${LS_NICE} chroot --userspec $LS_USER:$LS_GROUP / sh -c "
                              cd $LS_HOME
                              ulimit -n ${LS_OPEN_FILES}
                              exec \"$program\" $args
                            " > "${LS_LOG_DIR}/$name.stdout" 2> "${LS_LOG_DIR}/$name.err" &
                          
                          
                            # Generate the pidfile from here. If we instead made the forked process
                            # generate it there will be a race condition between the pidfile writing
                            # and a process possibly asking for status.
                            echo $! > $pidfile
                          
                            echo "$name started."
                            #echo "$args"
                            return 0
                          }
                          

If I run /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log directly in the shell, logstash starts without any problem. I’m on Debian Wheezy, and there is nothing else listening on this port (checked with netstat -u).

I hope someone can help me, and sorry for my (probably) bad English.

When installing logstash using this method, you cannot run the logstash command. Running /opt/logstash/bin/logstash doesn’t work either…

Hello Mitchell. I used the guide to install Logstash, Elasticsearch, and Kibana. The only change I made was to use a local repository: echo "deb file:${ELK_INSTALL_BASE_DIR}/elasticsearch/repo ./" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list

                          I downloaded the deb file and created the repository using the following command: dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

The problem is that the elasticsearch installation succeeds, but I cannot find the startup service: /etc/init.d/elasticsearch

                          Any advice?

Hi, thanks for the nice tutorial. I was able to follow every step without errors. It seemed to work until the last step, when I tried to configure the index pattern: it shows “Unable to fetch mapping. Do you have indices matching the pattern?” I checked the log file on the client:

    root@hkghost-ipa-client-1b:/var/log/logstash-forwarder# tail -f logstash-forwarder.err
    2015/04/27 21:18:26.574689 Failure connecting to 172.22.22.22: dial tcp 172.22.22.22:5000: i/o timeout
    2015/04/27 21:18:27.575001 Connecting to [172.22.22.22]:5000 (172.22.22.22)
    2015/04/27 21:18:42.575410 Failure connecting to 172.22.22.22: dial tcp 172.22.22.22:5000: i/o timeout
    2015/04/27 21:18:43.575785 Connecting to [172.22.22.22]:5000 (172.22.22.22)

How can I solve this problem?
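
The “i/o timeout” on the client suggests the traffic isn’t reaching the Logstash input at all; a couple of generic checks (the IP is taken from the log above):

    # from the client: is port 5000 on the ELK server reachable at all?
    nc -vz 172.22.22.22 5000

    # on the ELK server: is anything filtering or blocking the port?
    sudo iptables -L -n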

                          Hello Mitchell,

Thanks for the award-winning tutorial. I have a slightly different issue.

I intend to set up 3 separate instances of Elasticsearch, Kibana, and Logstash. I was able to set up Elasticsearch successfully (elastic1:9201, elastic2:9202, and elastic3:9203) and it is working. However, setting up Kibana 4 (kibana4_1:5601, kibana4_1:5602, and kibana4_1:5603) is an issue. Kindly assist with some detailed tips on how to go about it.

In the past, I was able to do this successfully with Kibana 3 by just dropping it into an Apache server. However, the new Kibana 4 comes with its own web server.

                          Waiting for your response
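
As a rough sketch for one of the extra instances (values are placeholders; Kibana 4.0/4.1 use the flat key names shown below, while later 4.x releases rename them to server.port, server.host, and elasticsearch.url):

    # kibana.yml for the second Kibana instance
    port: 5602
    host: "0.0.0.0"
    elasticsearch_url: "http://elastic2:9202"

Each instance gets its own copy of the Kibana directory and its own kibana.yml, and each port can be proxied through Nginx the same way the tutorial proxies the first instance.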

Hi, I followed the guide step by step, but when I log in to Kibana with a browser and try to create an index, it says “unable to fetch mapping”.

On another server I can’t pick the @timestamp option; nothing appears for me.

                          Can someone help me please?

Hi, I am deploying the ELK stack using SaltStack. The platform I am using for this is OpenStack. When I access the Kibana home page, there is no index pattern available, and in the logs the error is “unable to resolve host name ubuntu”. Please let me know whether I should change localhost to ubuntu in the configuration.

                          Hi Mitchell,

Thanks for your tutorial. If we want to send logs from network devices (e.g. a router or switch), could you please share how to configure that?

                          Thank you…

Thanks @manicas for taking the time to help!

I’m unable to create any index because it says “Unable to fetch mapping. Do you have indices matching the pattern?” I checked the logstash-forwarder logs and found an issue with the certificate, so I recreated the certificates, copied them to the logstash-forwarder machines, edited the config file with [v3_ca] subjectAltName = IP: <serverip>, and restarted all the machines.

Now there is no issue in the logstash-forwarder logs, but I’m unable to log in to Kibana as it says 502 Bad Gateway.

                          Please help.

Hi, I got an error when trying to restart nginx. I followed the same steps listed above, and I have checked elasticsearch.yml and kibana.yml; everything is working fine. nginx -t also shows the configuration as OK, but I’m still unable to start nginx. The reload option works fine, but not start or restart. Please help me out with this.

Great guide. The strange thing is that under indices, the “time-field name” does not show up for me in Kibana. The Kibana version is 4.0.1.

                          thanks, flanny

Apparently there is a bug with Ubuntu 14.04 and Java; using OpenJDK is OK. The fix is at this link: https://github.com/elastic/logstash/issues/3127

I am having an issue with an nginx 502 Bad Gateway when I access http://server_internal_FQDN.

Hi Mitchell, I use Amazon EC2 instances for my project. I followed this tutorial, but I have some errors. Firstly, when trying the “scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp” command, an error occurs about permission denied and the connection is lost. I tried the Amazon EC2 public DNS and the private IP for server_private_IP. Secondly, I tried to use Kibana, but the index pattern page has the warning “No default index pattern. You must select or create one to continue.” When I enter “logstash-*” as the index name, “Pattern matches 0% of existing indices and aliases” and “Unable to fetch mapping. Do you have indices matching the pattern?” are shown. Do you have any idea?
