Note: This tutorial is for an older version of the ELK stack, which is not compatible with the latest version. The latest version of this tutorial is available at How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on Ubuntu 14.04.
In this tutorial, we will go over the installation of Logstash 1.4.2 and Kibana 3, and how to configure them to gather and visualize the syslogs of our systems in a centralized location. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana 3 is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch. Elasticsearch, Logstash, and Kibana, when used together, are known as an ELK stack.
Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.
It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.
The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.
Our Logstash / Kibana setup has four main components: Logstash (the server component that processes incoming logs), Elasticsearch (which stores all of the logs), Kibana (the web interface for searching and visualizing logs), and Logstash Forwarder (installed on the servers whose logs we want to ship to Logstash).
We will install the first three components on a single server, which we will refer to as our Logstash Server. The Logstash Forwarder will be installed on all of the servers that we want to gather logs for, which we will refer to collectively as our Servers.
To complete this tutorial, you will require root access to an Ubuntu 14.04 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with Ubuntu 14.04.
The amount of CPU, RAM, and storage that your Logstash Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our Logstash Server:
In addition to your Logstash Server, you will want to have a few other servers that you will gather logs from.
Let’s get started on setting up our Logstash Server!
Elasticsearch and Logstash require Java 7, so we will install that now. We will install Oracle Java 7 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route.
Add the Oracle Java PPA to apt:
sudo add-apt-repository -y ppa:webupd8team/java
Update your apt package database:
sudo apt-get update
Install the latest stable version of Oracle Java 7 with this command (and accept the license agreement that pops up):
sudo apt-get -y install oracle-java7-installer
Now that Java 7 is installed, let’s install Elasticsearch.
Note: Logstash 1.4.2 recommends Elasticsearch 1.1.1.
Run the following command to import the Elasticsearch public GPG key into apt:
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
Create the Elasticsearch source list:
echo 'deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list
Update your apt package database:
sudo apt-get update
Install Elasticsearch with this command:
sudo apt-get -y install elasticsearch=1.1.1
Elasticsearch is now installed. Let’s edit the configuration:
sudo vi /etc/elasticsearch/elasticsearch.yml
Add the following line somewhere in the file, to disable dynamic scripts:
script.disable_dynamic: true
You will also want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can’t read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host and uncomment it so it looks like this:
network.host: localhost
Save and exit elasticsearch.yml.
Now start Elasticsearch:
sudo service elasticsearch restart
Then run the following command to start Elasticsearch on boot up:
sudo update-rc.d elasticsearch defaults 95 10
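At this point you can optionally confirm that Elasticsearch is responding by querying it from the server itself (this check assumes curl is installed; it should return a short block of JSON that includes the version number):
curl http://localhost:9200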
Now that Elasticsearch is up and running, let’s install Kibana.
Note: Logstash 1.4.2 recommends Kibana 3.0.1
Download Kibana to your home directory with the following command:
cd ~; wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz
Extract Kibana archive with tar:
tar xvf kibana-3.0.1.tar.gz
Open the Kibana configuration file for editing:
sudo vi ~/kibana-3.0.1/config.js
In the Kibana configuration file, find the line that specifies the elasticsearch server URL, and replace the port number (9200 by default) with 80:
elasticsearch: "http://"+window.location.hostname+":80",
This is necessary because we are planning on accessing Kibana on port 80 (i.e. http://logstash_server_public_ip/).
We will be using Nginx to serve our Kibana installation, so let’s move the files into an appropriate location. Create a directory with the following command:
sudo mkdir -p /var/www/kibana3
Now copy the Kibana files into your newly-created directory:
sudo cp -R ~/kibana-3.0.1/* /var/www/kibana3/
Before we can use the Kibana web interface, we have to install Nginx. Let’s do that now.
Use apt to install Nginx:
sudo apt-get install nginx
Because of the way that Kibana interfaces with Elasticsearch (the user’s browser needs to be able to query Elasticsearch directly), we need to configure Nginx to proxy port 80 requests to port 9200 (the port that Elasticsearch listens on by default). Luckily, Kibana provides a sample Nginx configuration that sets most of this up.
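To illustrate the idea, the proxying boils down to a server block roughly like the following. This is a simplified sketch, not the actual sample file; the configuration you will download below is more complete and restricts which Elasticsearch endpoints are exposed.
server {
  listen 80;

  # serve Kibana's static files
  root /var/www/kibana3;

  # forward Elasticsearch API calls to the local Elasticsearch instance
  location ~ ^/(_nodes|_aliases|.*/_search|.*/_mapping) {
    proxy_pass http://127.0.0.1:9200;
  }
}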
Download the sample Nginx configuration from Kibana’s github repository to your home directory:
cd ~; wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf
Open the sample configuration file for editing:
vi nginx.conf
Find and change the values of server_name to your FQDN (or localhost if you aren’t using a domain name) and root to where we installed Kibana, so they look like the following entries:
server_name FQDN;
root /var/www/kibana3;
Save and exit. Now copy it over your Nginx default server block with the following command:
sudo cp nginx.conf /etc/nginx/sites-available/default
Now we will install apache2-utils so we can use htpasswd to generate a username and password pair:
sudo apt-get install apache2-utils
Then generate a login that will be used in Kibana to save and share dashboards (substitute your own username):
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user
Then enter a password and verify it. The htpasswd file just created is referenced in the Nginx configuration that you recently edited; note that the kibana.myhost.org part of the filename must match the name used in that configuration, or saving dashboards will not work.
Now restart Nginx to put our changes into effect:
sudo service nginx restart
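If the restart fails, you can check the Nginx configuration for syntax errors before digging further:
sudo nginx -t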
Kibana is now accessible via your FQDN or the public IP address of your Logstash Server, i.e. http://logstash_server_public_ip/. If you go there in a web browser, you should see a Kibana welcome page that allows you to view dashboards, but there will be no logs to view because Logstash has not been set up yet. Let’s do that now.
The Logstash package is available from the same repository as Elasticsearch, and we already installed that public key, so let’s create the Logstash source list:
echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
Update your apt package database:
sudo apt-get update
Install Logstash with this command:
sudo apt-get install logstash=1.4.2-1-2c0f5a1
Logstash is installed but it is not configured yet.
Since we are going to use Logstash Forwarder to ship logs from our Servers to our Logstash Server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash Forwarder to verify the identity of the Logstash Server. Create the directories that will store the certificate and private key with the following commands:
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the Logstash Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
If you don’t have a DNS setup that would allow the servers you will gather logs from to resolve the IP address of your Logstash Server, you will have to add your Logstash Server’s private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:
sudo vi /etc/ssl/openssl.cnf
Find the [ v3_ca ] section in the file, and add this line under it (substituting in the Logstash Server’s private IP address):
subjectAltName = IP: logstash_server_private_ip
Save and exit.
Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:
cd /etc/pki/tls
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
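If you want to verify that the private IP address made it into the certificate’s SAN field, you can inspect the generated certificate (a quick optional check, assuming you are still in /etc/pki/tls):
openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'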
The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let’s complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.
If you have a DNS setup with your private networking, you should create an A record that contains the Logstash Server’s private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server’s public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your Logstash Server.
Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/…), with the following command (substitute in the FQDN of the Logstash Server):
cd /etc/pki/tls; sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let’s complete our Logstash configuration.
Logstash configuration files are in the JSON format, and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let’s create a configuration file called 01-lumberjack-input.conf and set up our “lumberjack” input (the protocol that Logstash Forwarder uses):
sudo vi /etc/logstash/conf.d/01-lumberjack-input.conf
Insert the following input configuration:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Save and quit. This specifies a lumberjack input that will listen on TCP port 5000, and it will use the SSL certificate and private key that we created earlier.
Now let’s create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:
sudo vi /etc/logstash/conf.d/10-syslog.conf
Insert the following syslog filter configuration:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Save and quit. This filter looks for logs that are labeled as “syslog” type (by a Logstash Forwarder), and it will try to use grok to parse incoming syslog logs to make them structured and queryable.
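For example, given a raw syslog line like this made-up entry:
Feb  3 12:34:56 webserver1 sshd[2122]: Failed password for invalid user admin from 203.0.113.5
the grok pattern above would produce fields roughly like:
syslog_timestamp: Feb  3 12:34:56
syslog_hostname: webserver1
syslog_program: sshd
syslog_pid: 2122
syslog_message: Failed password for invalid user admin from 203.0.113.5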
Lastly, we will create a configuration file called 30-lumberjack-output.conf:
sudo vi /etc/logstash/conf.d/30-lumberjack-output.conf
Insert the following output configuration:
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Save and exit. This output configures Logstash to store the logs in Elasticsearch; the stdout line also echoes each event, which is handy for debugging but can fill /var/log/upstart/logstash.log on a busy server.
With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).
If you want to add filters for other applications that use the Logstash Forwarder input, be sure to name the files so they sort between the input and the output configuration (i.e. between 01 and 30).
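Before restarting, you can optionally ask Logstash to validate the combined configuration; this assumes the default package install location and that your build supports the agent --configtest flag (the 1.4 series does). It will report any syntax errors in the files:
sudo /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/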
Restart Logstash to put our configuration changes into effect:
sudo service logstash restart
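Once Logstash is back up, you can verify that the lumberjack input is listening on port 5000 (it may take several seconds for the Java process to start and bind the port):
sudo netstat -tlnp | grep 5000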
Now that our Logstash Server is ready, let’s move on to setting up Logstash Forwarder.
Note: Do these steps for each server that you want to send logs to your Logstash Server. For instructions on installing Logstash Forwarder on Red Hat-based Linux distributions (e.g. RHEL, CentOS, etc.), refer to the Build and Package Logstash Forwarder section of the CentOS variation of this tutorial.
On Logstash Server, copy the SSL certificate to Server (substitute with your own login):
scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp
On Server, create the Logstash Forwarder source list:
echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list
It also uses the same GPG key as Elasticsearch, which can be installed with this command:
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
Then install the Logstash Forwarder package:
sudo apt-get update
sudo apt-get install logstash-forwarder
Note: If you are using a 32-bit release of Ubuntu, and are getting an “Unable to locate package logstash-forwarder” error, you will need to install Logstash Forwarder manually:
wget https://assets.digitalocean.com/articles/logstash/logstash-forwarder_0.3.1_i386.deb
sudo dpkg -i logstash-forwarder_0.3.1_i386.deb
Next, you will want to install the Logstash Forwarder init script, so it starts on bootup:
cd /etc/init.d/; sudo wget https://raw.githubusercontent.com/elasticsearch/logstash-forwarder/a73e1cb7e43c6de97050912b5bb35910c0f8d0da/logstash-forwarder.init -O logstash-forwarder
sudo chmod +x logstash-forwarder
sudo update-rc.d logstash-forwarder defaults
Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):
sudo mkdir -p /etc/pki/tls/certs
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
On Server, create and edit the Logstash Forwarder configuration file, which is in JSON format:
sudo vi /etc/logstash-forwarder
Now add the following lines into the file, substituting in your Logstash Server’s private IP address for logstash_server_private_IP:
{
  "network": {
    "servers": [ "logstash_server_private_IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}
Save and quit. This configures Logstash Forwarder to connect to your Logstash Server on port 5000 (the port that we specified an input for earlier), and to use the SSL certificate that we created earlier. The paths section specifies which log files to send (here we specify syslog and auth.log), and the fields section specifies that these logs are of type "syslog" (which is the type that our filter is looking for).
Note that this is where you would add more files/types to configure Logstash Forwarder to ship other log files to Logstash on port 5000; see the example below.
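For example, to also ship Apache logs, you could append a second entry to the files array like the one below. This is an illustrative addition; the "apache" type name is arbitrary and would need a matching filter on the Logstash Server:
{
  "paths": [ "/var/log/apache2/*.log" ],
  "fields": { "type": "apache" }
}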
Now restart Logstash Forwarder to put our changes into place:
sudo service logstash-forwarder restart
Now Logstash Forwarder is sending syslog and auth.log to your Logstash Server! Repeat this process for all of the other servers that you wish to gather logs for.
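If events do not show up in Kibana, check the forwarder’s own output on the Server, which is written to the system log; TLS and connection errors will appear there:
tail -f /var/log/syslog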
When you are finished setting up Logstash Forwarder on all of the servers that you want to gather logs for, let’s look at Kibana, the web interface that we installed earlier.
In a web browser, go to the FQDN or public IP address of your Logstash Server. You should see a Kibana welcome page.
Click on Logstash Dashboard to go to the premade dashboard. You should see a histogram with log events, with log messages below (if you don’t see any events or messages, one of your four Logstash components is not configured properly).
Here, you can search and browse through your logs. You can also customize your dashboard.
Try searching and filtering your logs, and customizing your dashboard. Kibana has many other features, such as graphing and filtering, so feel free to poke around!
Now that your syslogs are centralized via Logstash, and you are able to visualize them with Kibana, you should be off to a good start with centralizing all of your important logs. Remember that you can send pretty much any type of log to Logstash, but the data becomes even more useful if it is parsed and structured with grok.
Note that your Kibana dashboard is accessible to anyone who can access your server, so you will want to secure it with something like htaccess.
Great tutorial, but I wanted configuration for apache2 ssl_access logs and error logs, as well as auth log and MySQL slow query and general logs. Can you please add that also?
@puneetbrar: You need to add them to the files section in /etc/logstash-forwarder. It would look like:
"files": [
  {
    "paths": [
      "/var/log/syslog",
      "/var/log/auth.log"
    ],
    "fields": { "type": "syslog" }
  },
  {
    "paths": [
      "/var/log/apache2/*.log"
    ],
    "fields": { "type": "apache" }
  }
]
But if i have different servers how will i differentiate between the logs on kibana
@puneetbrar: Check out the section where we set up a filter in the tutorial. You’ll want to add something along the lines of:
add_field => [ "received_from", "%{host}" ]
Great tutorial, until ‘sudo gem install fpm’ failed to install. Not sure where I went wrong.
Building native extensions. This could take a while… ERROR: Error installing fpm: ERROR: Failed to build gem native extension.
Gem files will remain installed in /var/lib/gems/1.9.1/gems/json-1.8.1 for inspection.
Could you also advise on the corresponding /etc/logstash/conf.d/10-syslog.conf and /etc/logstash-forwarder additions to monitor the nginx web server access logs? That would be really appreciated.
@crawfishmedia: Is that the entire output? Be sure to install the “ruby-dev” package (sudo apt-get install ruby-dev), not just “ruby”.
I’m sorry FYI https://github.com/elasticsearch/logstash/issues/2292
Note: Logstash 1.4.2 recommends Elasticsearch 1.1.1.
@samuel.leach: To add Nginx logs to Logstash, do the following.
On your Nginx server, edit your logstash-forwarder config file and send the nginx access.log by modifying the “files” section so it looks like this:
"files": [
  {
    "paths": [
      "/var/log/syslog",
      "/var/log/auth.log"
    ],
    "fields": { "type": "syslog" }
  },
  {
    "paths": [
      "/var/log/nginx/access.log"
    ],
    "fields": { "type": "nginx" }
  }
]
Then restart the logstash forwarder. Then on your Logstash server, open a file called “nginx” in /opt/logstash/patterns:
sudo vi /opt/logstash/patterns/nginx
Then insert the following:
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent}
Save and quit. Then change the ownership of the file to logstash:
chown logstash: /opt/logstash/patterns/nginx
Then edit your /etc/logstash/conf.d/10-syslog.conf file, and add the following filter under your first filter (syslog):
filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}
Then restart Logstash.
Great write up, but unless I missed something, you should maybe add a line asking the user to change into their home directory before doing the git pull for the logstash forwarder, otherwise the contents of the repo are installed in /etc/pki/tls (not ideal)
@manicas Thank you!
@wh: Thanks! I edited it.
Hi astarr, can you please let me know: in the host, do I need to specify the IP address, something like add_field => [ "received_from", "%{192.168.x.x}" ]?
@manicas Sorry for being a pita, but I installed everything line by line as per the tutorial. I created a script with all these commands so I wouldn’t miss anything. I have performed this tutorial several times on fresh Ubuntu droplets and cannot finish the install. When I run “sudo apt-get install ruby-dev” I get this:
Reading package lists… Done
Building dependency tree
Reading state information… Done
ruby is already the newest version.
ruby-dev is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
@crawfishmedia: The only way I have been able to reproduce a similar error is if I do not install ruby-dev (and the error has more details than the one you posted earlier). Did you update apt?
golang is now in the trusty universe repo. You no longer need to add the golang ppa.
Hello, as far as I reach the point to verify nginx+kibana is working well and try to access http://some.ip.over.there, I get an almost blank page with the title “{{dashboard.current.title}}”.
Could you please point me in the right direction? tnx!
@cryo_hybrid: most likely an error in your Kibana config.js file
I’ve installed logstash-forwarder on two servers, but kibana/elasticsearch are only displaying logs from the first server I configured. I’ve verified the second server is shipping logs to the correct IP by tailing /var/log/syslog, and it shows “Registrar received n events”. Any thoughts?
@jeremy.kendall: did you install the SSL certs on the second server?
I can’t seem to get Logstash to pick up my logs with this setup. I’ve changed “/etc/logstash/conf.d/10-syslog.conf” to be the snippet below, but nothing shows up in Kibana:
However, this runs fine and picks up my logs.
I’ve groked the /etc/init.d/logstash and my conf location looks good, any ideas on what could be wrong?
@555john: How are you starting Logstash? I would guess your script is not reading the 10-syslog.conf file (which you can rename, by the way).
Also, you can run the following in a separate window to see if there are configuration errors when Logstash starts/runs:
Looks like the rpm has init scripts incompatible with CentOS.
Hi All
This post is working great, but can you please tell me the procedure for building an ELK stack server on CentOS 6? I have tried the same, but Logstash Forwarder is not passing the logs. Can you provide a full guide like the one you made for Ubuntu?
Thanks in Advance
@frutik / @gaurav.chouhan.in:
Here is my ELK Stack On CentOS 7 guide!
Hi, please update the article to take care of the Elasticsearch vulnerabilities: http://bouk.co/blog/elasticsearch-rce/ We followed the tutorial and it worked well, but we got backdoored :( and had to bring down the droplet. You may need to update the CentOS article as well.
@ravi: I just updated it. Sorry about that.
Here are the steps that were added:
Edit your Elasticsearch configuration:
sudo vi /etc/elasticsearch/elasticsearch.yml
Add the following line somewhere in the file, to disable dynamic scripts:
script.disable_dynamic: true
Save and exit. Now restart Elasticsearch to put the changes into effect:
sudo service elasticsearch restart
Did not work for me - IP SANs errors
https://github.com/elasticsearch/logstash-forwarder/issues/221
@Mitchell I found that I had to also modify the elasticsearch.yml to have:
network.publish_host: localhost
@stuart - I’m having the same issue and have had no luck trying the solutions on that page (or I’m doing it wrong).
The following helps you generate certificates:
I found this very useful except the blob location changed.
@stuart @zsprawl: Re: IP SANs errors.
I’ve updated the Set Up Logstash Forwarder section to eliminate the need to build the Logstash Forwarder. Please download the package as noted in the updated instructions and try that.
I got that error when trying to access Kibana. Need advice.
I’m experiencing the same issue. 9200 is closed and Elasticsearch is too old. Can anyone provide advice?
thanks!
@jellf.nainggolan: @miguel567: Try updating Elasticsearch and Logstash to the latest versions:
Does that fix it?
Hi; first, thanks for the tutorial. I faced many problems, but the important one is this: after I finished installing Kibana with Nginx and Apache, I installed Logstash Forwarder. But when I open my browser I don’t find the Kibana main page; I find the Nginx main page instead. What should I do? Thanks again.
@sammdoun: Have you restarted nginx? Do you have any other virtualhosts configured?
There is Cacti already installed on localhost and I couldn’t restart nginx. I have this message:
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] still could not bind()
@sammdoun: Something else is using port 80. What’s the output of:
i have this :
80/tcp: 1279 1443 1444 1445 1446 1447 2598
@sammdoun: you can find out what processes are using port 80 by running:
All apache2. This is an example:
root 1279 0.0 0.4 37264 8376 ? Ss 14:02 0:01 /usr/sbin/apache2 -k start
PS: it’s 16:08 now in my country
@sammdoun: Apache2 and nginx cannot run at the same time and listen on port 80. If you do not host any sites on your droplet you can remove apache2 and start nginx.
If you do, you might want to use Apache to serve Kibana instead of nginx. Add a virtualhost with the following configuration: https://p.kk7.me/apixudonox.apache. Make sure you replace YOUR_PUBLIC_IP with your droplet’s public IP address.
Thanks, that works :) But now Kibana shows me this alert: “Upgrade Required Your version of Elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or above.”
@sammdoun: Try updating Elasticsearch and Logstash to the latest versions:
i cant "Removing outdated cached downloads… sha256sum mismatch jdk-7u51-linux-i586.tar.gz Oracle JDK 7 is NOT installed. dpkg: error processing oracle-java7-installer (–configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: oracle-java7-installer E: Sub-process /usr/bin/dpkg returned an error code (1)
@sammdoun: Try removing oracle-java7-installer’s cache:
and re-running the installer:
This is the result:
"2014-07-25 14:43:15 (136 KB/s) - ‘jdk-7u51-linux-i586.tar.gz’ saved [5307/5307]
Download done. Removing outdated cached downloads… sha256sum mismatch jdk-7u51-linux-i586.tar.gz
Oracle JDK 7 is NOT installed.
dpkg: error processing oracle-java7-installer (--configure): subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing: oracle-java7-installer
E: Sub-process /usr/bin/dpkg returned an error code (1)"
But when I type the command I have this:
"Reading package lists… Done
Building dependency tree
Reading state information… Done
elasticsearch is already the newest version.
logstash is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 321 not upgraded. 1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? y"
Can’t start logstash-forwarder, and Kibana gives an old version error.
I have installed the latest version of Elasticsearch.
I have been trying to get this ELK stack to work for more than 5 weeks. I tried to install it on more than 10 newly created droplets and have never got it working. I am sure a lot of users have the same problem.
logstash-forwarder works when I replace “logstash_server_private_IP” with “logstash-node” or “elasticsearch-node” (it was “localhost” previously). Still no logs in the Kibana dashboard; not sure which one to use.
Kibana worked with Nginx (got the old version error in apache2).
@atiturozt: Are you trying to collect logs from the server your ELK stack, or a remote server? the forwarder is supposed to go on the remote server that you are collecting the logs from.
Also, Elasticsearch should be version 1.1.1. The “Elasticsearch version is too old” error is typically seen when Elasticsearch is not running, or the elasticsearch.yml file is misconfigured, or Nginx proxies are misconfigured.
If you are finding that logstash-forwarder isn’t forwarding and you see errors in /var/log/syslog like
Failed to tls handshake with 192.168.1.56 x509: certificate is valid for , not log01.lan
on a server running logstash-forwarder, you may need to regenerate the cert on the logstash server using the modified openssl command below and distribute the cert again:
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 3560 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Note that the -batch option has been removed. As an added bonus, -days 3560 has the cert expiring in 10 years, rather than 30 days. Don’t forget to restart logstash and logstash-forwarder after redistributing the cert.
See GitHub for details: Issue 230, Issue 221
Yes, it’s local. And this guide installs Elasticsearch 1.1.2.
Kibana version 3.0.1 gives the old version error for Elasticsearch, but Kibana 3.1.0 works fine.
How can I make this guide work for a local server and apache2?
If you don’t want logstash-forwarder writing to /var/log/syslog each time it forwards messages, you can discard the messages before they write to syslog, as so:
Thanks so much for this article, it was exactly what I needed and it is very well written and documented!
Hi, I have followed the exact steps and it worked, but all of a sudden it stopped working and the logs say:
Jul 31 15:18:15 logs 2014-07-31T15:18:15+02:00 x.x.x logstash-forwarder[22514]: 2014/07/31 15:18:15.731119 Failure connecting to x.x.x.x: dial tcp x.x.x.x:x: connection refused.
Thanks for the help!
@jith: Make sure Elasticsearch and the Logstash server software are running. You can try restarting Elasticsearch and then Logstash if one of them isn’t up. Also, the Logstash logs can be read with sudo less /var/log/logstash.log (or tail -f if you want to see the log stream).
@atiturozt: Create a “file” input/filter/output to capture logs on the Logstash server. Create a file /etc/logstash/conf.d/00-apache-local.conf, and put something like the following in it (note how the input is a file and not a lumberjack; a sketch follows below). Make sure the Apache log path is correct.
Also, check out the 2nd part of this Logstash series for a few more filters.
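A minimal sketch of such a file-based configuration (assuming Apache’s default Ubuntu log path and an arbitrary type name) might look like:
input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
  }
}
filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}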
@Mitchell Thanks for your help. Both are running, and when I read the Logstash logs I have an error message saying the following:
The error reported is: \n pattern %{CISCO_ASA_SYSLOG_HEADER} not defined
This no longer works with the latest versions of Elasticsearch/Logstash from the listed repositories unless you tell lumberjack output to use http:
It took quite a long time to find that one; the error message that led me to the solution on Stack Overflow wasn’t visible unless Logstash was started in debug mode. Without enabling debug mode, the only error that presents itself is: Read error looking for ack: read tcp [redacted]:5000: i/o timeout
http://stackoverflow.com/questions/23937827/logstash-exception-in-thread-output-org-elasticsearch-discovery-masternotdi
@jith: It looks like you are having a syntax error while trying to add a grok pattern. Try looking at the second part of this tutorial series. The Nginx section shows how you can add patterns properly.
Also, if you are trying to figure out if your grok patterns are correct, use the Grok Debugger. Paste in a line of the log you are trying to filter into “Input” and then put your pattern into the “pattern” box. If you don’t see any output below, it means that your pattern won’t match that given log.
@branden.timm: Thanks. Note that Logstash 1.4.2 (currently the latest) is supposed to use Elasticsearch 1.1.1.
Hi
I’m working on a local machine and I will receive the logs from a Cisco ASA 5505 (that’s the plan).
I installed Ubuntu 14.04 again and I followed the tutorial until the “Copy SSL Certificate and Logstash Forwarder Package” part, where I don’t know what to write for “server_private_IP” (do I write localhost or the ASA IP?).
Then I tried to install logstash-forwarder, but the terminal said that it can’t find the package.
Thanks
@sammdoun: Replace server_private_IP with your Logstash server’s IP address.
Does your ASA run Debian or Ubuntu? If not, it most likely supports rsyslog logging. You can add a syslog input to your Logstash server. First, create a file called /etc/logstash/conf.d/01-syslog-input.conf:
So I must have a logstash server; should I buy it? Or how do I access it? Because all I have now is my PC with Ubuntu and an ASA 5505.
And how do I know what my ASA supports (Ubuntu or Debian)?
Sorry for all these questions, but I’ve spent a month trying and I want it to work.
@sammdoun: The Logstash server is simply the server/computer/machine that you followed this tutorial on, which in this case is your PC. I’m not familiar with ASA’s but I doubt they run Debian/Ubuntu. Try running the following commands on your ASA:
I haven’t tested these commands as I don’t have an ASA, but give it a shot and let us know how it goes :)
OK, I will soon, because my ASA isn’t available now. But I still have a problem with that command when I put “localhost” as my server_private_ip. Also, for logstash-forwarder, when I execute this command: “sudo apt-get install logstash-forwarder”, I receive an error: “E: couldn’t find the package”.
@sammdoun: You need to replace server_private_ip with your IP address, not localhost.
Does /etc/apt/sources.list.d/logstash.list exist? Have you run sudo apt-get update before trying to install logstash-forwarder?
The /etc/apt/sources.list.d/logstash.list exists and I ran the update before the install.
@sammdoun: Can you please post the output of sudo apt-get update?
It said that the reading of the list of packages is done:
“1,232 ko réceptionnés en 11s (104 ko/s)
Lecture des listes de paquets… Fait”
(In English: “1,232 kB fetched in 11s (104 kB/s); Reading package lists… Done.” Sorry, my Ubuntu is French.)
@sammdoun: Can you post the full output including the URLs? Thanks :)
ok this what i have :
root@houcem-HP-Pavilion-dv6-Notebook-PC:~# sudo apt-get update Ign http://extras.ubuntu.com trusty InRelease Atteint http://extras.ubuntu.com trusty Release.gpg
Atteint http://extras.ubuntu.com trusty Release
Ign http://archive.ubuntu.com trusty InRelease
Atteint http://extras.ubuntu.com trusty/main Sources
Ign http://packages.elasticsearch.org stable InRelease
Ign http://archive.ubuntu.com trusty-updates InRelease
Atteint http://extras.ubuntu.com trusty/main i386 Packages
Ign http://archive.ubuntu.com trusty-backports InRelease
Ign http://archive.ubuntu.com trusty-security InRelease
Ign http://packages.elasticsearch.org stable InRelease
Atteint http://archive.ubuntu.com trusty Release.gpg
Atteint http://archive.ubuntu.com trusty-updates Release.gpg
Atteint http://archive.ubuntu.com trusty-backports Release.gpg
Ign http://packages.elasticsearch.org stable InRelease
Atteint http://archive.ubuntu.com trusty-security Release.gpg
Atteint http://archive.ubuntu.com trusty Release
Atteint http://archive.ubuntu.com trusty-updates Release
Atteint http://packages.elasticsearch.org stable Release.gpg
Atteint http://archive.ubuntu.com trusty-backports Release
Atteint http://archive.ubuntu.com trusty-security Release
Atteint http://archive.ubuntu.com trusty/main Sources
Atteint http://packages.elasticsearch.org stable Release.gpg
Atteint http://archive.ubuntu.com trusty/restricted Sources
Atteint http://archive.ubuntu.com trusty/universe Sources
Ign http://extras.ubuntu.com trusty/main Translation-fr_FR
Atteint http://packages.elasticsearch.org stable Release.gpg
Atteint http://archive.ubuntu.com trusty/multiverse Sources
Ign http://extras.ubuntu.com trusty/main Translation-fr
Atteint http://archive.ubuntu.com trusty/main i386 Packages
Ign http://extras.ubuntu.com trusty/main Translation-en
Atteint http://archive.ubuntu.com trusty/restricted i386 Packages
Atteint http://packages.elasticsearch.org stable Release
Atteint http://archive.ubuntu.com trusty/universe i386 Packages
Atteint http://archive.ubuntu.com trusty/multiverse i386 Packages
Atteint http://packages.elasticsearch.org stable Release
Atteint http://archive.ubuntu.com trusty/main Translation-fr
Atteint http://archive.ubuntu.com trusty/main Translation-en
Atteint http://packages.elasticsearch.org stable Release Atteint http://archive.ubuntu.com trusty/multiverse Translation-fr
Atteint http://archive.ubuntu.com trusty/multiverse Translation-en
Atteint http://packages.elasticsearch.org stable/main i386 Packages
Atteint http://archive.ubuntu.com trusty/restricted Translation-fr
Atteint http://archive.ubuntu.com trusty/restricted Translation-en
Atteint http://archive.ubuntu.com trusty/universe Translation-fr
Atteint http://archive.ubuntu.com trusty/universe Translation-en Atteint http://archive.ubuntu.com trusty-updates/main Sources Atteint http://archive.ubuntu.com trusty-updates/restricted Sources
Atteint http://archive.ubuntu.com trusty-updates/universe Sources
Atteint http://archive.ubuntu.com trusty-updates/multiverse Sources
Atteint http://archive.ubuntu.com trusty-updates/main i386 Packages
Atteint http://packages.elasticsearch.org stable/main i386 Packages
Atteint http://archive.ubuntu.com trusty-updates/restricted i386 Packages Atteint http://archive.ubuntu.com trusty-updates/universe i386 Packages Atteint http://archive.ubuntu.com trusty-updates/multiverse i386 Packages Atteint http://archive.ubuntu.com trusty-updates/main Translation-en
Atteint http://archive.ubuntu.com trusty-updates/multiverse Translation-en Atteint http://archive.ubuntu.com trusty-updates/restricted Translation-en Atteint http://archive.ubuntu.com trusty-updates/universe Translation-en Atteint http://archive.ubuntu.com trusty-backports/main Sources Atteint http://archive.ubuntu.com trusty-backports/restricted Sources Atteint http://packages.elasticsearch.org stable/main i386 Packages Atteint http://archive.ubuntu.com trusty-backports/universe Sources Atteint http://archive.ubuntu.com trusty-backports/multiverse Sources Atteint http://archive.ubuntu.com trusty-backports/main i386 Packages Atteint http://archive.ubuntu.com trusty-backports/restricted i386 Packages Atteint http://archive.ubuntu.com trusty-backports/universe i386 Packages Atteint http://archive.ubuntu.com trusty-backports/multiverse i386 Packages Atteint http://archive.ubuntu.com trusty-backports/main Translation-en Atteint http://archive.ubuntu.com trusty-backports/multiverse Translation-en Atteint http://archive.ubuntu.com trusty-backports/restricted Translation-en Atteint http://archive.ubuntu.com trusty-backports/universe Translation-en Atteint http://archive.ubuntu.com trusty-security/main Sources Atteint http://archive.ubuntu.com trusty-security/restricted Sources Atteint http://archive.ubuntu.com trusty-security/universe Sources
Atteint http://archive.ubuntu.com trusty-security/multiverse Sources
Atteint http://archive.ubuntu.com trusty-security/main i386 Packages Atteint http://archive.ubuntu.com trusty-security/restricted i386 Packages Atteint http://archive.ubuntu.com trusty-security/universe i386 Packages Atteint http://archive.ubuntu.com trusty-security/multiverse i386 Packages Atteint http://archive.ubuntu.com trusty-security/main Translation-en Atteint http://archive.ubuntu.com trusty-security/multiverse Translation-en Atteint http://archive.ubuntu.com trusty-security/restricted Translation-en Atteint http://archive.ubuntu.com trusty-security/universe Translation-en Ign http://archive.ubuntu.com trusty/main Translation-fr_FR
Ign http://archive.ubuntu.com trusty/multiverse Translation-fr_FR
Ign http://archive.ubuntu.com trusty/restricted Translation-fr_FR Ign http://archive.ubuntu.com trusty/universe Translation-fr_FR Ign http://packages.elasticsearch.org stable/main Translation-fr_FR Ign http://packages.elasticsearch.org stable/main Translation-fr Ign http://packages.elasticsearch.org stable/main Translation-en Ign http://packages.elasticsearch.org stable/main Translation-fr_FR Ign http://packages.elasticsearch.org stable/main Translation-fr Ign http://packages.elasticsearch.org stable/main Translation-en Ign http://packages.elasticsearch.org stable/main Translation-fr_FR Ign http://packages.elasticsearch.org stable/main Translation-fr Ign http://packages.elasticsearch.org stable/main Translation-en Lecture des listes de paquets… Fait
@sammdoun: The command in the article was overwriting the older Logstash entry. Can you please run the following two commands and then run apt-get update (and try to install logstash-forwarder) again? Thanks!
I still have the same problem:
“root@houcem-HP-Pavilion-dv6-Notebook-PC:~# sudo apt-get install logstash-forwarder
Lecture des listes de paquets… Fait
Construction de l’arbre des dépendances
Lecture des informations d’état… Fait
E: Impossible de trouver le paquet logstash-forwarder”
(The last line means “E: Unable to locate package logstash-forwarder”.)
Thanks Mitchell,
Now I have a different error message.
{:timestamp=>"2014-08-11T13:58:25.520000+0200", :message=>"Error: Expected one of #, input, filter, output at line 117, column 1 (byte 3604) after "}
So, is this a problem in my patterns? My configs are here http://pastebin.com/qVG0x4rf
@jith: Where are you storing the patterns?
@kamal: In the same directory, but in a different file named fw_patterns.
Thanks!
@jith: Patterns should be stored in /opt/logstash/patterns/; try moving the file there and restarting Logstash.
Hi,
For my project I want to collect TCP and UDP logs; how can I do it? (The input of the filter will be TCP and UDP logs.)
Also, I will store some syslog in a file (the output of the ASA will be a file), so how can I put this file as an input to the filter, and what changes should I make?
Thanks again
Hi. I have tried that; in ‘/opt/logstash/patterns/’ they have already defined the patterns and I am using them now, but no use. In tshark it seems like the shipper is shipping the logs, so I need to check whether Logstash and then Elasticsearch are working as intended. Are there any systematic steps for troubleshooting this? One more thing: the domain or ip:9200 is not working now, so is it an Elasticsearch problem?
Thanks in advance!
Hi, well, with the ASA we just have to make sure of the address, then it will work :) But I have a question.
I want to filter the source IP, the destination IP, the source port, and the destination port.
Any ideas? :)
thanks
@sammdoun: Read Adding Logstash Filters to Improve Centralized Logging. If you can’t figure it out, post sample logs in the comments on that tutorial.
I have found this grok debugger to be useful when writing grok patterns.
jith:
I would (a) make sure you can access Elasticsearch (port 9200) with something like curl, (b) check the Elasticsearch logs, (c) check the Logstash logs.
Are you trying to access Elasticsearch from the server itself or from your own computer? If you set network.host = localhost, you won’t be able to access it remotely on port 9200.
Thanks for this great article, thanks for putting this together.
I am having trouble downloading the 32bit logstash forwarder package from your link:
https://assets.digitalocean.com/articles/logstash/logstash-forwarder_0.3.1_i386.deb
Resolving assets.digitalocean.com (assets.digitalocean.com)… 192.241.166.223
Connecting to assets.digitalocean.com (assets.digitalocean.com)|192.241.166.223|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 1782158 (1.7M) [application/octet-stream]
logstash-forwarder_0.3.1_i386.deb: Permission denied
Is this available publicly at another link?
Thanks! Ernie
@ernien: Does your user have write access to the directory that you are running the wget command from?
Ha! That was it. I thought I checked that. Thanks for the response!
Great article.
Besides configuring Logstash Forwarder to send syslog messages to the local Logstash Server, can it also be configured to send syslog messages to an external syslog server by providing its IP address?
Thanks, Kevin
@kzhou: I don’t understand your question. If you are asking how to send your Logstash server’s syslog to Logstash, you can create a new “file” input.
Hi, I have an abnormal issue. Last Monday and Tuesday I could see logs in Kibana, but the next day I couldn’t see anything (it tells me there are no logs), though I’m sure the ASA generates logs. The problem continued until Friday afternoon, when I saw the logs again (and I don’t know why). I didn’t turn on my Ubuntu machine all weekend, but today when I open Kibana to work I can’t see any logs, even though the ASA shows them in its real-time log viewer.
Thanks Mitchell, Now it works. My filters were wrong it seems. Now I am getting logs from a firewall and a switch to a port. I want to apply two different filters/patterns for them. As I am getting the logs to a single port, I do not know how to differentiate. It seems like the log messages itself do not have the host info and I can not differentiate them according to the host.
Thanks!
@jith: Refer to the “Configure Logstash” section. When adding a filter, you can use add_field to add a field after a match is made. Here is an example of the line that you might add:
add_field => [ "received_from", "%{host}" ]
@Mitchell, I have a doubt. I want to differentiate two logs, then match separately for the firewall and the switch. Isn’t there a way to filter using this “received_from”, “%{host}” information? I tried doing this and it stopped working. Actually, I wanted to do it like this:
Yes, It works now :)
Hi Jith.
Glad to know that your configuration is working.
I also need ASA firewall and router/switch logs in ELK.
Please provide me your email address so that i can communicate with you.
Thanks in advance.
Very good step-by-step guide. Any chance you have done, or considered doing, a similar guide for the required config on the Logstash server and the Logstash forwarders for centralised logs from multiple VM hosts, with apache2 logs, mysql logs, and syslogs altogether?
Thanks @M.Anicas, but is it possible to use this system to centralize different types of logs, not only syslog?
@tigerrang: Check out this other Logstash guide that I wrote: Adding Logstash Filters To Improve Centralized Logging. It has a section that covers Apache logs. Adding other types of logs is pretty easy if you are familiar with regular expressions.
@h.boudouah: See the tutorial that I linked to @tigerrang.
@manicas Actually I’m working on a SIEM project. The system used to work two weeks ago, but now I have an error message: “Oops! SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]”. I think it’s due to the server’s private IP address, because it’s DHCP. Now I’ve changed the configuration, moved to static IPs for all servers, and generated another SSL certificate, but I still have the same error, and Logstash Forwarder is not sending the logs to Logstash! Can you help me please?
FYI: Saving of dashboards won’t work by default if you follow the steps here. While saving the nginx.conf, ensure that you change kibana.myhost.org in nginx.conf to your FQDN.
Wonderful tutorial, thank you. I’ve tried this a couple of times before and it worked just fine. However, I needed to install Logstash and Logstash-Forwarder again, and after following the exact same steps, everything was going well until I needed to restart logstash-forwarder:
It kept giving me this error:
Any idea why I’m getting this error? Thank you.
After following this guide, I do not have the default Elasticsearch directory layout. My elasticsearch folder only has the config files elasticsearch.yml and logging.yml. I am missing the bin, conf, data, logs and plugins folders. Has anyone else come across this? I wish to install some Elasticsearch plugins.
Ok to answer my own question, I called;
The path of ES_HOME is shown in nodes.settings.path
Hi Mitchell Anicas,
I’m able to set up Logstash on Ubuntu, but I’m not able to ship IIS logs from a Windows machine. I tried the link below to ship IIS logs; unfortunately it’s not working. http://jacob.ludriks.com/iis-logging-to-the-elk-stack/
Please let me know if you have any suggestion for log shipper for windows machines.
Thanks Dinesh
I have not personally set up a log shipper on Windows, but I’ve read that you can use nxlog to do that. Sorry!
Very useful tutorial. But I have a question: I see that the ruby gem jdbc-mysql-5.1.30 is missing in /opt/logstash/vendor/bundle/jruby/2.1(1.9)/gems, and I need it for a custom input plugin.
How can I get it installed inside Logstash? I tried to use the ruby require command to have the plugin load the mentioned gem from my local ruby installation, but it does not solve the problem. I got the same issue with the sequel gem, but by requiring the one already present in Logstash I solved that; jdbc-mysql, however, is missing.
What can I do?
Thanks
This tutorial was a big help. I am able to view my parsed logs in Kibana, but I am having trouble deleting the logs that are saved in Elasticsearch. Can someone help me please!?
Great tutorial Mitchell, thanks!!
After installing all the software as per this tutorial, the Java process (Logstash) consumes > 100% CPU (for a 2-core container).
Why does it consume that much for logging literally a couple of events per minute?
I have the same problem as @isra below. logstash-forwarder will not start. Ironically, I can’t seem to find the log to tell me why.
When restarting it says “no such process”
Doesn’t seem to start.
Also. I am on a single box, Ubuntu 14.04
logstash-forwarder logs to /var/log/syslog. Are there any errors in there?
I am having the same problem and nothing shows in syslog.
Regarding this code:
The “stdout { codec => rubydebug }” will cause logs to accumulate in /var/log/upstart/logstash.log because every single log message is duplicated there. I have several gigs of logs, and this caused me to run out of disk space.
cd ~; wget https://github.com/elasticsearch/kibana/raw/master/sample/nginx.conf
gets a 404!
I found a reasonable facsimile and added to a gist. Feel free to confirm or move.
nginx.conf
Thanks for finding that! The original file was removed. I changed the link (it has the same contents as the gist you posted).
Thanks for the great tutorial, I was able to follow it successfully until the parts about setting up the forwarders. I am attempting to use the ELK stack on a local machine without any external servers forwarding to it.
As such, I don’t think the bit about setting up the forwarders applies. What do I do to get the logs to show up when I access the web interface of Kibana?
Instead of using a “lumberjack” input, use a “file” input.
Example
First of all, thank you for this great tutorial! I have one question:
How would you deal with multiline java events, fully integrated with your configuration?
I mean, using the Logstash Forwarder to send the logs, so that it can still take the benefit from the usage of SSL.
Could you provide the input, filter, output of the whole thing, please?
I have been researching on this for quite a long time, and I have not had successful results.
Thanks in advance!
I don’t know what I am doing wrong here. I keep getting the following messages in the syslog on the server where the forwarder is installed:
Failure connecting to <logstash-server-ip>: dial tcp <logstash-server-ip>:5000: i/o timeout
hey mitch!
Great tutorial. I am getting both syslog and Windows events (using nxlog), but I am having trouble with configuring ModSecurity and Snort/Suricata logs. Can you possibly help me? They are on a different server and I am trying to use the Logstash Forwarder to send the logs to my ELK server. All the solutions I saw online talk about a local server having everything, but what is the configuration to send ModSecurity logs using the logstash-forwarder (or anything else that may help)? I’ll appreciate help from anyone on that matter.
@manicas great tutorial. One little thing (that ended up being a big waste of time for me), logstash also ships with logstash-web, which wants to listen on port 80. Since Ubuntu uses upstart, /etc/init/logstash-web.conf keeps respawning java processes because it can’t bind on port 80 (we use it with nginx in front of kibana). So, we have to disable logstash-web at boot. This can be done with:
echo manual | sudo tee /etc/init/logstash-web.override
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
It isn’t working, is it? It always says it can’t reach the host and fails while connecting.
Can you post the output of the following two commands? It looks like your droplet is failing to resolve packages.elasticsearch.org to an IP address.
to an IP address.thomas@logstash:~$ dig packages.elasticsearch.org
; <<>> DiG 9.9.5-3-Ubuntu <<>> packages.elasticsearch.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62269
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;packages.elasticsearch.org. IN A

;; ANSWER SECTION:
packages.elasticsearch.org. 2509 IN CNAME packages.elasticsearch.org.s3.amazonaws.com.
packages.elasticsearch.org.s3.amazonaws.com. 2141 IN CNAME s3-1-w.amazonaws.com.
s3-1-w.amazonaws.com. 12 IN A 54.231.64.193

;; Query time: 80 msec
;; SERVER: 192.168.65.13#53(192.168.65.13)
;; WHEN: Thu Nov 06 18:35:59 CET 2014
;; MSG SIZE rcvd: 149
thomas@logstash:~$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
Your server isn’t able to resolve hostnames to IP addresses because there are no DNS nameservers configured, which seems odd to me.
Edit /etc/resolv.conf and add two nameserver lines to it (see the sketch below). This fixes the problem until you reboot your server. In order to permanently fix it, edit /etc/network/interfaces and make sure the iface eth0 inet static block has a matching dns-nameservers line in it.
There is another mistake now. It says:
100%[======================================>] 1.768 --.-K/s in 0s
2014-11-11 11:52:10 (42,2 MB/s) - [1768/1768]
When I’m trying to install, it shows “Waiting for headers 0%” and it doesn’t change.
/etc/network/interfaces says:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
Whenever I do “sudo service logstash-forwarder restart” it outputs:
I’ve been stuck here, anyone might know why…? Thanks!
I have followed the instructions but I stopped at the servers sending logs. Is there any way for me to verify the connectivity on the same logstash server without getting other servers involved?
I’d like to perform a simple verification.
Currently, the Kibana screen comes up but there is no data. Is there a simple way of getting data into Elasticsearch/Logstash before involving the other client servers?
I’m trying to use a local Apache log file.
I can get some console feedback when I run this command: sudo bin/logstash -e ‘input {stdin {} } output { elasticsearch { host => localhost } stdout {} }’
I just type ‘Hello World’ and it comes to the screen shortly afterwards.
Here is my config:
input {
  file {
    path => "/home/ubuntu/access.log"
    type => "apache-access"
  }
}

filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Hi, great tutorials here, I learned a lot!!! After finishing the server-side part and trying to view http://logstashServerIP, I get a blank page with 404 Not Found and nginx/1.4.6 (Ubuntu). Where did I go wrong?
Hi, I installed the Marvel plugin using its documentation, but I am not able to access the Marvel dashboard. I think there is some issue with ports; can you help me figure it out?
Hi ,
I have set up the Logstash server using the above steps, but when I tried to send my logs using TimberWinR from a Windows machine, it says it is unable to connect to the remote Elasticsearch machine. Can someone help me set up a log shipper on a Windows machine to send my logs?
Hi,
I tried to install ELK and logstash-forwarder. When I restart logstash-forwarder, it shows:
start-stop-daemon: warning: failed to kill 1712: No such process [ OK ]
Please help me fix this.
Thanks
Hi! That doesn’t necessarily sound like a problem. You’ll see a message like that if you try to restart a service that hasn’t been started yet. What’s the output of
sudo service logstash-forwarder status
?

Hi,
Thanks for your reply.
I ran sudo service logstash-forwarder status, and it shows:
Hi,
The browser shows: “No results. There were no results because no indices were found that match your selected time span.”
Help me …
Thanks
First of all, great tutorial!
I haven’t read all the comments, but I think it would be even better if you could include the autostart line for the Logstash service, like you did with Elasticsearch:
something like:
sudo update-rc.d logstash defaults 95 10
otherwise the logstash and logstash-web services will not start automatically when your ELK server reboots.
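Presumably the same command would be needed for the web service as well (I haven’t verified this, so treat it as an educated guess):

sudo update-rc.d logstash-web defaults 95 10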
or tell me if I’m wrong ;)
but again great work!
best regards
Hello,
Thanks for the great tutorial. I followed all the steps mentioned here, and I can see the Kibana dashboard (the default one) too.
But when I try to telnet localhost 5000, I am not able to connect to that port.
netstat -lnp | grep 5000 doesn’t return anything.
So I am assuming the Logstash server (lumberjack input) is not listening on this specific port. How do I fix this issue?
Thanks, Karthik
I narrowed down the error. The Logstash log says:
Permission denied - /etc/logstash/conf.d/01-lumberjack-input.conf
How do I fix this?
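In case it helps anyone else: the usual fix for this class of error, assuming the config file simply isn’t readable by the user Logstash runs as, is to loosen the file’s permissions:

# make the config readable by everyone (it holds no secrets in this setup)
sudo chmod 644 /etc/logstash/conf.d/01-lumberjack-input.conf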
Hi,
Thanks for the wonderful tutorial. Here is my doubt:
All my configurations seem to be fine,
but my logstash.log still gives me the following error:
Please help me fix this issue.
thanks, Karthik
Any help on this?
Hi,
I just want to notify you that the Elasticsearch version used in this tutorial has a severe vulnerability that can lead to a complete takeover of your server.
Logstash recommends using the Elasticsearch 1.1.x series with Logstash 1.4.2 (per their official docs).
Newer versions of Elasticsearch have disabled dynamic scripting by default, as noted in the 1.2.0 release notes.
That’s why this tutorial takes care to disable it as well, by adding
script.disable_dynamic: true
to the
/etc/elasticsearch/elasticsearch.yml
file. Additionally, setting
network.host: localhost
restricts outside access to Elasticsearch.
If you were referring to something else, please let us know in more detail!
How do I get ELK to reload the files it reports as “Skipping old file”, and where is the file information saved in Elasticsearch?
Dec 16 07:18:10 logstash-forwarder[15910]: 2014/12/15 07:18:10.889356 Loading registrar data
Dec 16 07:18:10 logstash-forwarder[15910]: 2014/12/15 07:18:10.889520 Skipping old file: /home/lab/logs/mp.log
Dec 16 07:18:10 logstash-forwarder[15910]: 2014/12/15 07:18:10.889626 Loading registrar data
Dec 16 07:18:10 logstash-forwarder[15910]: 2014/12/15 07:18:10.889723 Skipping old file: /home/lab/data/samplexml.log
Dec 16 07:18:10 logstash-forwarder[15910]: 2014/12/15 07:18:10.889802 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
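As far as I can tell (this is general logstash-forwarder behavior, not something from the tutorial), “Skipping old file” means the file hasn’t been modified within the forwarder’s “dead time” window (24 hours by default), and the already-shipped offsets live in a registrar file named .logstash-forwarder in the daemon’s working directory, not in Elasticsearch. A common way to force a re-read; the registrar path below is only a guess, so check where your init script actually runs the daemon:

sudo service logstash-forwarder stop
# refresh the mtime so the file no longer counts as "old"
touch /home/lab/logs/mp.log
# optionally delete the saved offsets to re-ship from the start;
# the registrar sits in the daemon's working directory -- adjust this path
sudo rm -f /var/lib/logstash-forwarder/.logstash-forwarder
sudo service logstash-forwarder start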
I already have Kibana (version 3) installed on my server, and I want to read logs from another server. My current Logstash configuration lives in /var/ossim/logstash, so after creating the SSL certificate I created 01-lumberjack-input.conf, 10-syslog.conf, and 30-lumberjack-output.conf.
I installed logstash-forwarder on the other server. It is installed and running, but it is not connecting to the Logstash server. Do I also need to install lumberjack on the Logstash server?
Hi, does anyone know how to forward logs from VMware ESXi servers and Windows machines to the Logstash server?
Great tutorial! Can you please tell me where the logs that we receive from clients via logstash-forwarder are saved on the server?
Please help!!! Logstash cannot read /var/log/:
tail /var/log/logstash/logstash.log
{:timestamp=>"2015-01-12T14:45:02.464000-0300", :message=>"failed to open /var/log/cron.log: Permission denied - /var/log/cron.log", :level=>:warn}
{:timestamp=>"2015-01-12T14:45:02.465000-0300", :message=>"failed to open /var/log/mail.log: Permission denied - /var/log/mail.log", :level=>:warn}
{:timestamp=>"2015-01-12T14:47:14.644000-0300", :message=>"failed to open /var/log/syslog: Permission denied - /var/log/syslog", :level=>:warn}
{:timestamp=>"2015-01-12T14:47:56.698000-0300", :message=>"failed to open /var/log/auth.log: Permission denied - /var/log/auth.log", :level=>:warn}
{:timestamp=>"2015-01-12T14:52:02.047000-0300", :message=>"failed to open /var/log/cron.log: Permission denied - /var/log/cron.log", :level=>:warn}
{:timestamp=>"2015-01-12T14:52:15.067000-0300", :message=>"failed to open /var/log/syslog: Permission denied - /var/log/syslog", :level=>:warn}
{:timestamp=>"2015-01-12T14:53:01.127000-0300", :message=>"failed to open /var/log/auth.log: Permission denied - /var/log/auth.log", :level=>:warn}
{:timestamp=>"2015-01-12T14:55:01.303000-0300", :message=>"failed to open /var/log/mail.log: Permission denied - /var/log/mail.log", :level=>:warn}
{:timestamp=>"2015-01-12T14:57:18.493000-0300", :message=>"failed to open /var/log/syslog: Permission denied - /var/log/syslog", :level=>:warn}
{:timestamp=>"2015-01-12T14:58:01.552000-0300", :message=>"failed to open /var/log/cron.log: Permission denied - /var/log/cron.log", :level=>:warn}
cat /etc/logstash/conf.d/localhost.conf

input {
  file {
    type => "linux-syslog"
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }
  file {
    type => "apache-access"
    path => "/var/log/apache2/access.log"
  }
  file {
    type => "apache-error"
    path => "/var/log/apache2/error.log"
  }
}

filter {
  # Parse the time attribute as a UNIX timestamp (seconds since epoch)
  # and store it in the @timestamp attribute. This will be used in Kibana later on.
  date { match => [ "time", "UNIX" ] }
  # Add geolocalization attributes based on ip.
  geoip { source => "ip" }
}

output {
  # Emit events to stdout for easy debugging of what is going through
  # logstash.
  # This will use elasticsearch to store your logs.
  # The 'embedded' option will cause logstash to run the elasticsearch
  # server in the same process, so you don't have to worry about
  # how to download, configure, or run elasticsearch!
  elasticsearch { embedded => true host => localhost }
  stdout { }
}
cat /etc/init.d/logstash | grep LS_GROUP=
LS_GROUP=adm
Solved:
1. sudo usermod -a -G adm logstash
2. sudo nano /etc/init.d/logstash and, at line 58, change
nice -n ${LS_NICE} chroot --userspec $LS_USER:$LS_GROUP / sh -c "
to
nice -n ${LS_NICE} chroot --userspec $LS_USER:$LS_GROUP --groups adm / sh -c "
Hi, following this tutorial I came across an error where my logstash-forwarder would not start. Just follow this link and the error will be solved:
http://serverfault.com/questions/611120/failed-tls-handshake-does-not-contain-any-ip-sans/611121#611121
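For anyone who doesn’t want to click through: the gist of that fix, as I understand it (double-check against the linked answer), is that the forwarder rejects a certificate without an IP SAN when it connects to a bare IP address. Either point the forwarder at a DNS name, or regenerate the certificate with the Logstash server’s IP as a subjectAltName, roughly like this:

# add to /etc/ssl/openssl.cnf under the [ v3_ca ] section, using your
# Logstash server's actual private IP:
#   subjectAltName = IP: 10.0.0.5
cd /etc/pki/tls
sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 \
  -keyout private/logstash-forwarder.key \
  -out certs/logstash-forwarder.crt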
Hi Mitchell,
When installing the Logstash Forwarder on the remote servers, the tutorial assumes that the Elasticsearch public key is already installed; however, that is not the case.
So, as you already showed earlier in the tutorial, we should import the Elasticsearch public GPG key into apt
before executing “apt-get update”, in order to avoid this ugly message:
W: GPG error: http://packages.elasticsearch.org stable Release: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY D27D666CD88E42B4
Thank you very much for your fantastic guidance! Your tutorials are crystal-clear!!
Thanks for pointing that out! I updated the tutorial.
Hi Mitchell Anicas, thank you for this tutorial; it works very well! But now I have to configure Logstash to receive logs from Windows machines. Can you write another tutorial adding this feature on the same server that receives logs from Linux machines (via logstash-forwarder)?
Thank you!
Great job! I cannot save a custom dashboard, though. This is the error message: “Save failed: Dashboard could not be saved to Elasticsearch”. Is there a way to resolve that problem?
Solved with .htpasswd. Again, many thanks for this tutorial.
An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
Great tutorial! thank you!
Just to let you know, the logstash-forwarder.init script (https://raw.github.com/elasticsearch/logstash-forwarder/master/logstash-forwarder.init) got deleted a couple of days ago:
https://github.com/elasticsearch/logstash-forwarder/commit/e15e910bad6bb6243ec5761a5711ce34f54d5575 :(
I posted before reading your comment. Did you find an alternative?
@Guinoutortue: It can still be accessed directly at:
https://raw.githubusercontent.com/elasticsearch/logstash-forwarder/a73e1cb7e43c6de97050912b5bb35910c0f8d0da/logstash-forwarder.init
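Assuming you install the init script the way the tutorial does elsewhere (the destination path here is my assumption, so adapt it to your setup), you can fetch that pinned revision directly:

cd /etc/init.d
sudo wget -O logstash-forwarder https://raw.githubusercontent.com/elasticsearch/logstash-forwarder/a73e1cb7e43c6de97050912b5bb35910c0f8d0da/logstash-forwarder.init
sudo chmod +x logstash-forwarder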
An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
Thanks a lot; this tutorial helped me a lot! I have had a working installation for two weeks now!
But I have a problem when installing logstash-forwarder: I get the “Unable to locate package logstash-forwarder” error. I have Ubuntu 14.04 LTS 64-bit, but I tried downloading the 32-bit package. When I install that, the /etc/logstash-forwarder file is empty.
I tried to write my own init script, or take one from the web, but no luck. :-(
An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
When I follow the steps to install logstash-forwarder, I run into problems with the repo: it does not exist. Please check it. Is the repo operating normally?
Hit http://us.archive.ubuntu.com trusty/universe amd64 Packages
Err http://packages.elasticsearch.org stable/main amd64 Packages
  404 Not Found [IP: 54.243.77.158 80]
Hit http://us.archive.ubuntu.com trusty/multiverse amd64 Packages
Err http://packages.elasticsearch.org stable/main i386 Packages
  404 Not Found [IP: 54.243.77.158 80]
Hit http://us.archive.ubuntu.com trusty/main i386 Packages
Hit http://us.archive.ubuntu.com trusty/restricted i386 Packages
Ign http://packages.elasticsearch.org stable/main Translation-en_US
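A 404 on “stable/main” directly under packages.elasticsearch.org usually points at a sources line that is missing the repository path (that is my reading of the output, not an official diagnosis). It is worth checking what apt actually has configured and comparing it, character for character, with the line the tutorial tells you to add:

grep -r "packages.elasticsearch.org" /etc/apt/sources.list /etc/apt/sources.list.d/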
An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
Hi, I am not able to see syslogs in the Kibana UI. The logstash-forwarder.init script is not available at the mentioned location; can you please provide it? Also, are the Logstash config files to be written as-is, or do we need to enter some parameters?
An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
I don’t know why, but I went through this tutorial about 3 to 5 times and I can’t get it to work. The Logstash server seems fine, since I can log into the Kibana interface, but on the server (client) side I can’t get the logstash-forwarder service to start. When I run service logstash-forwarder start, I don’t get any error or warning messages, but when I check its status it still shows stopped.
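If anyone else is stuck here, one way to see what is actually failing is to run the forwarder in the foreground, where errors go straight to the console instead of being swallowed by the init script. The binary and config paths below are assumptions based on how the package lays things out, so adjust them to your system:

sudo /opt/logstash-forwarder/bin/logstash-forwarder -config /etc/logstash-forwarder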
I’m working on an updated version of this tutorial, which hopefully should be up in a few days.
An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
Can the ELK setup be done on the production server itself whose logs I want to analyse? I am using Ubuntu 14.04. If yes, are there any additional steps I need to follow?
You can set up ELK on your production server, but it’s not recommended because it will take a non-trivial amount of resources away from your application.
If you want to do it anyway, just use a file input instead of the Logstash Forwarder (lumberjack) input, along the lines of the sketch below (the path is only an example):
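input {
  file {
    # read a local log directly; point this at whatever your app writes
    path => "/var/log/syslog"
    type => "syslog"
  }
}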
Thank you very much for the updated version!
I thought it was not gonna be easy, but at the “Kibana is now accessible via your FQDN” part it is not accessible. I’ve tried with port 7777, as 80 is taken. The port is OK now, but I get a 404.
An updated version of this guide can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.
Yes, since then I’ve found it and actually managed to do it ;) thanks
I would like to find out whether there is an ELK stack tutorial for installation and configuration on CentOS 6.6/7. Also, does the ELK stack require a database? Where is the database or repository?
The CentOS 7 version of this tutorial can be found here: How To Install Elasticsearch, Logstash, and Kibana 4 on CentOS 7.
The log data is stored in Elasticsearch indices.
Is it possible for you to make a guide on how to use Shield for Elasticsearch?
Hi
thanks for the great guide. I used it on Ubuntu and everything works well, but when I tried to install logstash-forwarder on Red Hat it didn’t work.
Is there any chance you could publish a short article on how to install it on Red Hat? I guess several steps need to be done differently there.
Regards, Tomer
Hi,
I followed the guide exactly, but the site doesn’t show any data.
I’m getting the error below in my logstash-forwarder log, and it’s not forwarding the logs:
2015/06/20 19:49:40.000790 Read error looking for ack: read tcp 172.20.3.10:1640: i/o timeout
2015/06/20 19:49:40.000934 Setting trusted CA from file: /etc/pki/tls/certs/bundle.crt
2015/06/20 19:49:40.001497 Connecting to [172.20.3.10]:1640 (logstash.example.com)
2015/06/20 19:49:40.063573 Connected to 172.20.3.10
Please share your suggestions. FYI, I’m running both the forwarder and the Logstash server on the same machine for testing.
Server config:
input {
  lumberjack {
    port => 1640
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/example.com.crt"
    ssl_key => "/etc/pki/tls/private/example.com.key"
  }
}
filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Forwarder config:
{
  "network": {
    "servers": [ "logstash.example.com:1640" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/bundle.crt",
    "ssl certificate": "/etc/pki/tls/certs/example.com.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/nginx/test.example.com/ssl-access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}
@manicas, if possible, please share your thoughts on the reason for this problem. I followed the same steps you described in this tutorial.
I would guess that, in your case,
Read error looking for ack: read tcp xxx.xxx.xxx.xxx: i/o timeout
is caused by an error in your configuration. Check that your logstash.conf and elasticsearch.yml configurations are correct.
In particular, make sure that
network.host: localhost
is set. If it still isn’t working, try adding
network.publish_host: localhost
(someone commented that this fixed the issue for them).

I just noticed that logstash-forwarder in newer versions uses /etc/logstash-forwarder.conf
There’s a newer version of this tutorial here: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04
Hi Mitch, good instructions on how to install ELK on a single box using the out-of-the-box config. I’m trying a use case where I need to send syslog from network devices directly to ELK without any agent/forwarder. My question is: without installing an agent and without copying the cert, can we forward the logs? Please clarify.
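This isn’t covered by the tutorial itself, but Logstash does ship a plain syslog input, which needs no agent and no certificate on the devices (nothing is encrypted in transit). A minimal sketch; the port number here is arbitrary, and ports below 1024 would require extra privileges:

input {
  syslog {
    # listens on both TCP and UDP on this port
    port => 5514
    type => "network-syslog"
  }
}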
Find out more: Elasticsearch
Hi, thanks for this great tutorial. But I have an issue with it: I was unable to install logstash-forwarder. It says the package architecture does not match.
Detailed error:
Please post an updated deb file, or is there any other way to install the package?
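A generic first check for architecture-mismatch errors (standard Debian packaging advice, nothing specific to this tutorial): confirm which architecture dpkg expects, and make sure the .deb you downloaded matches it before installing:

# prints e.g. amd64 or i386; download the .deb that matches this
dpkg --print-architecture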