In this tutorial, we will go over the installation of the Elasticsearch ELK Stack on CentOS 7—that is, Elasticsearch 2.2.x, Logstash 2.2.x, and Kibana 4.4.x. We will also show you how to configure it to gather and visualize the syslogs of your systems in a centralized location, using Filebeat 1.1.x. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.
Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.
It is possible to use Logstash to gather logs of all types, but we will limit the scope of this tutorial to syslog gathering.
The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.
Our ELK stack setup has four main components:
We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.
To complete this tutorial, you will require root access to a CentOS 7 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with CentOS 7.
If you would prefer to use Ubuntu instead, check out this tutorial: How To Install ELK on Ubuntu 14.04.
The amount of CPU, RAM, and storage that your ELK Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a VPS with the following specs for our ELK Server:
In addition to your ELK Server, you will want to have a few other servers that you will gather logs from.
Let’s get started on setting up our ELK Server!
Elasticsearch and Logstash require Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route. Following the steps in this section means that you accept the Oracle Binary License Agreement for Java SE.
Change to your home directory and download the Oracle Java 8 (Update 73, the latest at the time of this writing) JDK RPM with these commands:
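The download commands would look something like the following; the exact URL and build number (8u73-b02 here) are assumptions and should be verified against Oracle's download page, and the cookie header signals acceptance of the Oracle license:

```shell
cd ~
wget --no-cookies --no-check-certificate \
  --header "Cookie: oraclelicense=accept-securebackup-cookie" \
  "http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.rpm"
```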
Then install the RPM with this yum command (if you downloaded a different release, substitute the filename here):
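Assuming the Update 73 filename from the previous step, the install command would be:

```shell
sudo yum -y localinstall jdk-8u73-linux-x64.rpm
```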
Now Java should be installed at /usr/java/jdk1.8.0_73/jre/bin/java, and linked from /usr/bin/java.
You may delete the archive file that you downloaded earlier:
Now that Java 8 is installed, let's install Elasticsearch.
Elasticsearch can be installed with a package manager by adding Elastic’s package repository.
Run the following command to import the Elasticsearch public GPG key into rpm:
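This is the standard way to import a GPG key into rpm; the key URL is assumed from the gpgkey entries used in the repository files later in this tutorial:

```shell
sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
```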
Create a new yum repository file for Elasticsearch. Note that this is a single command:
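A sketch of that single command, assuming Elastic's 2.x CentOS repository layout (repo name and baseurl are assumptions consistent with the Kibana and Logstash repo files shown later):

```shell
echo '[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
' | sudo tee /etc/yum.repos.d/elasticsearch.repo
```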
Install Elasticsearch with this command:
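With the repository in place, the install command would be:

```shell
sudo yum -y install elasticsearch
```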
Elasticsearch is now installed. Let’s edit the configuration:
You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
network.host: localhost
Save and exit elasticsearch.yml.
Now start Elasticsearch:
Then run the following command to start Elasticsearch automatically on boot up:
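On CentOS 7 (systemd), the start and boot-enable steps above would look something like:

```shell
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
```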
Now that Elasticsearch is up and running, let’s install Kibana.
The Kibana package shares the same GPG Key as Elasticsearch, and we already installed that public key.
Create and edit a new yum repository file for Kibana:
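The exact filename is an assumption; any file under /etc/yum.repos.d/ ending in .repo will be read by yum:

```shell
sudo vi /etc/yum.repos.d/kibana.repo
```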
Add the following repository configuration:
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Save and exit.
Install Kibana with this command:
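With the Kibana repository configured, the install command would be:

```shell
sudo yum -y install kibana
```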
Open the Kibana configuration file for editing:
In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":
server.host: "localhost"
Save and exit. This setting makes Kibana accessible only from the localhost, which is fine because we will install an Nginx reverse proxy on the same server to allow external access.
Now start the Kibana service, and enable it:
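A plausible form of these commands; Kibana 4.4 shipped with a SysV init script on CentOS 7, which is why chkconfig (rather than systemctl enable) is used here:

```shell
sudo systemctl start kibana
sudo chkconfig kibana on
```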
Before we can use the Kibana web interface, we have to set up a reverse proxy. Let’s do that now, with Nginx.
Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.
Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server's private IP address). Also, it is recommended that you enable SSL/TLS.
Add the EPEL repository to yum:
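On CentOS 7, EPEL can be added directly from the distribution's extras repository:

```shell
sudo yum -y install epel-release
```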
Now use yum to install Nginx and httpd-tools:
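The install command would be:

```shell
sudo yum -y install nginx httpd-tools
```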
Use htpasswd to create an admin user, called “kibanaadmin” (you should use another name), that can access the Kibana web interface:
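A likely form of this command; htpasswd -c creates the password file (the htpasswd.users path matches the Nginx configuration used later) and prompts for the password:

```shell
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
```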
Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.
Now open the Nginx configuration file in your favorite editor. We will use vi:
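The default Nginx configuration file on CentOS lives here:

```shell
sudo vi /etc/nginx/nginx.conf
```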
Find the default server block (it starts with server { and is the last configuration block in the file), and delete it. When you are done, the last two lines in the file should look like this:
include /etc/nginx/conf.d/*.conf;
}
Save and exit.
Now we will create an Nginx server block in a new file:
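The filename kibana.conf is an assumption; any .conf file in /etc/nginx/conf.d is picked up by the include directive left at the end of nginx.conf:

```shell
sudo vi /etc/nginx/conf.d/kibana.conf
```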
Paste the following code block into the file. Be sure to update the server_name to match your server's name:
server {
    listen 80;

    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also use the htpasswd.users file that we created earlier, and require basic authentication.
Now start and enable Nginx to put our changes into effect:
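On CentOS 7, the start and enable commands would be:

```shell
sudo systemctl start nginx
sudo systemctl enable nginx
```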
Note: This tutorial assumes that SELinux is disabled. If this is not the case, you may need to run the following command for Kibana to work properly: sudo setsebool -P httpd_can_network_connect 1
Kibana is now accessible via your FQDN or the public IP address of your ELK Server, i.e. http://elk_server_public_ip/. If you go there in a web browser and enter the "kibanaadmin" credentials, you should see a Kibana welcome page that asks you to configure an index pattern. We will come back to that after we install all of the other components.
The Logstash package shares the same GPG Key as Elasticsearch, and we already installed that public key, so let’s create and edit a new Yum repository file for Logstash:
Add the following repository configuration:
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
Save and exit.
Install Logstash with this command:
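With the Logstash repository configured, the install command would be:

```shell
sudo yum -y install logstash
```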
Logstash is installed but it is not configured yet.
Since we are going to use Filebeat to ship logs from our Client Servers to our ELK Server, we need to create an SSL certificate and key pair. The certificate is used by Filebeat to verify the identity of ELK Server. Create the directories that will store the certificate and private key with the following commands:
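A plausible form of these commands, matching the certificate and key paths used in the Logstash input configuration later in this tutorial:

```shell
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir -p /etc/pki/tls/private
```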
Now you have two options for generating your SSL certificates. If you have a DNS setup that will allow your client servers to resolve the IP address of the ELK Server, use Option 2. Otherwise, Option 1 will allow you to use IP addresses.
If you don't have a DNS setup that would allow the servers you will gather logs from to resolve the IP address of your ELK Server, you will have to add your ELK Server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:
Find the [ v3_ca ] section in the file, and add this line under it (substituting in the ELK Server's private IP address):
subjectAltName = IP: ELK_server_private_ip
Save and exit.
Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:
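A likely form of these commands; the -config flag makes openssl pick up the subjectAltName you just added, and the output paths match those expected by the Logstash input configuration:

```shell
cd /etc/pki/tls
sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
```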
The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip Option 2 and move on to Configure Logstash.
If you have a DNS setup with your private networking, you should create an A record that contains the ELK Server’s private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server’s public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your ELK Server.
Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/…), with the following command (substitute in the FQDN of the ELK Server):
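A likely form of this command; ELK_server_fqdn is a placeholder for the domain name you set up in DNS:

```shell
cd /etc/pki/tls
sudo openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
```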
The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash, but we will do that a little later. Let's complete our Logstash configuration.
Logstash configuration files are written in a JSON-like format, and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let's create a configuration file called 02-beats-input.conf and set up our "filebeat" input:
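The file goes in the Logstash configuration directory named above:

```shell
sudo vi /etc/logstash/conf.d/02-beats-input.conf
```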
Insert the following input configuration:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Save and quit. This specifies a beats input that will listen on TCP port 5044, and it will use the SSL certificate and private key that we created earlier.
Now let's create a configuration file called 10-syslog-filter.conf, where we will add a filter for syslog messages:
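As with the input file, this lives in /etc/logstash/conf.d:

```shell
sudo vi /etc/logstash/conf.d/10-syslog-filter.conf
```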
Insert the following syslog filter configuration:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Save and quit. This filter looks for logs that are labeled as "syslog" type (by Filebeat), and it will try to use grok to parse incoming syslog logs to make them structured and queryable.
Lastly, we will create a configuration file called 30-elasticsearch-output.conf:
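Again in the Logstash configuration directory:

```shell
sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf
```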
Insert the following output configuration:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Save and exit. This output configures Logstash to store the beats data in Elasticsearch, which is running at localhost:9200, in an index named after the beat used (filebeat, in our case).
If you want to add filters for other applications that use the Filebeat input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).
Test your Logstash configuration with this command:
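Logstash 2.x provides a configtest action through its service script; a likely form of the command:

```shell
sudo service logstash configtest
```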
It should display Configuration OK if there are no syntax errors. Otherwise, read the error output to see what's wrong with your Logstash configuration.
Restart and enable Logstash to put our configuration changes into effect:
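A plausible form of these commands (Logstash 2.2 used a SysV-style init script, hence chkconfig):

```shell
sudo systemctl restart logstash
sudo chkconfig logstash on
```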
Next, we’ll load the sample Kibana dashboards.
Elastic provides several sample Kibana dashboards and Beats index patterns that can help you get started with Kibana. Although we won't use the dashboards in this tutorial, we'll load them anyway so we can use the Filebeat index pattern they include.
First, download the sample dashboards archive to your home directory:
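A likely form of this command; the 1.1.0 version number is an assumption matching the Beats 1.1.x releases used in this tutorial:

```shell
cd ~
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
```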
Install the unzip package with this command:
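The install command would be:

```shell
sudo yum -y install unzip
```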
Next, extract the contents of the archive:
And load the sample dashboards, visualizations and Beats index patterns into Elasticsearch with these commands:
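The extraction and loading steps above would look something like the following; load.sh is the loader script shipped inside the dashboards archive:

```shell
unzip beats-dashboards-*.zip
cd beats-dashboards-*
./load.sh
```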
These are the index patterns that we just loaded:
When we start using Kibana, we will select the Filebeat index pattern as our default.
Because we are planning on using Filebeat to ship logs to Elasticsearch, we should load a Filebeat index template. The index template will configure Elasticsearch to analyze incoming Filebeat fields in an intelligent way.
First, download the Filebeat index template to your home directory:
Then load the template with this command:
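A likely form of this command; the local filename filebeat-index-template.json is an assumption based on the download step above:

```shell
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
```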
If the template loaded properly, you should see a message like this:
Output:
{
  "acknowledged" : true
}
Now that our ELK Server is ready to receive Filebeat data, let's move on to setting up Filebeat on each client server.
Do these steps for each CentOS or RHEL 7 server that you want to send logs to your ELK Server. For instructions on installing Filebeat on Debian-based Linux distributions (e.g. Ubuntu, Debian, etc.), refer to the Set Up Filebeat (Add Client Servers) section of the Ubuntu variation of this tutorial.
On your ELK Server, copy the SSL certificate—created in the prerequisite tutorial—to your Client Server (substitute the client server’s address, and your own login):
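A plausible form of this command; user and client_server_private_address are placeholders:

```shell
scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_server_private_address:/tmp
```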
After providing your login’s credentials, ensure that the certificate copy was successful. It is required for communication between the client servers and the ELK Server.
Now, on your Client Server, copy the ELK Server’s SSL certificate into the appropriate location (/etc/pki/tls/certs):
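Assuming the certificate was copied to /tmp in the previous step:

```shell
sudo mkdir -p /etc/pki/tls/certs
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
```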
Now we will install the Filebeat package.
On your Client Server, run the following command to import the Elasticsearch public GPG key into rpm:
Create and edit a new yum repository file for Filebeat:
Add the following repository configuration:
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
Save and exit.
Install Filebeat with this command:
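With the Beats repository configured, the install command would be:

```shell
sudo yum -y install filebeat
```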
Filebeat is installed but it is not configured yet.
Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat. When you complete the steps, you should have a file that looks something like this.
On your Client Server, create and edit the Filebeat configuration file:
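The Filebeat package installs its configuration file here:

```shell
sudo vi /etc/filebeat/filebeat.yml
```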
Note: Filebeat’s configuration file is in YAML format, which means that indentation is very important! Be sure to use the same number of spaces that are indicated in these instructions.
Near the top of the file, you will see the prospectors section, which is where you can define prospectors that specify which log files should be shipped and how they should be handled. Each prospector is indicated by the - character.
We'll modify the existing prospector to send the secure and messages logs to Logstash. Under paths, comment out the - /var/log/*.log entry. This will prevent Filebeat from sending every .log file in that directory to Logstash. Then add new entries for /var/log/secure and /var/log/messages. It should look something like this when you're done:
...
paths:
- /var/log/secure
- /var/log/messages
# - /var/log/*.log
...
Then find the line that specifies document_type:, uncomment it, and change its value to "syslog". It should look like this after the modification:
...
document_type: syslog
...
This specifies that the logs in this prospector are of type syslog (which is the type that our Logstash filter is looking for).
If you want to send other files to your ELK server, or make any changes to how Filebeat handles your logs, feel free to modify or add prospector entries.
Next, under the output section, find the line that says elasticsearch:, which indicates the Elasticsearch output section (which we are not going to use). Delete or comment out the entire Elasticsearch output section (up to the line that says logstash:).
Find the commented-out Logstash output section, indicated by the line that says #logstash:, and uncomment it by deleting the preceding #. In this section, uncomment the hosts: ["localhost:5044"] line. Change localhost to the private IP address (or hostname, if you went with that option) of your ELK Server:
### Logstash as output
logstash:
# The Logstash hosts
hosts: ["ELK_server_private_IP:5044"]
This configures Filebeat to connect to Logstash on your ELK Server at port 5044 (the port that we specified an input for earlier).
Directly under the hosts entry, and with the same indentation, add this line:
bulk_max_size: 1024
Next, find the tls section, and uncomment it. Then uncomment the line that specifies certificate_authorities, and change its value to ["/etc/pki/tls/certs/logstash-forwarder.crt"]. It should look something like this:
...
tls:
# List of root certificates for HTTPS server verifications
certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
This configures Filebeat to use the SSL certificate that we created on the ELK Server.
Save and quit.
Now start and enable Filebeat to put our changes into place:
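On CentOS 7, the start and enable commands would be:

```shell
sudo systemctl start filebeat
sudo systemctl enable filebeat
```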
Again, if you’re not sure if your Filebeat configuration is correct, compare it against this example Filebeat configuration.
Now Filebeat is sending your messages and secure log files to your ELK Server! Repeat this section for all of the other servers that you wish to gather logs for.
If your ELK stack is set up properly, Filebeat (on your client server) should be shipping your logs to Logstash on your ELK Server. Logstash should be loading the Filebeat data into Elasticsearch in a date-stamped index, filebeat-YYYY.MM.DD.
On your ELK Server, verify that Elasticsearch is indeed receiving the data by querying for the Filebeat index with this command:
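A likely form of this query against the local Elasticsearch instance:

```shell
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
```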
You should see a bunch of output that looks like this:
Sample Output:...
{
"_index" : "filebeat-2016.01.29",
"_type" : "log",
"_id" : "AVKO98yuaHvsHQLa53HE",
"_score" : 1.0,
"_source":{"message":"Feb 3 14:34:00 rails sshd[963]: Server listening on :: port 22.","@version":"1","@timestamp":"2016-01-29T19:59:09.145Z","beat":{"hostname":"topbeat-u-03","name":"topbeat-u-03"},"count":1,"fields":null,"input_type":"log","offset":70,"source":"/var/log/auth.log","type":"log","host":"topbeat-u-03"}
}
...
If your output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you should review your setup for errors. If you received the expected output, continue to the next step.
When you are finished setting up Filebeat on all of the servers that you want to gather logs for, let’s look at Kibana, the web interface that we installed earlier.
In a web browser, go to the FQDN or public IP address of your ELK Server. After entering the “kibanaadmin” credentials, you should see a page prompting you to configure a default index pattern:
Go ahead and select [filebeat]-YYYY.MM.DD from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the Filebeat index as the default.
Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram with log events, with log messages below:
Right now, there won’t be much in there because you are only gathering syslogs from your client servers. Here, you can search and browse through your logs. You can also customize your dashboard.
Try the following things:
- Search for a particular hostname (search for host: "hostname")
Kibana has many other features, such as graphing and filtering, so feel free to poke around!
Now that your syslogs are centralized via Elasticsearch and Logstash, and you are able to visualize them with Kibana, you should be off to a good start with centralizing all of your important logs. Remember that you can send pretty much any type of log or indexed data to Logstash, but the data becomes even more useful if it is parsed and structured with grok.
To improve your new ELK stack, you should look into gathering and filtering your other logs with Logstash, and creating Kibana dashboards. You may also want to gather system metrics by using Topbeat with your ELK stack. All of these topics are covered in the other tutorials in this series.
Good luck!
First, thank you for the nice tutorial. Unfortunately, the Nginx frontend doesn't work for me:
Error: Bad Request at respond (http://x.x.x.x/index.js?_b=5930:81566:15) at checkRespForFailure (http://x.x.x.x/index.js?_b=5930:81534:7) at http://x.x.x.x/index.js?_b=5930:80203:7 at wrappedErrback (http://x.x.x.x/index.js?_b=5930:20882:78) at wrappedErrback (http://x.x.x.x/index.js?_b=5930:20882:78) at wrappedErrback (http://x.x.x.x/index.js?_b=5930:20882:78) at http://x.x.x.x/index.js?_b=5930:21015:76 at Scope.$eval (http://x.x.x.x/index.js?_b=5930:22002:28) at Scope.$digest (http://x.x.x.x/index.js?_b=5930:21814:31) at Scope.$apply (http://x.x.x.x/index.js?_b=5930:22106:24)
any idea? TIA, Vitaly
I found a solution to fix this config: I use a static IP address for my ELK server, 10.82.136.52. In the Logstash pipeline output (/etc/logstash/conf.d/30-*.conf), for elasticsearch, enter the IP in double quotes: output { elasticsearch { host => "10.82.136.52" } } Thanks a lot. It runs now :-) I'll continue with the next step of the tutorial.
Great tutorial, thank you. I get an error on the Kibana page prompting "Configure an index pattern" and I get stuck there. It says "Unable to fetch mapping. Do you have indices matching the pattern?" Any ideas? TIA
I’m getting the exact same error. Does anyone know how to fix this error?
I had the same problem and it turned out that my logstash-forwarder install wasn’t actually forwarding log messages to logstash.
I’m running a centralized loghost and my logstash-forwarder is installed locally on the same VM that runs logstash. I told logstash-forwarder to send to 127.0.0.1:5000 but I noticed via “netstat -anp | grep <logstash pid>” that logstash is listening on my public interface (e.g. 192.168.1.10) as opposed to 127.0.0.1.
When I changed my logstash-forwarder config to use the public IP I also noticed (via tail -f /var/log/elasticsearch/elasticsearch.log) a bunch of log messages start rolling.
Then all of a sudden the "Configure an index pattern" web prompt worked.
On configuration of Logstash Forwarder, you have to give FQDN instead of logstash_server_private_IP.
I had the same problem. I started looking through the logstash-forwarder.conf file and noticed a section with an opening bracket that didn’t have a close bracket.
If you just edited the pre-existing logstash-forwarder.conf file, you probably ran into that as well. I didn’t notice it initially, but the close curly bracket encompasses the paths section also opens another paths section with an open curly bracket. Look for a “}, {” in the conf file.
The following worked for me. See "Filebeat Configuration", filebeat.yml excerpt 1 of 4:
...
paths:
- /var/log/auth.log
- /var/log/syslog
- /var/log/*.log
...
Check whether auth.log or syslog are present in the /var/log/ dir. Otherwise, write the following instead:
paths:
- /var/log/secure
- /var/log/messages
It works. // There's been a mistake in the above tutorial: it says we'll send the messages and secure logs but gives auth.log and syslog.
Thanks, I corrected the issue.
Thank you Mitchel for this tutorial! I have a question. I need to send httpd logs to kibana. Do you have any templates to filter that entries? I’m from Brazil, sorry for my bad english ;)
I used your previous tutorial and it worked nice. Thanks!!! Just one last little problem.
My Grok filter for LogStash:
It is perfect for my Linux logins logs:
The problem is the Windows logs (little syntax differences), so I can't get the syslog_pid:
How can I change the grok filter for both logs (windows and linux) and get the two syslog_pid?
Thanks in advance and sorry for my English 0:-)
The log patterns are different, so you probably should send the Windows logs as a different type and write a filter to match it.
https://grokdebug.herokuapp.com/ is pretty handy for writing grok patterns.
Thanks for your advice Mitchell!!!
I feel closer to the end. Using your suggested web, I have the 2 grok filters, so my filter part in logstash.conf is:
It works for Linux and Windows logs. The last problem is how I can decide which type of log Logstash is receiving. How can I establish the type in the input part? Actually, it is:
Thanks again! You are being so helpful.
Thank you for these tutorials they are a life saver. Run into a bit of a snag with nginx. It states 502 Bad Gateway when trying to access Kibana. Direct access works fine so Kibana is okay. Nginx error log states the following:
2015/03/12 14:46:17 [crit] 8741#0: *1 connect() to 127.0.0.1:5601 failed (13: Permission denied) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: log.server.com, request: “GET / HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “log.server.com”
2015/03/12 14:46:17 [error] 8741#0: *1 no live upstreams while connecting to upstream, client: xxx.xxx.xxx.xxx, server: log.server.com, request: “GET /favicon.ico HTTP/1.1”, upstream: “http://localhost/favicon.ico”, host: “log.server.com”
What permission is denied?
I managed to get it working and getting rid of the bad gateway error by running:
Sorry about that! Most of our CentOS tutorials assume that SELinux is disabled.
Hi,
Thanks for your tutorial. Is there a way to setup a default page on kibana dahsboard, for example one of the dashboard I created as default page?
Not as far as I know. Maybe bookmarking the share link will work for you?
OK, I have finished. I configured the Windows servers to send their logs to port 5000, and the Linux servers to port 5001. My successfully finished logstash.conf is:
It works!!! I can see in my Kibana the logins/logouts… I am happy!!! Thanks for the help, Mitchell, great blog.
Nice work!
Hey Mitchell,
Thanks for another amazing post. Let me just say that your tutorials are of the best quality out there and are invaluable to those of us who read them.
I have one final problem Mitchell.
I don't have the year in my incoming logs (neither do you), so when I get the "syslog_timestamp" it is:
Mar 17 15:09:17
(like in your Kibana logs.) If I go to "Settings" in Kibana, it says that "syslog_timestamp" is a string field (not a date), so I can't order by "syslog_timestamp", only by @timestamp.
How can I resolve this? Adding the year to the “syslog_timestamp”? Changing the field type in ElasticSearch?
Thanks again in advance…
If you want to run logstash and listen on :5514 for incoming syslog messages and have rsyslog forward messages to you then you will either need to disable SELinux (setenforce 0; systemctl restart rsyslog) or you’ll need to extend your SELinux policy and include :5514 as a port rsyslog can connect to.
logstash can’t listen on :514 because it is a privileged port so it listens on :5514.
However, the SELinux for syslog forbids rsyslog from connecting to any port other than :514.
This bug/errata has more details: https://bugzilla.redhat.com/show_bug.cgi?id=728591
You'll need to run the following command (as root) in order to permit rsyslog to connect to :5514 (logstash): semanage port -a -t syslogd_port_t -p tcp 5514
Thank you for the tutorial, I followed it up to installing nginx, I’m installing this on my webserver and want to use it to manage my logs including my apache logs, I don’t want to install nginx as well as the already present apache. Is there a way to continue the tutorial but using apache instead?
In this setup, Nginx is being used as a reverse proxy to serve the Kibana application. If you want to use Apache, you can use the mod_rewrite module and include these configuration lines. Also, the second tutorial in this series covers how to gather Apache logs.
Hi Mitchell,
Thank you for the tutorial. I would like to create a elasticsearch cluster on single host, could you guide me on how to do that? I installed elasticsearch 1.5.0 and it is running as a service right now.
HI David,
HI Manicas,
I want to Index it by Source hostname. Can you suggest me how to do it ?
regards,
Hemanath
First, a great thanks for this tutorial, which helped me to start with ELK. Now I am well into my POC, but I have an industrialization problem around "data security". Of course, Shield exists, but we don't want a paid tool. My question is: in real life, is Nginx enough to secure exchanges between the final customer and Kibana? How can I compartmentalize different customers (of course, with different indices)?
Thanks
Hi Mitchell, these are great tutorials. I have tried installing on CentOS and Ubuntu but I had no luck getting the indices to work. I only see logstash-*, but I don't see a drop-down box to select @timestamp, or the Create button; the only message it shows is "Unable to fetch mapping. Do you have indices matching the pattern?" I've been wanting to use Kibana as a syslog for Cisco switches, routers, Nexus and others.
That means that Logstash isn’t receiving and storing logs in Elasticsearch. It is likely that your Logstash or Logstash Forwarder are not configured properly.
Hi Mitchell,
I am newbie to ELK setup. :)
I have followed same steps. Installed elasticsearch on one machine, logstash on other machine. All installations done by root only. These are new Linux boxes which I got for installations. Then I started logstash by following command: sudo service logstash start But after 2-3 minutes when I check status of logstash, it shows that it’s not running like:
I checked logstash.err file it gives below:
and logstash.log file shows below:
My trial.conf file is like this:
Can you please help me here as what might have gone wrong?
Eagerly waiting for your response.
Thanks and Regards, amitsg
Make sure Elasticsearch's network.host is set to the IP address that you are specifying in your Logstash config. Change it to the private IP address:
Then restart Elasticsearch:
Then restart Logstash.
Hello Mod,
What should I do if I configure Rsyslog + Elasticsearch + Kibana? What does the Rsyslog configuration file look like? Also, when I start Elasticsearch I get this error:
./elasticsearch start Failed to configure logging… org.elasticsearch.ElasticsearchException: Failed to load logging configuration at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:139) at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:89) at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:100) at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:184) at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32) Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144) at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:97) at java.nio.file.Files.readAttributes(Files.java:1686) at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:109) at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:69) at java.nio.file.Files.walkFileTree(Files.java:2602) at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:123) … 4 more log4j:WARN No appenders could be found for logger (node). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Can anyone tell me which configuration file is causing this? P.S. System: CentOS 6.6, Rsyslog: v8, Elasticsearch: 1.5.2
Thanks a lot
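For what it's worth, that NoSuchFileException means Elasticsearch was launched straight from the bin directory and can't find its config. A sketch of two ways around it, assuming an RPM-style install with config under /etc/elasticsearch (verify the paths on your system):

```shell
# Preferred: start via the init script so paths are set up for you
sudo service elasticsearch start

# Or, when running the binary directly, point it at the config directory
/usr/share/elasticsearch/bin/elasticsearch -Des.path.conf=/etc/elasticsearch
```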
If I change network.host from localhost to the IP address in elasticsearch.yml, then Kibana 4 stops working; it is still trying to write to Elasticsearch directly.
ErrorAbstract@http://10.64.1.141/index.js?_b=5930:80255:19
Generic@http://10.64.1.141/index.js?_b=5930:80287:3
respond@http://10.64.1.141/index.js?_b=5930:81568:1
checkRespForFailure@http://10.64.1.141/index.js?_b=5930:81534:7
[198]</AngularConnector.prototype.request/<@http://10.64.1.141/index.js?_b=5930:80203:7
qFactory/defer/deferred.promise.then/wrappedErrback@http://10.64.1.141/index.js?_b=5930:20882:31
qFactory/defer/deferred.promise.then/wrappedErrback@http://10.64.1.141/index.js?_b=5930:20882:31
qFactory/defer/deferred.promise.then/wrappedErrback@http://10.64.1.141/index.js?_b=5930:20882:31
qFactory/createInternalRejectedPromise/<.then/<@http://10.64.1.141/index.js?_b=5930:21015:29
$RootScopeProvider/this.$get</Scope.prototype.$eval@http://10.64.1.141/index.js?_b=5930:22002:16
$RootScopeProvider/this.$get</Scope.prototype.$digest@http://10.64.1.141/index.js?_b=5930:21814:15
$RootScopeProvider/this.$get</Scope.prototype.$apply@http://10.64.1.141/index.js?_b=5930:22106:13
done@http://10.64.1.141/index.js?_b=5930:17641:34
completeRequest@http://10.64.1.141/index.js?_b=5930:17855:7
createHttpBackend/</xhr.onreadystatechange@http://10.64.1.141/index.js?_b=5930:17794:1
Cheers
Hi, has anyone made filters for SugarCRM logs?
I made filters for Apache logs:
filter {
  if [type] == "http-access" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} %{USER:LoadTime} \[%{HTTPDATE:timestamphttp}\] (?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest}) %{NUMBER:response} (?:%{NUMBER:bytes}|-)" }
    }
    date {
      match => [ "timestamphttp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
filter {
  if [type] == "http-error" {
    grok {
      match => { "message" => "\[%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}\] \[%{WORD:severity}\] \[client %{IP:clientip}\] %{GREEDYDATA:message}" }
    }
  }
}
Sorry, I am not good at English.
Quick comment, but not a complaint: this process appears to leave one’s elasticsearch install in such a state that one cannot easily manage it from the web-based _plugins facility.
That is, I can curl to localhost:9200/_plugin but I cannot, of course, browse to it from my desktop computer.
This isn’t so much a problem, but it does make web-based frontends for ES more difficult to use.
First off, this is such a great tutorial. I have followed it to the ‘T’ and when I try to connect to Kibana using the FQDN/server IP, I get 502 Bad Gateway in /var/log/nginx/error.log. What gives? I get “no live upstream while connecting to upstream, client: x, server: logstash.domain” Thanks in advance!
This tutorial assumes that SELinux is disabled. You might be able to run this to fix the problem:
Then restart Nginx.
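If SELinux is enforcing, nginx is blocked from making proxy connections to Kibana by default. A common fix (an assumption; check `sestatus` first) is to allow httpd network connections:

```shell
# Allow nginx (httpd_t domain) to make outbound network connections, persistently
sudo setsebool -P httpd_can_network_connect 1
sudo systemctl restart nginx
```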
Hello,
First I want to thank the Tutorial … Very good …
I did all of the above steps successfully; however, when connecting to Kibana, at the “configure an index pattern” stage the “Create” button is not available, and I get the message “Unable to fetch mapping. Do you have indices matching the pattern?”
And in the upper left corner, a message: “Index Patterns: Warning. No default index pattern. You must select or create one to continue.”
Analyzing the logs on the client to which I copied the “logstash-forwarder.crt” key:
Log: /var/log/logstash-forwarder/logstash-forwarder.err
06/12/2015 10:10:54.026115 Waiting for two prospectors to initialise
06/12/2015 10:10:54.026286 Launching harvester on new file: /var/log/messages
06/12/2015 10:10:54.026331 Launching harvester on new file: /var/log/secure
06/12/2015 10:10:54.026488 Launching harvester on new file: /var/log/boot.log
06/12/2015 10:10:54.026531 Launching harvester on new file: /var/log/yum.log
06/12/2015 10:10:54.026765 harvest: "/var/log/messages" (offset snapshot: 0)
06/12/2015 10:10:54.027392 harvest: "/var/log/secure" (offset snapshot: 0)
06/12/2015 10:10:54.027513 harvest: "/var/log/boot.log" (offset snapshot: 0)
06/12/2015 10:10:54.027609 harvest: "/var/log/yum.log" (offset snapshot: 0)
06/12/2015 10:10:54.027642 All prospectors initialised with 0 states to persist
06/12/2015 10:10:54.027932 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
06/12/2015 10:11:16.406232 Connecting to [xxxx65]:5000 (xxxx)
06/12/2015 10:11:16.406517 Failure connecting to xxxx: dial tcp xxxx:5000: connection refused
I don’t know what else to analyze… I suspect that the key was not generated correctly with the command given in the tutorial; that is my last suspicion.
Can anyone help me?
Logstash forwarder can’t connect to Logstash. This means that either component is misconfigured or the certificates weren’t made correctly.
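To narrow it down, you can inspect the certificate on the client and test the TLS connection by hand (paths are from the tutorial; elk_server_private_ip is a placeholder for your ELK server's address):

```shell
# Inspect the certificate the forwarder trusts: subject and validity dates
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates

# From the client, confirm Logstash is reachable on port 5000 and the cert verifies
openssl s_client -connect elk_server_private_ip:5000 \
    -CAfile /etc/pki/tls/certs/logstash-forwarder.crt
```

If the second command fails with "connection refused", Logstash itself isn't listening; if it connects but verification fails, regenerate the certificate.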
I’m getting these errors:
{:timestamp=>“2015-06-14T12:38:56.651000+0300”, :message=>“retrying failed action with response code: 503”, :level=>:warn}
and also
{:timestamp=>“2015-06-14T12:42:04.361000+0300”, :message=>"too many attempts at sending event. dropping: 2015-06-14T00:18:45.000Z
In Kibana, no results are found.
Please help.
Very helpful tutorial. I have a CentOS 6.x ELK server (from the old first tutorial) with lots of log files. How do I migrate from the old server to the new one (this tutorial) while preserving the old log files?
I am a beginner with ELK. My Linux machine is a CentOS 7 minimal-version OS on a VM. My ELK server IP is 10.82.136.52. I have two problems: 1) I can’t see the Kibana web site through the nginx config: error 502 Bad Gateway, and locally on the server wget http://localhost:5601 fails with “connection refused”. 2) During the OpenSSL generation, I get these errors: “no such file bss_file.c” and “DEF_LOAD conf_def.c”. Would you please help me solve these problems? Best regards.
Most likely, there is something wrong with your
openssl.cnf
file.

Manicas, can you post your openssl.cnf file? Should I re-install OpenSSL on my “centos7 minimal” version?
I am a beginner with ELK. My Linux machine is a CentOS 7 minimal-version OS on a VM. My ELK server IP is 10.82.136.52. I can’t see the Kibana web site through the nginx config: error 502 Bad Gateway, and locally on the Logstash server wget http://localhost:5601 fails with “connection refused”. Would you please help me solve these problems? Best regards.
My ELK server IP is 10.82.136.52. In this case, the Logstash server’s private IP address is the same as the public IP address you use in the tutorial, isn’t it?
Thanks a lot for this tutorial; I am a beginner. I am on a CentOS 7 minimal version on a VirtualBox VM, IP 10.82.136.52 on the local network. I have no public IP (192.168.x.x), no FQDN. I installed all of ELK on this server with the default config, but not nginx,
Is SELinux enabled?
Yes, I checked: sestatus returns disabled.
How do I get logs from Logstash, or put Logstash into debug mode, to troubleshoot starting Logstash as a service?
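A sketch of the usual checks on a Logstash 1.x/2.x package install (paths assumed from the RPM layout; adjust if yours differ):

```shell
# Logstash's own log, where startup errors land
sudo tail -f /var/log/logstash/logstash.log

# Validate the pipeline configuration before starting the service
sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
```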
I am behind an enterprise proxy!
I am behind an enterprise proxy and I did all the configuration to define it: wget, yum, ~/.bash_profile, … Can you give me any advice, please? Best regards.
Thank you for your help. To summarize my troubleshooting: I followed the tutorial https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-centos-7. I am on a CentOS 7 minimal version on a VirtualBox VM, IP 10.82.136.52 on the local network. I have no public IP (192.168.x.x), no FQDN. I installed all of ELK on this server with the default config, but no nginx and no firewall-cmd (not started).
Hello, thank you so much for this very complete tutorial.
But I have a problem with the syslog gathering. I am trying to collect logs from a Brocade switch. The switch is correctly configured, because I can see the traffic with tcpdump on my ELK server:
I’ve created an input file (02-brocade-input.conf) in /etc/logstash/conf.d :
(I tried a lot of different configurations of the input file, but it doesn’t work.)
And also an output file :
But I can’t see anything with Kibana :(. However Kibana works, because I can see the logs from one of my CentOS servers.
Can you help me please ?
Best regards.
Please help, I am getting an error while starting Nginx:
nginx: [emerg] “server” directive is not allowed here in /etc/nginx/conf.d/kibana.conf:1
Any idea why this error is coming up?
Ok I figured out the error…
But my web page is still showing the default NGINX Test page
Can someone please post the content of their /etc/nginx/nginx.conf and /etc/nginx/conf.d/kibana.conf
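Since a couple of people asked, here is a minimal sketch of /etc/nginx/conf.d/kibana.conf in the spirit of the tutorial (example.com and the auth file are placeholders; it assumes the stock /etc/nginx/nginx.conf, whose http block includes conf.d/*.conf. A "server directive is not allowed here" error usually means the file got included outside that http block):

```nginx
server {
    listen 80;
    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        # proxy everything through to Kibana on its default port
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```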
Great tutorial, thank you.
At the end of the ‘Configure Logstash’ section, it says ‘sudo service logstash restart’. Should that be ‘systemctl restart logstash’ (with sudo if you’re not root)?
Hello, how can I configure NXLog to send logs to this Logstash? I get a “CertFile” error.
I used this for create the filters: http://www.ragingcomputer.com/2014/02/logstash-elasticsearch-kibana-for-windows-event-logs
And this for NxLog http://www.ragingcomputer.com/2014/02/sending-windows-event-logs-to-logstash-elasticsearch-kibana-with-nxlog
But it doesn’t work, because I need to complete the CertFile setting. Can you help me? Thanks.
Hey there. I don’t have a Windows machine to test on. Does the info in this link help at all?
Hi,
I’m unable to configure an index pattern.
It gives this error - “Unable to fetch mapping. Do you have indices that match the pattern?”
I changed my logstash.conf on both the server and the client to listen on port 5001. Could someone help me out with this?
That happens when Logstash isn’t receiving any logs from Logstash Forwarder, usually due to a configuration issue. Try looking through the previous comments for potential solutions.
I just wanted to thank you for this incredible guide. I’m just somewhat stuck at the “Copy SSL Certificate and Logstash Forwarder Package” part. I’m trying to forward all my pfSense 2.2.2 (192.168.3.254) logs to my ELK server (192.168.3.199). When you say: scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp
would that be a command I need to run on my ELK server or on my pfSense box?
I tried running scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.3.254:/tmp but it did not work; it says: The authenticity of host ‘192.168.3.254 (192.168.3.254)’ can’t be established.
I was wondering if someone could help me out?
Also, when you say “Paste the following code block into the file. Be sure to update the server_name to match your server’s name”, would that be my ELK server or my pfSense firewall?
Thank you
This tutorial is for logging for Linux servers. I think if you want to monitor a Pfsense firewall, you can log into your dashboard and add the Logstash Server as a remote syslog server. You would also have to configure a syslog input on your Logstash server.
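A sketch of what that syslog input could look like (the filename and port are assumptions; any free port above 1024 avoids needing extra privileges, and pfSense's remote syslog setting would point at it):

```conf
# /etc/logstash/conf.d/10-syslog-input.conf (hypothetical filename)
input {
  syslog {
    port => 5514
    type => "syslog"
  }
}
```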
Thank you for your response. I think I got the pfSense part. What I’m confused about is the part where you say “be sure to update the server_name to match your server’s name”: can I name it anything, for example example.com, logserver, or 192.168.3.199? And once I finished, I tried to access Kibana in my browser by entering 192.168.3.199 (the Logstash server), but no luck. Did I miss something, or which part should I redo?
Thank you again
If you’re not using a domain name, you should use the public IP address of your ELK server there.
Hi there thank you for your reply. this is what i changed
Then in my URL bar I would input 192.168.3.199:5601, no luck; then in PuTTY I tried
and I would get: Connecting to localhost (localhost)|::1|:5601… failed: Connection refused.
Did i Miss something?
Thank you again
192.168.x.x is a private address. You should be using the server’s public IP address there. Then you need to access
http://your_public_ip
to get to Kibana.

Hi, thank you again for your reply; sorry for my ignorance. I put in my external IP where you said, and I also needed to open the NAT ports 80 and 5601, which I did, and no luck :( I was wondering if, instead of the external IP, it would be possible to stay on the LAN, just accessing 192.168.3.199 from my computer when connected to the LAN. Which part would I need to edit to make that possible?
Thank you again, and sorry for my incompetence.
Yeah, you should be able to access it over 192.168.3.199 on port 80. Sorry, I assumed you were using a cloud server. You may have a firewall issue. It might help to describe your setup and what you are trying to do.
Hi, thank you again for replying; I should be the one saying sorry, not you :). Here is my setup:
http://s16.postimg.org/pwdhjmlr9/Drawing2.jpg
Thank you again
So everything is going through your router? Just use the private IP address for everything, and make sure that all of the ports in use are allowed on the router (among your servers).
Hi, thank you so much for replying again. I reinstalled from scratch, but this time I replaced every “localhost” in the guide with 192.168.3.199, and still nothing :( Please see the pics. Did I miss something?
http://s10.postimg.org/ni35qxddl/Clipboarder_2015_08_23.png http://s10.postimg.org/6wvjb9m9l/Clipboarder_2015_08_23_002.png http://s10.postimg.org/gg584q9rt/Clipboarder_2015_08_23_005.png
Thank you again
**NVM, it works now. Must have been an intermittent connection issue.**
Yum error when installing logstash. CentOS Linux release 7.1.1503 (Core)
After logging in to Kibana for the first time, I did not see the dropdown menu with @timestamp; instead it says “Unable to fetch mapping. Do you have indices matching the pattern?”
This usually means that one or more components are misconfigured, and your logs aren’t getting stored in Elasticsearch. If you take a look at the diagram in the Our Goal section, the components in question are everything between Elasticsearch and Logstash Forwarder.
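One quick check is to ask Elasticsearch which indices exist (run on the ELK server); if no logstash-* index shows up, the break is upstream of Elasticsearch:

```shell
# List all indices; a healthy pipeline shows entries like logstash-2016.02.01
curl 'localhost:9200/_cat/indices?v'
```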
Great tutorial! Saved me TONS of time and head-scratching…
Now that I can get into Kibana… it wants me to “Configure an index pattern”. I get the concept… but have no idea where to do that. Some googling reveals that I might need to make a .kibana file with a default index type in it. Looking in my log files confirms that (to some degree): “POST /elasticsearch/.kibana/visualization/_search?size=100 HTTP/1.1”, upstream: “http://[::1]:5601/elasticsearch/.kibana/visualization/_search?size=100”. But I am not sure exactly which “elasticsearch” directory I am supposed to put that in. I have over a dozen various paths with a directory named “elasticsearch” in them.
Any clues? THANK YOU!
Any reason you haven’t used the Kibana repo?
Hey, on a vanilla ELK stack droplet, I am trying to install a new Logstash plugin:
…and that is it; the process hangs right there. The process list shows an idle Logstash process, but after some minutes it just dies, and the shell connection breaks with a broken pipe.
Any idea where to start looking?
Thanks
Hi, I am very impressed by this tutorial. Unfortunately I am getting the dreaded message: “Unable to fetch mapping. Do you have indices matching the pattern?” Googling did not help. The configuration on the client seems correct. The server side looks good too:

[root@elk ~]# curl 'localhost:9200/_cat/indices?v'
health status index   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana   1   1          1            0      2.5kb          2.5kb

Are there any log files you’d like me to tail and paste here?
I basically followed the tutorial.
This is from the nginx error log: tail -f /var/log/nginx/error.log
2015/09/24 04:54:26 [error] 2076#0: *33 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.14, server: elk.kartikv.com, request: "GET /elasticsearch/logstash-*/_mapping/field/*?ignore_unavailable=false&allow_no_indices=false&include_defaults=true HTTP/1.1", upstream: "http://[::1]:5601/elasticsearch/logstash-*/_mapping/field/*?ignore_unavailable=false&allow_no_indices=false&include_defaults=true", host: "elk.kartikv.com", referrer: "http://elk.kartikv.com/"
[root@elk ~]# netstat -anp | grep :5601
tcp        0      0 127.0.0.1:5601       0.0.0.0:*        LISTEN      682/node
[root@elk ~]#
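For what it's worth, those two outputs together suggest an address-family mismatch: nginx is resolving localhost to [::1], while Kibana is listening only on 127.0.0.1. A possible fix (a sketch against the tutorial's kibana.conf) is to make the proxy target explicitly IPv4:

```nginx
    # inside the server block of /etc/nginx/conf.d/kibana.conf
    location / {
        proxy_pass http://127.0.0.1:5601;  # was http://localhost:5601
    }
```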
[root@sys1 ~]# telnet elk 5001
Trying 192.168.1.235…
Connected to elk.
Escape character is '^]'.
^CConnection closed by foreign host.
[root@elk logstash]# tail -f logstash.log <snip>