Adding Logstash Filters To Improve Centralized Logging

Introduction

Logstash is a powerful tool for centralizing and analyzing logs, which can help to provide an overview of your environment and to identify issues with your servers. One way to increase the effectiveness of your ELK Stack (Elasticsearch, Logstash, and Kibana) setup is to collect important application logs and structure the log data by employing filters, so the data can be readily analyzed and queried. We will build our filters around “grok” patterns, which will parse the data in the logs into useful bits of information.

This guide is a sequel to the How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04 tutorial, and focuses primarily on adding Logstash filters for various common application logs.

Prerequisites

To follow this tutorial, you must have a working Logstash server that is receiving logs from a shipper such as Filebeat. If you do not have Logstash set up to receive logs, here is the tutorial that will get you started: How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04.

ELK Server Assumptions

  • Logstash is installed in /opt/logstash
  • Your Logstash configuration files are located in /etc/logstash/conf.d
  • You have an input file named 02-beats-input.conf
  • You have an output file named 30-elasticsearch-output.conf

You may need to create the patterns directory by running these commands on your Logstash Server:

  1. sudo mkdir -p /opt/logstash/patterns
  2. sudo chown logstash: /opt/logstash/patterns

Client Server Assumptions

  • You have Filebeat configured, on each application server, to send syslog/auth.log to your Logstash server (as in the Set Up Filebeat section of the prerequisite tutorial)

If your setup differs, simply adjust this guide to match your environment.

About Grok

Grok works by matching text against patterns, which are based on regular expressions, and assigning the matched pieces to identifiers.

The syntax for a grok pattern is %{PATTERN:IDENTIFIER}. A Logstash filter includes a sequence of grok patterns that matches and assigns various pieces of a log message to various identifiers, which is how the logs are given structure.
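
For example, given a hypothetical log line such as 55.3.244.1 GET /index.html 15824 0.043, a grok pattern built from the default patterns might look like this (illustrative only, not part of this tutorial's configuration):

Grok Example
%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}

A matching message would be parsed into client, method, request, bytes, and duration fields that can then be analyzed and queried individually.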

To learn more about grok, visit the Logstash grok page, and the Logstash Default Patterns listing.

How To Use This Guide

Each main section following this will include the additional configuration details that are necessary to gather and filter logs for a given application. For each application that you want to log and filter, you will have to make some configuration changes on both the client server (Filebeat) and the Logstash server.

Logstash Patterns Subsection

If there is a Logstash Patterns subsection, it will contain grok patterns that can be added to a new file in /opt/logstash/patterns on the Logstash Server. This will allow you to use the new patterns in Logstash filters.

Logstash Filter Subsection

The Logstash Filter subsections will include a filter that can be added to a new file, between the input and output configuration files, in /etc/logstash/conf.d on the Logstash Server. The filter determines how the Logstash server parses the relevant log files. Remember to restart the Logstash service after adding a new filter, to load your changes.
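
Before restarting, it can be useful to check the combined configuration for syntax errors. The exact command depends on your Logstash version; with the package layout assumed in this guide, something like the following should report whether the configuration is valid:

Configuration Test (optional)
  1. sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/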

Filebeat Prospector Subsection

Filebeat Prospectors are used to specify which logs to send to Logstash. Additional prospector configurations should be added to the /etc/filebeat/filebeat.yml file directly after existing prospectors in the prospectors section:

Prospector Examples
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      document_type: syslog
    -
      paths:
        - /var/log/app/*.log
      document_type: app-access
...

In the above example, the second prospector sends all of the .log files in /var/log/app/ to Logstash with the app-access type. After any changes are made, Filebeat must be reloaded to put them into effect.

Now that you know how to use this guide, the rest of the guide will show you how to gather and filter application logs!

Application: Nginx

Logstash Patterns: Nginx

Nginx log patterns are not included in Logstash’s default patterns, so we will add Nginx patterns manually.

On your ELK server, create a new pattern file called nginx:

  1. sudo vi /opt/logstash/patterns/nginx

Then insert the following lines:

Nginx Grok Pattern
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent}

Save and exit. The NGINXACCESS pattern parses the log message and assigns the data to various identifiers (e.g. clientip, ident, auth, etc.).
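
To illustrate what this pattern does (the sample line below is made up for demonstration), a default-format Nginx access log entry such as:

Sample Nginx Access Log Entry (illustrative)
192.0.2.10 - - [06/Feb/2016:10:15:32 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"

would be parsed into fields such as clientip (192.0.2.10), timestamp, verb (GET), request (/index.html), httpversion, response (200), bytes (612), referrer, and agent.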

Next, change the ownership of the pattern file to logstash:

  1. sudo chown logstash: /opt/logstash/patterns/nginx

Logstash Filter: Nginx

On your ELK server, create a new filter configuration file called 11-nginx-filter.conf:

  1. sudo vi /etc/logstash/conf.d/11-nginx-filter.conf

Then add the following filter:

Nginx Filter
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}

Save and exit. Note that this filter will attempt to match messages of nginx-access type with the NGINXACCESS pattern, defined above.
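
If you want to verify the pattern against a sample line before restarting, you can run Logstash interactively. This is a minimal sketch, assuming the /opt/logstash layout used in this guide and a made-up sample message; the rubydebug output should show the parsed fields:

Grok Test (optional sketch)
  1. echo '192.0.2.10 - - [06/Feb/2016:10:15:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0"' | /opt/logstash/bin/logstash -e 'input { stdin {} } filter { grok { patterns_dir => ["/opt/logstash/patterns"] match => { "message" => "%{NGINXACCESS}" } } } output { stdout { codec => rubydebug } }'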

Now restart Logstash to reload the configuration:

  1. sudo service logstash restart

Filebeat Prospector: Nginx

On your Nginx servers, open the filebeat.yml configuration file for editing:

  1. sudo vi /etc/filebeat/filebeat.yml

Add the following Prospector in the filebeat section to send the Nginx access logs as type nginx-access to your Logstash server:

Nginx Prospector
    -
      paths:
        - /var/log/nginx/access.log
      document_type: nginx-access

Save and exit. Reload Filebeat to put the changes into effect:

  1. sudo service filebeat restart

Now your Nginx logs will be gathered and filtered!

Application: Apache HTTP Web Server

Apache’s log patterns are included in the default Logstash patterns, so it is fairly easy to set up a filter for it.

Note: If you are using a RedHat variant, such as CentOS, the logs are located at /var/log/httpd instead of /var/log/apache2, which is used in the examples.

Logstash Filter: Apache

On your ELK server, create a new filter configuration file called 12-apache.conf:

  1. sudo vi /etc/logstash/conf.d/12-apache.conf

Then add the following filter:

Apache Filter
filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}

Save and exit. Note that this filter will attempt to match messages of apache-access type with the COMBINEDAPACHELOG pattern, one of the default Logstash patterns.
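
For reference (the line below is only an illustration), a combined-format Apache access log entry such as:

Sample Apache Access Log Entry (illustrative)
192.0.2.25 - frank [10/Oct/2015:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08"

would be parsed into fields such as clientip, ident, auth, timestamp, verb, request, response, bytes, referrer, and agent.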

Now restart Logstash to reload the configuration:

  1. sudo service logstash restart

Filebeat Prospector: Apache

On your Apache servers, open the filebeat.yml configuration file for editing:

  1. sudo vi /etc/filebeat/filebeat.yml

Add the following Prospector in the filebeat section to send the Apache logs as type apache-access to your Logstash server:

Apache Prospector
    -
      paths:
        - /var/log/apache2/access.log
      document_type: apache-access

Save and exit. Reload Filebeat to put the changes into effect:

  1. sudo service filebeat restart

Now your Apache logs will be gathered and filtered!

Conclusion

It is possible to collect and parse logs of pretty much any type. Try writing your own filters and patterns for other log files.
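
For example, an application that writes lines like 2016-02-06 10:15:32 WARN payment: card declined could be handled with a small custom filter. The following is only a sketch, using made-up field names and the app-access type from the prospector example earlier in this guide:

Example Custom Filter (illustrative)
filter {
  if [type] == "app-access" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{WORD:component}: %{GREEDYDATA:details}" }
    }
  }
}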

Feel free to comment with filters that you would like to see, or with patterns of your own!

If you aren’t familiar with using Kibana, check out this tutorial: How To Use Kibana Visualizations and Dashboards.


Tutorial Series: Centralized Logging with ELK Stack (Elasticsearch, Logstash, and Kibana) On Ubuntu 14.04

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

This series will teach you how to install Logstash and Kibana on Ubuntu, how to add filters to structure your log data, and how to use Kibana.

Tutorial Series: Centralized Logging with Logstash and Kibana On CentOS 7

This series will teach you how to install Logstash and Kibana on CentOS 7, how to add filters to structure your log data, and how to use Kibana.

Comments

Hi. I am trying to troubleshoot logstash not collecting nginx logs. I followed the tutorial on setting up elasticsearch, kibana and logstash now this one, but the nginx logs don’t seem to be flowing through.

Is this logstash-forwarder config correct?

  "network": {
    "servers": [ "localhost:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },

  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
       ],
      "fields": { "type": "syslog" }
    }


    {
      "paths": [
        "/var/log/nginx/ElasticSearch01.access.log"
       ],
      "fields": { "type": "nginx-access" }
    }
           ]

Thanks Jock

Kamal Nasser (DigitalOcean Employee), August 12, 2014

@jock.forrester: There’s a missing comma between the two “paths” objects. Try using this instead:

   "network": {  
      "servers": [  
         "localhost:5000"
      ],
      "timeout": 15,
      "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
   },
   "files": [  
      {  
         "paths": [  
            "/var/log/syslog",
            "/var/log/auth.log"
         ],
         "fields": {  
            "type": "syslog"
         }
      },
      {  
         "paths": [  
            "/var/log/nginx/ElasticSearch01.access.log"
         ],
         "fields": {  
            "type": "nginx-access"
         }
      }
   ]

Hi, I’m using cisco ASA 5505. When i enter to /opt/logstash/patterns/firewalls i dont find the ASA 5505. Also i cant change the option … All I want is to have ip source ; ip destination ; port source ; port destination as field in kabana. Thanks

Mitchell Anicas (DigitalOcean Employee), August 20, 2014

@sammdoun post a sample log

{"message":"<166>Aug 20 2014 05:51:34: %ASA-6-302014: Teardown TCP connection 8440 for inside:192.168.2.209/51483 to outside:104.16.13.8/80 duration 0:00:53 bytes 13984 TCP FINs\n","@version":"1","@timestamp":"2014-08-20T14:17:58.452Z","host":"192.168.2.1","tags":["_grokparsefailure"],"priority":13,…

Mitchell Anicas (DigitalOcean Employee), August 20, 2014

@sammdoun: Assuming your message is (and the rest of the relevant logs are similar):

<166>Aug 20 2014 05:51:34: %ASA-6-302014: Teardown TCP connection 8440 for inside:192.168.2.209/51483 to outside:104.16.13.8/80 duration 0:00:53 bytes 13984 TCP FINs\n

The following pattern should match and name the fields you specified:

<%{INT}>%{MONTH} %{MONTHDAY} %{YEAR} %{TIME}: %{GREEDYDATA}inside:%{IP:source_ip}/%{INT:source_port} to outside:%{IP:destination_ip}/%{INT:destination_port}

I’m assuming the first IP/port is source and the second is destination.

ok :) but how i can add them as field in my kibana

Can you see anything wrong with my logstash-forwarder config? Logs are not sent when configured as follows;

{
  "network": {
    "servers": [ "host:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/apache2/*.log" ],
      "fields": { "type": "apache-all" }
    },
   "files": [  
      {  
         "paths": [  
            "/var/log/syslog",
            "/var/log/auth.log"
         ],
         "fields": {  
            "type": "syslog"
         }
      }
   ]
}

Logs are sent ok when it is configured as follows;

{
  "network": {
    "servers": [ "host:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/apache2/*.log" ],
      "fields": { "type": "apache" }
    }
   ]
}
Mitchell Anicas (DigitalOcean Employee), September 2, 2014

You should only have one “files” section. Something like the following:

{
  "network": {
    "servers": [ "host:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/apache2/*.log" ],
      "fields": { "type": "apache" }
    },
    {
      "paths": [
         "/var/log/syslog",
         "/var/log/auth.log"
      ],
      "fields": {
         "type": "syslog"
     }
  }
 ]
}

@manicas ! thank you verry much ! i’m working on SIEM project and you really help me. Actually the system used to work two weeks ago, but now i have an error message which is " Oops! SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]" i think it’s due to server private ip adress, because it’s dhcp, i’ve generated another ssl certificate, with the new adress, but i still have this error ! can you help me please !

How to install plugins? bin/plugin not inside elasticsearch directory.

if we want to add apache error and modsecurity so it would like for apache error {

if [type] == "apache-error" { grok { match => { "message" => "%{APACHEERRORLOG}" } }

is that the right configuration. Or need some change that’s the filter my access log is working fine so do i need to make another input file for error log kindly help

That is my forwarder { "network": { "servers": [ "ip:5000" ], "timeout": 15, "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt" }, "files": [

{ "paths": [ "/var/log/syslog", "/var/log/auth.log" ], "fields": { "type": "syslog" } } , { "paths": [ "/var/log/apache2/access.log" ], "fields": { "type": "apache-access" } } , { "paths": [ "/var/log/apache2/error.log" ], "fields": { "type": "apache-error" } }

]

Here is my 12-apacheerror.conf

filter { if [type] == "apache-error" { grok { match => { "message" => "%{APACHEERRORLOG}" } } } }

But i am unable to see error logs in Kibana where as access logs are coming

Hi,

i am trying to collect the apache logs, i follow your config. after editing the logstash-forwader config i do, sudo service logstash-forwarder force-reload shows no process is running. i check the status: sudo service logstash-forwarder status shows:

  • logstash-forwarder is not running

logstash log:

{:timestamp=>"2014-12-05T18:35:21.509000+0800", :message=>"+---------------------------------------------------------+\n| An unexpected error occurred. This is probably a bug. |\n| You can find help with this problem in a few places: |\n| |\n| * chat: #logstash IRC channel on freenode irc. |\n| IRC via the web: http://goo.gl/TI4Ro |\n| * email: logstash-users@googlegroups.com |\n| * bug system: https://logstash.jira.com/ |\n| |\n+---------------------------------------------------------+\nThe error reported is: \n Address already in use - bind - Address already in use"}

but i can able to see syslog & auth.log on kibana dashboard.

can you help me …

Thanks

probably you got multiple logstash config files, erase some of them and restart logstash

Thank you for sharing your article series , I have done successfully without encountering problems. I 'm doing one LAB “logstash with mysql” , but no success . you have articles about this issue . thank you

Hi, I’m trying to parse a log file that comes with JavaStackTrace events. I use the multiline codec but I can’t get the right output. I need to combine the stack trace with its event. Anyone can help please?

 - 2014-01-14 11:09:38,623 [main] ERROR (support.context.ContextFactory) Error getting connection to database jdbc:oracle:thin at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)

input { file { path => "/root/test2.log" start_position => "beginning" codec => multiline { pattern => "^ - %{TIMESTAMP_ISO8601} " negate => true what => "previous" } } }

filter { grok { match => [ "message", " -%{SPACE}%{SPACE}%{TIMESTAMP_ISO8601:time} [%{WORD:main}] %{LOGLEVEL:loglevel}%{SPACE}%{SPACE}(%{JAVACLASS:class}) %{GREEDYDATA:mydata} %{JAVASTACKTRACEPART}"] } date { match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ] } }

output { elasticsearch { host => "194.3.227.23" }}

You should change sudo vi /etc/logstash-forwarder to sudo vi /etc/logstash-forwarder.conf to synchronize between part 1 and 2 :-)

You should update this command: sudo service logstash-forwarder force-reload Which gives this output: Usage: {start|force-start|stop|force-start|force-stop|status|restart}

So it should say: sudo service logstash-forwarder restart

Mitchell Anicas (DigitalOcean Employee), March 13, 2015

Thanks. Updated!

Does this logstash pattern work for the Nginx error log, too?

This comment has been deleted

    the logstash pattern defined in logstash server, NGUSERNAME [a-zA-Z\.\@\-\+_%]+ NGUSER %{NGUSERNAME} NGINXACCESS %{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent}

    those names, for example NGUSERNAME and NGUSER, are just arbitrary names defined by the user? What I mean is, could I use NGINXUSERNAME and NGINXUSER instead?

    HI I have the same question, when I added Nginx stuff as tutorial says, I got following error and logstash stopped:

    {:timestamp=>"2015-04-19T14:49:27.140000+0430", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
    {:timestamp=>"2015-04-19T14:49:27.144000+0430", :message=>"Exception in lumberjack input", :exception=>#<LogStash::ShutdownSignal: LogStash::ShutdownSignal>, :level=>:error}
    {:timestamp=>"2015-04-19T14:49:39.038000+0430", :message=>"The error reported is: \n pattern %{NGUSERNAME} not defined"}
    {:timestamp=>"2015-04-19T15:15:40.602000+0430", :message=>"The error reported is: \n pattern %{NGUSER:ident} not defined"}
    {:timestamp=>"2015-04-19T15:20:08.404000+0430", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
    {:timestamp=>"2015-04-19T15:20:08.408000+0430", :message=>"Exception in lumberjack input", :exception=>#<LogStash::ShutdownSignal: LogStash::ShutdownSignal>, :level=>:error}

    what is the soloution? thanks

    Mitchell Anicas (DigitalOcean Employee), April 20, 2015

    Did you follow all the steps in the Logstash Patterns: Nginx section?

    Thank you Mr.Anicas, did that part again, this time everything worked!

    Hi Mitchell,

    i have a question about how logstash works. What i can’t figure out is when logstash gather a log, and if i am able to gather logs from “simple” files. If i have a simple file to write logs in it, can i gather these logs? For example :

    1. mkdir /var/log/mytest.log
    2. “This is a test log” > mytest.log

    The question is can logstash gather that log?

    Thanx.

    Kamal Nasser (DigitalOcean Employee), April 27, 2015

    Yes, Logstash can read logs from text files. It would look something like this:

    input {
      file {
        path => "/var/log/mytest.log"
        type => "test-log"   # optional
      }
    }
    

    If you want to import old data that already exists in the file, add start_position => "beginning".

    I have tried that but i get an error. My configuration is this:

    In /etc/logstash/conf.d/logstash.conf i have this : input { file { path => "/var/log/test_access_log" start_position => beginning } }

    filter { if [path] =~ "access" { mutate { replace => { "type" => "apache_access" } } grok { match => { "message" => "%{COMBINEDAPACHELOG}" } } } date { match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ] } }

    output { elasticsearch { host => localhost } stdout { codec => rubydebug } }

    Then test_access_log has inside the following as a sample:

    71.141.244.242 - kurt [18/May/2011:01:48:10 -0700] “GET /admin HTTP/1.1” 301 566 “-” “Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3” 134.39.72.245 - - [18/May/2011:12:40:18 -0700] “GET /favicon.ico HTTP/1.1” 200 1189 “-” “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; .NET4.0C; .NET4.0E)” 98.83.179.51 - - [18/May/2011:19:35:08 -0700] “GET /css/main.css HTTP/1.1” 200 1837 “http://www.safesand.com/information.htm” "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"71.141.244.242 - kurt [18/May/2011:01:48:10 -0700] “GET /admin HTTP/1.1” 301 566 “-” “Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3” 134.39.72.245 - - [18/May/2011:12:40:18 -0700] “GET /favicon.ico HTTP/1.1” 200 1189 “-” “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; .NET4.0C; .NET4.0E)” 98.83.179.51 - - [18/May/2011:19:35:08 -0700] “GET /css/main.css HTTP/1.1” 200 1837 “http://www.safesand.com/information.htm” “Mozilla/5.0 (Windows NT 6.0; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1”

    I start logstash : ubuntu@logstashserver:/var/log$ sudo service logstash start logstash started.

    ubuntu@logstashserver:/var/log$ sudo service logstash status logstash is not running

    What logstash error logs say is that:

    ubuntu@logstashserver:/etc/logstash/conf.d$ cat /var/log/logstash/logstash.err NotImplementedError: stat.st_dev unsupported or native support failed to load dev_major at org/jruby/RubyFileStat.java:188 _discover_file at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:150 each at org/jruby/RubyArray.java:1613 _discover_file at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:132 watch at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/watch.rb:38 tail at /opt/logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.1/lib/filewatch/tail.rb:68 run at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:133 each at org/jruby/RubyArray.java:1613 run at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-0.1.6/lib/logstash/inputs/file.rb:133 inputworker at /opt/logstash/lib/logstash/pipeline.rb:174 start_input at /opt/logstash/lib/logstash/pipeline.rb:168

    Can anyone find out where the problem is? Thanx!

    It seems that the problem is in java libs from what i have found through the internet. I tackled the problem by doing the job in a second vm and having the forwarder to ship it to my Logstash Server.

    What i need to know now is when logstash-forwarder understand that a log event occured to ship it. For example if write sth in my dummy log file and then start the forwarder this works fine, but when i write sth new to the log file this log will not be forwarded to the Server until i restarted the logstash-forwarder.

    So is there any way to trace this writes in my log files without restarting the service? Thanx!

    I have below multiline logs and forwarding them using Logstash .

    Here is the filter which I have written ,

    filter { if [path] =~ "xx-xx-xx-xx" { multiline { pattern => "^%{TIMESTAMP_ISO8601:timestamp}" negate => true what => "previous" }

    However , it is only forwarding the first line and ignoring the rest . Please suggest what kind of filter should I use.


    2015-03-26 00:01:06.961 INFO [[ACTIVE] ExecuteThread: ‘34’ for queue: ‘weblogic.kernel.Default (self-tuning)’] [com.xx.xx.xx.xx.xx.xxManager] (xxManager.java:50) - Error Number = 0 Error Category = null Exception = null Transaction Id = xx Timing => Start = 2015-03-26 00:01:06.959 1427299266959 Stop = 2015-03-26 00:01:06.959 1427299266959 Delta = 0ms (0:0:0.000) Request Type = com.xx.xx.xx.xx.xx.xxManager.xxRequestInfo Request => Ipay88RequestInfo = com.xx.xx.xx.xx.xx.xx.mio.value.PaymentRequest=>{MerchantCode=xx&PaymentId=2&RefNo=xx7&Amount=xx&Currency=xx&ProdDesc=xx&UserName=xx&UserEmail=xx&UserContact=xx&Remark=&Lang=UTF-8&Signature=xx&ResponseURL=https://xx/storefront/xx&BackendURL=https://xx:443/xx/xx} Response = null Tracking Id = null Tracking From = null Admin Login Id = null User Login Id = null Module = null Sub Module = null Function = null Sub Results = null Attributes = null Messages = empty

    Regards , Anuj

    Manicas - any thought to doing a tutorial on how to add Nginx LDAP to our fancy new ELK stacks we’ve all built? There seems to be a lot of demand for this, but most people on the internet seem to be intentionally vague when writing how to’s… you seem to be the exception. Help a brother out! :)

    Mitchell Anicas (DigitalOcean Employee), May 4, 2015

    Hey Brother,

    Do you want to add Nginx LDAP authentication to protect your ELK stack? Or are you looking to parse the logs?

    To protect it/give granular access to coworkers. Ideally via AD Groups.

    On your Logstash server, create a new filter configuration file called 11-nginx.conf:
    

    What is the reasoning behind the naming of these files? Can you pick a random number?

    Mitchell Anicas (DigitalOcean Employee), July 14, 2015

    It’s a standard Unix/Linux convention to use directories that are suffixed with “.d” to be filled with configuration files that are included or sourced in the application’s configuration. The numbers are useful for controlling the order in which they are loaded, from lowest to highest.

    e.g. 01-lumberjack-in then 11-nginx then 30-lumberjack-out

    For this particular configuration chain, you can add filters (between the input and output) by starting their filenames with numbers between 01 and 30.

    hi,

    I just want to send standard cisco router syslog messages to logstash. How do I do that?

    Mitchell Anicas (DigitalOcean Employee), July 27, 2015

    You need to write a custom Grok pattern to match the Cisco syslog messages.

    Hi,

    You are using Logstash Forwarder on port 5000 to receive your logs -> Isn’t it just Logstash listening on port 5000?

    Mitchell Anicas (DigitalOcean Employee), July 31, 2015

    Yeah, sorry about that. I updated the tutorial.

    Oh, I would forget to share with you this amazing apps:

    Mitchell Anicas (DigitalOcean Employee), July 31, 2015

    :thumbsup:

    Hello, I have used your article to configure ELK, It was really nice,

    I’m having a issue in configuring magento exception log which has multi-line exception (with stack trace) , What I did was, I created new .conf file in logstash server with following, after that it does not shipping the exceptions to elastic search, when I commenting the Input section It start to send with _grokparsefailure.

    input {
        file {
            type => "staging-all-lincraft-logs"
            path => "/home/deploy/lindcraft/current/codepool/var/log/exception.log"
            codec => multiline {
                pattern => "^%{TIMESTAMP_ISO8601}"
                negate => true
                what => previous
            }
        }
    }
    
    
    filter {
        if [type] == "staging-all-lincraft-logs" {
            mutate {
              gsub => ["message", "\n", " "]
            }
    
            grok {
                match => [ "message", "%{TIMESTAMP_ISO8601:date} %{GREEDYDATA:exception} Stack trace: %{GREEDYDATA:stack_trace}" ]
            }
        }
    }
    
    

    And files 01-lumberjack-input.conf and 30-lumberjack-output.conf remain without change.

    in addition to that I have added following to the forwarder config in the app server as follows

      "files": [
    
        {
          "paths": [
            "/home/deploy/lindcraft/current/codepool/var/log/*.log"
           ],
          "fields": { "type": "staging-all-lincraft-logs"
    
    
    	  }
        }
    

    (in forwarder I’m sending all and in the logstach i’m catching only the exception.log file)

    HI, i am trying to get openvpn.log, anyone could teach me for the filter and pattern?
