How To Map User Location with GeoIP and ELK (Elasticsearch, Logstash, and Kibana)

Introduction

IP geolocation, the process of determining the physical location of an IP address, can be leveraged for a variety of purposes, such as content personalization and traffic analysis. Analyzing traffic by geolocation can provide valuable insight into your user base, since it lets you see where your users are coming from. This can help you make informed decisions about the ideal geographical location(s) of your application servers, and it gives you a clearer picture of your current audience.

In this tutorial, we will show you how to create a visual geo-mapping of the IP addresses of your application’s users by using Elasticsearch, Logstash, and Kibana.

Here’s a short explanation of how it all works. Logstash uses a GeoIP database to convert IP addresses into latitude and longitude coordinate pairs, i.e. the approximate physical locations of IP addresses. The coordinate data is stored in Elasticsearch in geo_point fields and is also converted into geohash strings. Kibana can then read the geohash strings and draw them as points on a map of the Earth. In Kibana 4, this is known as a Tile Map visualization.
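For instance, a single event stored in Elasticsearch might include a geoip object along these lines (the values are invented for illustration; field names follow Logstash’s default geoip output, with location holding [longitude, latitude]):

{
  "geoip": {
    "ip": "203.0.113.50",
    "country_name": "United Kingdom",
    "city_name": "London",
    "latitude": 51.5142,
    "longitude": -0.0931,
    "location": [-0.0931, 51.5142]
  }
}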

Let’s take a look at the prerequisites now.

Prerequisites

To follow this tutorial, you must have a working ELK stack. You must also have logs that contain IP addresses that can be filtered into a field, such as web server access logs. If you don’t already have these two things, follow the first two tutorials in this series: the first sets up an ELK stack, and the second, Adding Filters to Logstash, shows you how to gather and filter Nginx or Apache access logs.

Add geo_point Mapping to Filebeat Index

Assuming you followed the prerequisite tutorials, you have already done this. However, we are including this step again in case you skipped it, because the Tile Map visualization requires that your GeoIP coordinates be stored in Elasticsearch as a geo_point type.

On the server that Elasticsearch is installed on, download the Filebeat index template to your home directory:

  1. cd ~
  2. curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

Then load the template into Elasticsearch with this command:

  1. curl -XPUT 'http://localhost:9200/_template/filebeat' -d@filebeat-index-template.json
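If the template loads successfully, Elasticsearch responds with {"acknowledged":true}. You can also double-check that it is registered (this step is optional, and the exact output varies by Elasticsearch version):

  1. curl -XGET 'http://localhost:9200/_template/filebeat?pretty'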

Configure Logstash to use GeoIP

To get Logstash to store GeoIP coordinates, you need to identify an application that generates logs that contain a public IP address that you can filter as a discrete field. A fairly ubiquitous application that generates logs with this information is a web server, such as Nginx or Apache. We will use Nginx access logs as the example. If you’re using different logs, make the necessary adjustments to the example.
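For reference, a typical Nginx access log entry in the default combined format looks like this, with the client IP as the very first field (the IP below is from a documentation range):

203.0.113.50 - - [05/Feb/2016:14:32:01 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"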

In the Adding Filters to Logstash tutorial, the Nginx filter is stored in a file called 11-nginx-filter.conf. If your filter is located elsewhere, edit that file instead.

Let’s edit the Nginx filter now:

  1. sudo vi /etc/logstash/conf.d/11-nginx-filter.conf

Under the grok section, add the geoip section shown below:

11-nginx-filter.conf
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
    geoip {
      source => "clientip"
    }
  }
}

This configures Logstash to look up geolocation data for the IP address stored in the clientip field (specified by source). We are specifying the source as clientip because that is the name of the field that the Nginx user IP address is stored in. Be sure to change this value if you are storing the IP address information in a different field.
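As a hypothetical example, if your logs stored the address in a field named remote_addr instead, the block would read:

geoip {
  source => "remote_addr"
}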

Save and exit.

To put the changes into effect, let’s restart Logstash:

  1. sudo service logstash restart
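If Logstash fails to restart, a configuration syntax check can help pinpoint the problem. With the Logstash 1.5/2.x packages, this should look roughly like the following (the install path is an assumption; adjust it for your system):

  1. sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/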

If everything was configured correctly, Logstash should now be storing the GeoIP coordinates with your Nginx access logs (or whichever application is generating the logs). Note that this change is not retroactive, so your previously gathered logs will not have GeoIP information added. Let’s verify that the GeoIP functionality is working properly in Kibana.

Connect to Kibana

The easiest way to verify whether Logstash was configured correctly, with GeoIP enabled, is to open Kibana in a web browser. Do that now.

Find a log message that your application generated since you enabled the GeoIP module in Logstash. Following the Nginx example, we can search Kibana for type: "nginx-access" to narrow the log selection.

Then expand one of the messages to look at the table of fields. You should see some new geoip fields that contain information about how the IP address was mapped to a real geographical location. For example:

Example GeoIP Fields
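The exact fields depend on your GeoIP database version, but you should see entries along these lines (values are illustrative):

geoip.ip            203.0.113.50
geoip.country_name  United Kingdom
geoip.city_name     London
geoip.latitude      51.5142
geoip.longitude     -0.0931
geoip.location      [-0.0931, 51.5142]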

Note: If you don’t see any logs, generate some by accessing your application, and ensure that your time filter is set to a recent time.

Also note that Kibana may not be able to resolve a geolocation for every IP address. If you’re just testing with one address and it doesn’t seem to be working, try some others before troubleshooting.

If, after all that, you don’t see any GeoIP information (or if it’s incorrect), you probably did not configure Logstash properly.

If you see proper GeoIP information in this view, you are ready to create your map visualization.

Create Tile Map Visualization

Note: If you haven’t used Kibana visualizations yet, check out the Kibana Dashboards and Visualizations Tutorial.

To map out the IP addresses in Kibana, let’s create a Tile Map visualization.

Click Visualize in the main menu.

Under Create a new visualization, select Tile map.

Under Select a search source, you may select either option. If you have a saved search that will find the log messages that you want to map, feel free to select that search. We will proceed as if you clicked From a new search.

When prompted to Select an index pattern, choose filebeat-* from the dropdown. This will take you to a page with a blank map:

Kibana default tile map building interface

In the search bar, enter type: "nginx-access" or another search term that will match logs that contain geoip information. Make sure the time period (upper right corner of the page) is long enough to match some log entries. If you see No results found instead of the map, you need to adjust your search terms or time period.

Once you have some results, click Geo Coordinates underneath the buckets header in the left-hand column. The green “play” button will become active. Click it, and your geolocations will be plotted on the map:

Kibana tile map with multiple points
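Under the hood, this bucket runs an Elasticsearch geohash_grid aggregation against the geoip.location field. A roughly equivalent raw query (the aggregation name and precision here are arbitrary) would be:

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "map_points": {
      "geohash_grid": { "field": "geoip.location", "precision": 3 }
    }
  }
}'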

When you are satisfied with your visualization, be sure to save it using the Save Visualization button (floppy disk icon) next to the search bar.

Conclusion

Now that you have your GeoIP information mapped out in Kibana, you should be set. By itself, this gives you a rough idea of the geographical location of your users. It can be even more useful if you add the visualization to a dashboard and correlate it with your other logs.

Good luck!



Tutorial Series: Centralized Logging with ELK Stack (Elasticsearch, Logstash, and Kibana) On Ubuntu 14.04

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

This series will teach you how to install Logstash and Kibana on Ubuntu, then how to add more filters to structure your log data, and finally how to use Kibana.


Comments

Things to note, if you’re using ELK 5.0 and encounter this error:

[ERROR][logstash.filters.geoip ] The GeoLite2 MMDB database provided is invalid or corrupted. {:exception=>com.maxmind.db.InvalidDatabaseException: Could not find a MaxMind DB metadata marker in this file (GeoLiteCity.dat). Is this a valid MaxMind DB file?

It’s because you downloaded the legacy database. Download the GeoLite2 City database instead: curl -O http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
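The download is gzipped, so it also needs to be extracted, and the geoip filter pointed at the resulting file via its database option; roughly like this (the /etc/logstash path is just an assumption):

gunzip GeoLite2-City.mmdb.gz
sudo mv GeoLite2-City.mmdb /etc/logstash/

# then point the filter at it (path is an assumption):
geoip {
  source => "clientip"
  database => "/etc/logstash/GeoLite2-City.mmdb"
}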

Followed every part of the tutorial, and I’m getting this error:

No Compatible Fields: The “[filebeat-]YYYY.MM.DD” index pattern does not contain any of the following field types: geo_point

I have searched Google and am unable to find the root cause of this error. Can you help?

Any pointers on how to do this for fluentd (using logstash format) instead of logstash?

Hi,

Thanks for the nice tutorial.

Just to mention, you may need to change the mapping for the geoip.location field and set it to ‘geo_point’, or else you won’t be able to get the field ‘geoip.location’ displayed under Geo Co-ordinates > Field when you try to create a map.

I was using an index called ‘apache’, so initially the field was set to double:

$ curl http://localhost:9200/apache*/_mapping/apache-access/field/geoip.location?pretty
{
  "apache" : {
    "mappings" : {
      "apache-access" : {
        "geoip.location" : {
          "full_name" : "geoip.location",
          "mapping":{"location":{"type":"double"}}
        }
      }
    }
  }
}

I had to change it to ‘geo_point’ to get the field on the map:

$curl http://localhost:9200/apache*/_mapping/apache-access/field/geoip.location?pretty
{
  "apache" : {
    "mappings" : {
      "apache-access" : {
        "geoip.location" : {
          "full_name" : "geoip.location",
          "mapping":{"location":{"type":"geo_point"}}
        }
      }
    }
  }
}

To change the mappings:

  1. copy the default lib/logstash/outputs/elasticsearch/elasticsearch-template.json to something like lib/logstash/outputs/elasticsearch/elasticsearch-apache-template.json
  2. Edit the elasticsearch-apache-template.json and set the template name: “template” : “apache”,
  3. You will need to delete your existing index (THIS WILL LOSE DATA): curl -XDELETE http://localhost:9200/apache*
  4. Edit your logstash.conf file to include the template in the output
output {
  elasticsearch {
    host => "localhost"
    cluster => "es_24"
    index => "apache"
    template => "path_to_elasticsearch-apache-template.json"
    template_name => "apache"
  }
  stdout { codec => rubydebug }
}
  5. Restart ES and LS and it should work.

This is 2017. The ELK stack is known as the Elastic Stack now.

Other than GeoIP, we can also use the IP2Location filter in Logstash.

https://www.ip2location.com/tutorials/how-to-use-ip2location-filter-plugin-with-elastic-stack

Great article. Any examples of doing this same thing, but with IIS web logs instead of Nginx?

Is there a reason to use this config?

geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float"]
    }

I believe that the config below can do the same; the “location” field contains longitude and latitude and can be used in a Kibana tile map.

geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      fields => ["country_name", "real_region_name", "city_name", "location"]   # this is to output only certain fields
    }

Conflict on geoip.location and the following error:

No Compatible Fields: The "filebeat-*" index pattern does not contain any of the following field types: geo_point

Strange how I have 20 tabs open here to try and figure this one out. It looks like a common problem with no clear solution for the ELK stack beginner. I came here because I wanted solutions, not more problems…

I followed the tutorial and was getting this error:

No Compatible Fields: The "filebeat-*" index pattern does not contain any of the following field types: geo_point

This is how I solved it.

My server is CentOS 7 with these versions of the relevant packages: elasticsearch-2.3.5-1.noarch, filebeat-1.2.3-1.x86_64, kibana-4.5.4-1.x86_64, logstash-2.3.4-1.noarch.

I use filebeat as the collector on all my nodes and my index pattern is ‘filebeat-*’

Contents of: 30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
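Note that manage_template => false means Logstash will not install any index template itself, so the geo_point mapping has to come from a manually loaded template, as in the curl -XPUT step near the top of this tutorial. You can check the mapping a new index actually received with something like:

curl -XGET 'http://localhost:9200/filebeat-*/_mapping/field/geoip.location?pretty'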

My Apache filter file is like the nginx one in the tutorial above.

I continued attempting to resolve the error by reading the instructions here: https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html which pointed me to apply the changes here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-percolate.html#geo-percolate and then back to the first page, where I applied the first option. (Note: I’m not at all certain that the steps at these two links are required. I implemented them while trying to resolve the issue and did not try the solution without them. I have a suspicion that they are not necessary.)

But I still got the same error.

Eventually, I saw ckyconsultinguk’s comment above from April 22, 2015 and adapted his solution a bit.

Based on his suggestions I edited the file elasticsearch-template.json (mine was located here: ‘/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch’) and changed the value of “template” from “logstash-*” to “filebeat-*”:

2c2
<   "template" : "logstash-*",
---
>   "template" : "filebeat-*",

(This saved me the step of having to add a template entry in the output file, and since I do not use the ‘logstash-*’ index, I do not need the original template.)

I then stopped Logstash, deleted my old index with DELETE /filebeat-* (THIS WILL DELETE ALL OF YOUR ‘filebeat-*’ DATA), restarted ES, started Logstash, and went to sleep happy. :-)
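In curl form (equally destructive), that deletion step looks like:

curl -XDELETE 'http://localhost:9200/filebeat-*'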

I also found a similar implementation of this idea here: https://michael.lustfield.net/misc/geo-point-with-elasticsearch-2x

What a neat tutorial… hats off to the author and team… keep up the good work!! SF from SW England! P.S. Is there any place I can see an example of an application log which uses log4j? By the way, that’s what I’m going to do next…
