A previous version of this article was written by Toli.
Elasticsearch is a platform for distributed search and analysis of data in real time. It is a popular choice due to its usability, powerful features, and scalability.
This article will guide you through installing Elasticsearch, configuring it for your use case, securing your installation, and beginning to work with your Elasticsearch server.
Before following this tutorial, you will need:
An Ubuntu 20.04 server with 4GB RAM and 2 CPUs, set up with a non-root sudo user. You can achieve this by following the Initial Server Setup with Ubuntu 20.04 tutorial.
OpenJDK 11 installed
For this tutorial, we will work with the minimum amount of CPU and RAM required to run Elasticsearch. Note that the amount of CPU, RAM, and storage that your Elasticsearch server will require depends on the volume of logs that you expect.
The Elasticsearch components are not available in Ubuntu’s default package repositories. They can, however, be installed with APT after adding Elastic’s package source list.
All of the packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Packages which have been authenticated using the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic package source list in order to install Elasticsearch.
To begin, use cURL, the command line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we are using the arguments -fsSL to silence all progress and possible errors (except for a server failure) and to allow cURL to make a request on a new location if redirected. Pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT.
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, add the Elastic source list to the sources.list.d directory, where APT will search for new sources:
- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Next, update your package lists so APT will read the new Elastic source:
- sudo apt update
Then install Elasticsearch with this command:
- sudo apt install elasticsearch
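If you would like to confirm which version of the package was installed (the exact version will depend on when you run the install, so treat any specific number as illustrative), you can ask dpkg:
- dpkg -s elasticsearch | grep -i version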
Elasticsearch is now installed and ready to be configured.
To configure Elasticsearch, we will edit its main configuration file, elasticsearch.yml, where most of its configuration options are stored. This file is located in the /etc/elasticsearch directory.
Use your preferred text editor to edit Elasticsearch’s configuration file. Here, we’ll use nano:
- sudo nano /etc/elasticsearch/elasticsearch.yml
Note: Elasticsearch’s configuration file is in YAML format, which means that we need to maintain the indentation format. Be sure that you do not add any extra spaces as you edit this file.
The elasticsearch.yml file provides configuration options for your cluster, node, paths, memory, network, discovery, and gateway. Most of these options are preconfigured in the file, but you can change them according to your needs. For the purposes of our demonstration of a single-server configuration, we will only adjust the settings for the network host.
Elasticsearch listens for traffic on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its [REST API](https://en.wikipedia.org/wiki/Representational_state_transfer). To restrict access and therefore increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost so it reads like this:
. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .
We have specified localhost so that Elasticsearch listens only on the loopback interface. If you want it to listen on a specific interface instead, you can specify its IP address in place of localhost. Save and close elasticsearch.yml. If you’re using nano, you can do so by pressing CTRL+X, followed by Y and then ENTER.
These are the minimum settings you can start with in order to use Elasticsearch. Now you can start Elasticsearch for the first time.
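Before you do, one note on memory: the JVM heap size is configured in /etc/elasticsearch/jvm.options rather than in elasticsearch.yml. The defaults are fine for getting started, but on a small server you may eventually want to cap the heap at roughly half the available RAM. A minimal sketch for the 4GB server used in this tutorial (the 2g value is an illustrative choice, not something this tutorial requires):
. . .
# In /etc/elasticsearch/jvm.options: set the minimum and maximum heap
# to the same size, about half of the server's 4GB of RAM.
-Xms2g
-Xmx2g
. . .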
Start the Elasticsearch service with systemctl. Give Elasticsearch a few moments to start up; otherwise, you may get errors about not being able to connect.
- sudo systemctl start elasticsearch
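Give it a moment, then confirm that the service came up cleanly; the status check and the last few log lines are the quickest way to spot startup problems (the exact output will vary):
- sudo systemctl status elasticsearch
- sudo journalctl -u elasticsearch --no-pager | tail -n 20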
Next, run the following command to enable Elasticsearch to start up every time your server boots:
- sudo systemctl enable elasticsearch
With Elasticsearch enabled upon startup, let’s move on to the next step to discuss security.
By default, Elasticsearch can be controlled by anyone who can access the HTTP API. This is not always a security risk because Elasticsearch listens only on the loopback interface (that is, 127.0.0.1), which can only be accessed locally. Thus, no public access is possible and, as long as all server users are trusted, security may not be a major concern.
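If you want to double-check which address Elasticsearch is actually bound to, you can inspect the listening sockets; with the localhost setting above, port 9200 should only show up against 127.0.0.1 or ::1:
- sudo ss -tlnp | grep 9200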
If you need to allow remote access to the HTTP API, you can limit the network exposure with Ubuntu’s default firewall, UFW. This firewall should already be enabled if you followed the steps in the prerequisite Initial Server Setup with Ubuntu 20.04 tutorial.
We will now configure the firewall to allow access to the default Elasticsearch HTTP API port (TCP 9200) for the trusted remote host, generally the server you are using in a single-server setup, such as 198.51.100.0. To allow access, type the following command:
- sudo ufw allow from 198.51.100.0 to any port 9200
Once that is complete, you can enable UFW with the command:
- sudo ufw enable
Finally, check the status of UFW with the following command:
- sudo ufw status
If you have specified the rules correctly, you should receive output like this:
Output
Status: active

To                         Action      From
--                         ------      ----
9200                       ALLOW       198.51.100.0
22                         ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)
The UFW should now be enabled and set up to protect Elasticsearch port 9200.
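If your trusted clients come from a range of addresses rather than one fixed host, you can allow a whole subnet instead of a single IP; the 198.51.100.0/24 block below is just a placeholder for your own network:
- sudo ufw allow from 198.51.100.0/24 to any port 9200 proto tcp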
If you want to invest in additional protection, note that the commercial Shield plugin mentioned in older guides has since been folded into Elastic’s X-Pack security features, and basic security (TLS and authentication) is included free of charge in current 7.x releases.
By now, Elasticsearch should be running on port 9200. You can test it with cURL and a GET request.
- curl -X GET 'http://localhost:9200'
You should receive the following response:
Output
{
  "name" : "elasticsearch-ubuntu20-04",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
If you receive a response similar to the one above, Elasticsearch is working properly. If not, make sure that you have followed the installation instructions correctly and you have allowed some time for Elasticsearch to fully start.
To perform a more thorough check of Elasticsearch, execute the following command:
- curl -XGET 'http://localhost:9200/_nodes?pretty'
In the output from the above command you can verify all the current settings for the node, cluster, application paths, modules, and more.
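Another quick check worth knowing is the cluster health endpoint, which summarizes the node, index, and shard status; on a single-node setup the reported status is commonly yellow, because replica shards have nowhere to be allocated:
- curl -X GET 'http://localhost:9200/_cluster/health?pretty'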
To start using Elasticsearch, let’s first add some data. Elasticsearch uses a RESTful API, which responds to the usual CRUD commands: create, read, update, and delete. To work with it, we’ll use the cURL command again.
You can add your first entry like so:
- curl -XPOST -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1' -d '{ "message": "Hello, World!" }'
You should receive the following response:
Output{"_index":"tutorial","_type":"helloworld","_id":"1","_version":2,"result":"updated","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":1,"_primary_term":1}
With cURL, we have sent an HTTP POST request to the Elasticsearch server. The URI of the request was /tutorial/helloworld/1 with several parameters:
- tutorial is the index of the data in Elasticsearch.
- helloworld is the type.
- 1 is the ID of our entry under the above index and type.
You can retrieve this first entry with an HTTP GET request.
- curl -X GET 'http://localhost:9200/tutorial/helloworld/1'
This should be the resulting output:
Output{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"found":true,"_source":{ "message": "Hello, World!" }}
To modify an existing entry, you can use an HTTP PUT request.
- curl -X PUT -H "Content-Type: application/json" 'localhost:9200/tutorial/helloworld/1?pretty' -d '
- {
- "message": "Hello, People!"
- }'
Elasticsearch should acknowledge successful modification like this:
Output
{
  "_index" : "tutorial",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 2,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 1
}
In the above example, we have modified the message of the first entry to “Hello, People!”. With that, the version number has automatically increased to 2.
You may have noticed the extra argument pretty in the above request. It tells Elasticsearch to return its response in a human-readable format, with each field on its own line. You can also “prettify” your results when retrieving data to get more readable output by entering the following command:
- curl -X GET -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1?pretty'
Now the response will be formatted for a human to parse:
Output
{
  "_index" : "tutorial",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 2,
  "_seq_no" : 1,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "message" : "Hello, People!"
  }
}
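The delete part of CRUD follows the same pattern: an HTTP DELETE request removes a single document, and deleting the index removes everything under it. For example, if you want to clean up the tutorial data created above:
- curl -X DELETE 'http://localhost:9200/tutorial/helloworld/1?pretty'
- curl -X DELETE 'http://localhost:9200/tutorial?pretty'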
We have now added and queried data in Elasticsearch. To learn about the other operations, please check the API documentation.
You have now installed, configured, and begun to use Elasticsearch. To further explore Elasticsearch’s functionality, please refer to the official Elasticsearch documentation.
Nice guide, but external connections don’t work out of the box. First add your IP address to the firewall, then edit the elasticsearch.yml configuration file and change
network.host: localhost
to
network.host: 0.0.0.0
If you restart the service with systemctl restart elasticsearch at this point, you’ll get an error. You need to add an extra setting at the bottom of the .yml file, then restart with systemctl restart elasticsearch again. SSH to the remote server and test with
curl -X GET 'http://xxx.yyy.aaa.bbb:9200'
where xxx.yyy.aaa.bbb is the IP address of the Elasticsearch server.
sudo systemctl enable elasticsearch
Synchronizing state of elasticsearch.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable elasticsearch
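In Elasticsearch 7.x, the error mentioned above typically comes from the production bootstrap checks, which are enforced as soon as network.host points at a non-loopback address; on a one-node cluster they usually fail on the discovery configuration. A minimal sketch of the relevant elasticsearch.yml lines for that case (assuming you genuinely want the API reachable from other hosts and have firewalled it accordingly) is:
# Bind to all interfaces (only do this behind a firewall) and declare a
# deliberately single-node cluster, which satisfies the discovery bootstrap check.
network.host: 0.0.0.0
discovery.type: single-node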
What if the remote IP is dynamic? Is there a way to avoid having to update the UFW rule every time the IP changes? Doing it by hand does not seem very clever.
I tried to open port 9200 to a remote IP via UFW, but when I try to connect from that IP with curl I get a “connection refused” error. I am not sure, but if the elasticsearch.yml config file contains localhost, it should not work, right? So I added the IP to the config file, but then systemctl restart elasticsearch fails. As soon as I revert it back to localhost it works. How do I allow a secure remote connection?
Thanks for a great tutorial. Digital Ocean is always my trusted source for such tutorials.
FYI: this curl command throws an IllegalArgumentException because it’s passing a body for a GET request.
Thank you, Erin Glass. Your content helped me set up Elasticsearch on my system. I have to enter my own server address in place of the 198.51.100.0 IP address, right?