Dockerize Elasticsearch & Kibana with X-Pack Security

Kattula Uma srinivas
4 min read · May 13, 2020

Docker is a set of PaaS products that use OS-level virtualization to deliver software in packages called containers. It lets development and operations teams package, ship, and run distributed applications anywhere.

In this post, I would like to walk the reader through a use case where Elasticsearch and Kibana run as Dockerized containers defined in a single compose file, with X-Pack security enabled.

Elasticsearch is a full-text, document-based search engine built on Apache Lucene. It serializes and indexes JSON documents using an inverted index data structure, which supports very fast full-text searches.

[Figure: Inverted indexing]
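To make the idea concrete, here is a rough sketch (not actual Elasticsearch output; the documents and terms are made up) of how two tiny documents map to an inverted index after lowercasing and tokenization:

doc1: "Elasticsearch is fast"
doc2: "Kibana visualizes Elasticsearch"

Term            Documents
elasticsearch   doc1, doc2
fast            doc1
is              doc1
kibana          doc2
visualizes      doc2

A query for "elasticsearch" then resolves to doc1 and doc2 with a single lookup, instead of scanning every document.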

Kibana is the default visualization tool for Elasticsearch. It is a web interface that lets you monitor, manipulate, and visualize your Elastic Stack data.

X-Pack is an Elastic Stack extension that comes with a bundle of features such as security, monitoring, and machine learning. X-Pack features come with a 30-day trial; at the end of the trial period, you can purchase a subscription to keep using the full functionality of X-Pack. The security feature is free starting from versions 6.8.0 and 7.1.0.
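Once the stack described in the steps below is running, you can check which license is active with the license API; assuming the elastic password set later in this post, the request would look like:

curl -u elastic:somethingsecret http://localhost:9200/_license

The "type" field of the response reports basic, trial, or a paid tier.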

Prerequisites

  1. Docker
  2. Docker Compose

Steps to follow:

1. The following snippet is for a single-node Elasticsearch cluster; it can be extended to a multi-node cluster by adding a few more container services with suitable configuration. Apart from that, CORS and cluster settings can be included directly in the environment section of the service (see the sketch after the snippet below) or by mounting elasticsearch.yml from the host machine.

A sample docker-compose.yml file for Elasticsearch container is as follows.

version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    hostname: "elasticsearch"
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
    ports:
      - "9200:9200"

2. Elasticsearch considers the available disk space on a node before deciding whether to allocate new shards to that node or to actively relocate shards away from it. We can add the parameters below to the environment section to configure the disk allocation decider.

cluster.routing.allocation.disk.threshold_enabled

The default is true. We can set it to false to disable the disk allocation decider.

cluster.routing.allocation.disk.watermark.low: 65%

Elasticsearch will not allocate shards to nodes that have more than 65% disk usage. We can also set a byte value such as 500mb, in which case allocation stops once less than that amount of space is available. This setting has no effect on the primary shards of newly created indices, but it will prevent their replicas from being allocated.

cluster.routing.allocation.disk.watermark.high: 70%

Elasticsearch will attempt to relocate shards away from nodes whose disk usage is above 70%. We can also set a byte value, similar to the low watermark. This setting affects the allocation of all shards, whether previously allocated or not.

- "cluster.routing.allocation.disk.threshold_enabled= true"
- "cluster.routing.allocation.disk.watermark.low= 65%"
- "cluster.routing.allocation.disk.watermark.high= 70%"

3. A Docker volume has to be used to persist the data on the host machine. A folder named ‘elasticData’ is created on the local machine, and the container path ‘/usr/share/elasticsearch/data’ is bind-mounted to it. Even when the container gets deleted, we retain the data of the removed container.

volumes:
  - ./elasticData:/usr/share/elasticsearch/data
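One caveat worth hedging: on Linux hosts, the official image runs Elasticsearch as uid 1000, so the bind-mounted folder may need its ownership adjusted before the first start, for example:

mkdir -p elasticData
sudo chown -R 1000:1000 elasticData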

4. Add the following to the environment section to enable the X-Pack security feature for basic authentication. The default superuser is ‘elastic’, and ELASTIC_PASSWORD sets its password.

- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- ELASTIC_PASSWORD=password
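With security enabled, anonymous requests are rejected while authenticated ones succeed, which makes for a quick sanity check once the container is up:

curl http://localhost:9200                        # 401 security_exception
curl -u elastic:password http://localhost:9200    # cluster name and version banner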

5. Now, the docker-compose file for dockerizing Elasticsearch and Kibana with the X-Pack security feature enabled would be as follows.

version: "2"
services:
elasticsearch:
image: "docker.elastic.co/elasticsearch/elasticsearch:7.5.0"
container_name: elasticsearch
environment:
- discovery.type=single-node
- cluster.routing.allocation.disk.threshold_enabled=true
- cluster.routing.allocation.disk.watermark.low=65%
- cluster.routing.allocation.disk.watermark.high=70%
- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- ELASTIC_PASSWORD=somethingsecret
ports:
- "9200:9200"
networks:
- eknetwork
kibana:
depends_on:
- elasticsearch
image: "docker.elastic.co/kibana/kibana:7.5.0"
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_URL=http://localhost:9200
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD=somethingsecret
networks:
- eknetwork
networks:
eknetwork:
driver: bridge

A bridge network eknetwork is created, which allows containers connected to the same bridge network to communicate; this is also why Kibana addresses Elasticsearch as http://elasticsearch:9200 (the service name), since localhost inside the Kibana container would refer to Kibana itself. In the Kibana environment section, we add the Elasticsearch credentials to provide data access over HTTP. Because of this connection dependency of Kibana on Elasticsearch, the depends_on attribute is used to maintain the order in which the services start.
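Keep in mind that depends_on only orders container startup; it does not wait until Elasticsearch is actually ready to serve requests, so Kibana may log connection errors for a short while. If that matters, one option (a sketch assuming compose file format 2.1, which supports health-based conditions) is:

version: "2.1"
services:
  elasticsearch:
    # ... same service definition as above ...
    healthcheck:
      test: ["CMD-SHELL", "curl -fs -u elastic:somethingsecret http://localhost:9200 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
  kibana:
    depends_on:
      elasticsearch:
        condition: service_healthy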

6. Finally, to run the Docker instances defined in the compose file, use the command docker-compose up from the path where the compose file is saved. This command serves Elasticsearch on port 9200 and Kibana on port 5601. To access them, use the links below.

Elasticsearch URL: http://localhost:9200

Kibana: http://localhost:5601
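Since security is now enabled, a quick smoke test with the credentials from the compose file could look like this (the browser will ask for the same login on the Kibana URL):

curl -u elastic:somethingsecret http://localhost:9200
curl -u elastic:somethingsecret http://localhost:5601/api/status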

Give a cheer if you like this post

Please comment or share your feedback

Happy coding…

In the next post, we will take a sample Elasticsearch index and look at how it can be visualized in Kibana.
