ELK: Elasticsearch, Logstash, Kibana

Satyam Pushkar
4 min read · Dec 22, 2021

In this article I will explain the basics of Elasticsearch, Logstash & Kibana by setting them up on Docker Desktop and showcasing different use cases. You can find the sample code used in this article here.

ELK, popularly known as the Elastic Stack or ELK stack, is a very popular toolset among developers, with many use cases. In this article I focus on the basics of ELK and a few simple use cases that can help build an initial understanding.

ELK stack consists of:

  • Elasticsearch: is the distributed search engine at the heart of the Elastic Stack. It is a feature-rich and complex system based on Apache Lucene, open source and built with Java. Elasticsearch is also categorized as a NoSQL database and has a strong focus on search capabilities and features.
  • Logstash: is in charge of collecting, processing, and then dispatching data to a defined destination for storage (stashing). It is driven by configuration and has 3 main plugin types: input (aggregates data/events from various sources), filter (enriches, manipulates, and processes data/events), and output (pushes data to different locations/services). A minimal pipeline sketch follows this list.
  • Kibana: is a browser-based user interface that can be used to search, analyze, and visualize the data stored in Elasticsearch indices. It is renowned for its rich graphical and visualization capabilities, which allow users to explore large volumes of data. One thing to note is that it works only with Elasticsearch; it cannot be used in conjunction with other databases.
  • Beats: are the single-purpose data shippers of the ELK stack. They collect data/events from end machines/systems and send them to Elasticsearch, either directly or via Logstash.
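To make the three Logstash plugin stages concrete, here is a minimal, hypothetical pipeline configuration. The stdin/stdout plugins and the mutate filter are standard Logstash plugins, but this file is purely illustrative and not taken from the sample repo:

```
# minimal-pipeline.conf: a hypothetical example, not from the sample repo
input {
  stdin { }                       # read events from the console
}

filter {
  mutate {
    add_field => { "source" => "demo" }   # enrich each event with a static field
  }
}

output {
  stdout { codec => rubydebug }   # print the processed event for inspection
}
```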
[Diagram of the Elastic Stack, from https://www.elastic.co/]

You can see the above diagram to understand how ELK can be used. Beats and Logstash are the means to ingest data into Elasticsearch. Elasticsearch is the heart of the stack: it stores and indexes data, which can then be visualized with the help of Kibana. You can also use Elasticsearch as a NoSQL database that receives and keeps data from your whole system, providing very fast search capability across the entire system.

ELK in action

I have created a sample application to showcase how to start working with Elasticsearch. You can find the code here. The folder structure looks something like below:

As shown in the above diagram, there are 2 main folders: Infrastructure and Simulator. The Infrastructure folder consists of four different folders, each showcasing a different use case. Each folder has one docker-compose.yml to run the respective instances on Docker Desktop, one PowerShell script to start those instances, and one to stop them.
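The start/stop scripts are essentially thin wrappers around Docker Compose. A minimal sketch of what they might contain (the repo's actual scripts may differ):

```powershell
# start.ps1: a minimal sketch, assuming it sits next to a docker-compose.yml
docker-compose up -d      # create and start the services in the background

# stop.ps1: the matching teardown
docker-compose down       # stop and remove the containers
```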

  • EK [consists of Elasticsearch & Kibana]

This is a very basic sample use case that spins up Elasticsearch and Kibana.
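A minimal sketch of what such a compose file might look like follows; the image tags, ports, and environment values are my assumptions rather than the repo's exact contents:

```yaml
# docker-compose.yml: a hedged sketch of an Elasticsearch + Kibana setup
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.1
    environment:
      - discovery.type=single-node   # single-node mode for local development
    ports:
      - "9200:9200"                  # Elasticsearch REST API

  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.1
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"                  # Kibana UI, as used later in the article
    depends_on:
      - elasticsearch
```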

  • ELK [consists of Elasticsearch, Logstash & Kibana]

Here you can see three instances running, one each for Elasticsearch, Logstash, and Kibana.

Logstash is configured to accept HTTP input on port 3131 and push it to Elasticsearch, where it is stored in the ‘http-input-example’ index. To simulate pushing HTTP input to port 3131, I have created a sample ‘http.simulator’. The generated data can be viewed in Kibana (localhost:5601).
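Based on that description, the pipeline config would look roughly like the sketch below. The Elasticsearch host assumes the compose service name from the earlier sketch; the repo's actual file may differ:

```
# http-pipeline.conf: a sketch of the described setup
input {
  http {
    port => 3131                    # accept HTTP POSTs on port 3131
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "http-input-example"   # index named in the article
  }
}
```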

  • ELK-Ki [consists of Elasticsearch, Logstash & Kibana (with Kafka as input/producer)]

Here you can see five instances running: three for the ELK stack and two for Kafka (Kafka & ZooKeeper).

Here Logstash is configured to accept input from Kafka and push it to the ‘kafka-input-example’ index on Elasticsearch. To simulate pushing streams to Kafka, I have created a sample ‘kafka.producer.simulator’. The generated data can be viewed on the Kibana dashboard (localhost:5601).
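A rough sketch of such a pipeline follows; the topic name and broker address are my assumptions, not values from the repo:

```
# kafka-pipeline.conf: a sketch; topic name and broker address are assumptions
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # assumed Kafka service name from the compose file
    topics => ["logstash-topic"]        # hypothetical topic name
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "kafka-input-example"      # index named in the article
  }
}
```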

  • ELK-Ko [consists of Elasticsearch, Logstash & Kibana (with Kafka as output/consumer)]

This is similar to the example explained above. The difference is that this setup simulates pushing data to HTTP port 3131, which in turn is pushed to Elasticsearch, and a second Logstash instance is configured to read data from Elasticsearch and push it to Kafka. A Kafka consumer simulator is also included, which you can find at ‘Simulator\kafka.consumer.simulator’. You can check out both Logstash configurations in the below image.
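The second of those pipelines (Elasticsearch in, Kafka out) might look roughly like this sketch; the query, topic name, and broker address are assumptions, not taken from the repo:

```
# es-to-kafka-pipeline.conf: a sketch of the second pipeline
input {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "http-input-example"       # read back what the first pipeline stored
    query => '{ "query": { "match_all": {} } }'
  }
}

output {
  kafka {
    bootstrap_servers => "kafka:9092"   # assumed Kafka service address
    topic_id => "output-topic"          # hypothetical topic read by the consumer simulator
  }
}
```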

To learn more about the ELK stack, please refer to the official Elastic website.

I hope this article has helped you gain some insight into how to start working with the ELK stack.


Satyam Pushkar

Software Engineer | Backend Specialist | System Architect | Cloud Native Apps | DOTNET, PYTHON | https://www.linkedin.com/in/satyampushkar/