ELK

Overview

The ElasticSearch, Logstash, Kibana (ELK) stack is deployed as a single container using the sebp/elk image.

Kibana data view

By default there are no data views configured in Kibana, which means the logs will not be visible. Refer to Creating Kibana Data View for the steps to create one.

Note: a data view can be created only if an ElasticSearch index exists, which means some logs must be ingested before a data view can be created.

Container

If you want to use a different version of the container, you can modify the following variable:

elk_docker_image: sebp/elk:8.6.2
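For example, pinning a different image tag could look like this in a group_vars file (the file path and tag shown are illustrative; any variable location supported by Ansible will work):

```yaml
# group_vars/observability.yml (hypothetical location)
# Pin the ELK container to a specific image tag
elk_docker_image: sebp/elk:8.6.2
```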

Memory limit

Due to how ElasticSearch manages memory, the container will consume all available RAM if left unconstrained. By default the role limits the container's memory to 6GB. The required minimum is 4GB.

The value can be customized using the following variable:

# Max memory to allocate to ELK container
# ELK container will consume all available memory if not set
elk_container_memory_limit: 6GB
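As a sketch, the limit could be overridden at the playbook level. The role name below is an assumption based on the repository name, not taken from this document:

```yaml
- hosts: observability
  roles:
    - role: slingnode.ethereum_observability   # assumed role name
      vars:
        # Raise the limit on hosts with more RAM available
        elk_container_memory_limit: 8GB
```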

Data persistence

The ELK container uses a named volume to persist its data. The container can be safely deleted and recreated without losing the data stored in ElasticSearch.

    volumes:
       - elk-data:/var/lib/elasticsearch

Deleting data

To delete the data and start from scratch, execute the following commands on the server:

docker kill elk 
docker container rm elk 
docker volume rm observability_dc_elk-data

Note: this will delete all existing data and configuration, including all indexes, Kibana data views and dashboards. You may want to export your dashboards beforehand.

Configuration

Logstash

Logstash configuration directory is mapped as a volume to the host in the Docker Compose template as shown below:

    volumes:
      - {{ observability_root_path }}/elk/:/etc/logstash/conf.d/

The path defaults to the following location:

/opt/observability/elk/

The role comes with an opinionated Logstash configuration that seamlessly integrates with Ethereum clients deployed using the slingnode.ethereum role. Refer to the logging documentation for details.

The Logstash pipeline is defined in a single .conf file. The configuration has been designed to properly parse and normalize logs generated by clients supported by the slingnode.ethereum role. You can review the configuration here: https://github.com/SlingNode/slingnode-ansible-ethereum-observability/blob/master/files/01-logstash-pipeline.conf
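For orientation, a Logstash pipeline file follows a three-part input/filter/output shape. The sketch below is illustrative only and is not the role's actual parser configuration; the port, filter and hosts values are assumptions:

```
input {
  beats {
    port => 5044
  }
}

filter {
  # Hypothetical example: parse JSON-formatted client log lines
  json {
    source => "message"
    skip_on_invalid_json => true
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```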

To customize the parser or provide your own, modify the following variable, editing "src" to point to the desired file:

#  Use the below variable to provide your own Logstash parser config
logstash_parsers_config:
  - src: files/01-logstash-pipeline.conf
    dest: "{{ observability_root_path }}/elk/01-logstash-pipeline.conf"
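For example, a custom parser stored alongside your playbook could be wired in like this (the source file name is hypothetical):

```yaml
# Point the role at your own Logstash parser
logstash_parsers_config:
  - src: files/my-custom-pipeline.conf   # hypothetical custom parser
    dest: "{{ observability_root_path }}/elk/01-logstash-pipeline.conf"
```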

ElasticSearch and Kibana

ElasticSearch and Kibana use their default configuration. Currently the role does not expose an option to customize it. Please log a GitHub issue if that's something you'd like.

Log forwarding

The role uses Filebeat as the log forwarder. Filebeat is configured to autodiscover containers based on container labels and to selectively forward their logs. Refer to the Filebeat section for details.
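As an illustration of the mechanism (not the role's exact configuration), a Docker autodiscover setup that forwards logs only from labelled containers might look like the sketch below; the label name is an assumption:

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # Only containers carrying this (hypothetical) label are collected
        - condition:
            contains:
              docker.container.labels.collect_logs: "true"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
```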
