Securing your ELK stack - setting up ElasticSearch, Kibana and Logstash security

In my previous post, I’ve shown a way to expose a Kibana (ELK) instance to the internet using Nginx. This helps us hide our internal infrastructure behind a secure reverse proxy gateway. This, however, doesn’t mean that the actual Elastic Stack is secure. To achieve that, we need to configure the security settings for the cluster and the related supporting applications. In this article, I will show how to do exactly that.

Naturally, these settings are added to your configuration files, like elasticsearch.yml, logstash.yml or kibana.yml.
Since I am using Docker, from now on I will be showing how to apply these settings using environment variables in Docker or Kubernetes containers.
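
As a quick illustration of the mapping (the values here are just examples): an ElasticSearch container accepts the dotted setting names from elasticsearch.yml as-is, while the Kibana image expects them uppercased with dots replaced by underscores:

xpack.security.enabled: true           ->  xpack.security.enabled=true            (ElasticSearch container)
elasticsearch.username: kibana_system  ->  ELASTICSEARCH_USERNAME=kibana_system   (Kibana container)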

Prerequisites

Before setting up the configuration, we need to complete some preparations.

  1. If you don’t have a certificate authority, you need to generate one using:

elasticsearch-certutil ca

You will be prompted for a file name and password. If you decide to use the default file name, the above command will create a file named elastic-stack-ca.p12. Be sure to save this file and remember the password, because you will need them for the further setup and for adding new nodes to the ES cluster.

  2. You need to create a certificate and private key for each node in your cluster. You can do this by running the following shell command:

elasticsearch-certutil cert --ca elastic-stack-ca.p12

At first, you will be prompted for the certificate authority password, after which you can once again choose a file name and password for the node certificate file.
The generated certificates do not include hostname information such as Subject Alternative Name (SAN) fields. For this reason, we will be disabling hostname verification later, during the ES configuration step. If you want to use hostname verification, generate the certificates using the following command instead:

elasticsearch-certutil cert --name <name> --dns <dns> --ip <ip> --ca <path/to/ca>
  3. Generate an HTTP certificate for use with Kibana with the following command:

elasticsearch-certutil http

When prompted with the question “Generate a CSR?”, select ‘N’, because we are creating this certificate just to satisfy the security requirements of X-Pack. If you want a trusted authority, such as an internal security team or a commercial certificate authority, to sign your certificate, choose ‘y’ instead.

When prompted for a certificate authority, select ‘y’ and enter the full path to the file we created in step 1.
You will then be prompted for the password to the CA file.
Next, you will be asked how long the certificate should be valid - I chose 5y (5 years), since I don’t want to have to deal with this again for a long time.
Finally, there will be a series of questions asking for information on how the cluster will be accessed, such as hostnames, IP addresses, and whether the certificates should be valid for one or multiple nodes.

At the very end, you will be asked for a password for the certificate and a name for the zip file that the tool will create, which contains the certificate files for ElasticSearch and Kibana. If you are running your stack in Docker like me, you can execute all of these commands inside the ES container, as sketched below.
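
A minimal sketch of running these tools inside a running ES container - the container name “es” is an assumption; adjust the names and paths to your setup:

docker exec -it es mkdir -p config/certs                        # make sure the target folder exists
docker exec -it es bin/elasticsearch-certutil ca --out config/certs/elastic-stack-ca.p12
docker exec -it es bin/elasticsearch-certutil cert --ca config/certs/elastic-stack-ca.p12 --out config/certs/node-cert.p12
docker exec -it es bin/elasticsearch-certutil http              # interactive, produces a zip file
docker cp es:/usr/share/elasticsearch/config/certs ./certs     # copy the results to the host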

With this, we have finished the preparation. Next, it’s time to start the configuration.

ElasticSearch

  1. Create a “certs” folder in your ElasticSearch configuration folder and copy all the files that we created into it.
    The location of the configuration folder inside a Docker container is /usr/share/elasticsearch/config

  2. Set the following environment variables in either your Dockerfile or Kubernetes deployment:
    (This is how I set them in docker-compose.yml)

environment:
- xpack.security.enabled=true
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certs/node-cert.p12
- xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certs/node-cert.p12

Note: if you’ve set passwords for the certificates, you have to add these passwords to the ES keystore like this:

elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
  3. Start or restart your ElasticSearch cluster.
  4. Change the default users’ passwords using elasticsearch-setup-passwords interactive
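
To put it all together, here is a hedged sketch of how the ES service could look in docker-compose.yml - the image tag, service name and host paths are assumptions for illustration. Note that instead of copying the certificate files into the image, I mount the “certs” folder as a volume (the keystore commands above should likewise be run inside the container so they end up in the mounted config):

services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # example tag, use your own version
    environment:
      - discovery.type=single-node                                # just for this single-node example
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certs/node-cert.p12
      - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certs/node-cert.p12
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs:ro          # the certs we generated earlier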

After all this, we have a secure ElasticSearch cluster, but we will be unable to connect from other tools, like Kibana or Logstash, so let’s see how to set those up.
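
Before moving on, here is a quick way to verify that security is active - host, port and credentials are placeholders for your own values. An unauthenticated request should now be rejected with an authentication error, while an authenticated one returns the cluster status:

curl http://localhost:9200/_cluster/health                                  # expect a 401 error
curl -u elastic:YourElasticPassword http://localhost:9200/_cluster/health   # expect the cluster status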

Kibana

Setting up Kibana is easy - it’s just a matter of adding some configuration to your kibana.yml, or in my case, adding it to the environment variables for use with Docker or Kubernetes. Here is my configuration:

environment:
- "SERVER_BASEPATH=/kibana"
- "SERVER_HOST=0.0.0.0"
- "ELASTICSEARCH_URL=http://es:9200"
- "ELASTICSEARCH_HOSTS=http://es:9200"
- "XPACK_SECURITY_ENABLED=true"
- "SERVER_REWRITEBASEPATH=true"
- "XPACK_SECURITY_ENABLED=true"
- "XPACK_SECURITY_ENCRYPTIONKEY=SomeRandomStringHere"
- "SERVER_REWRITEBASEPATH=true"
- "ELASTICSEARCH_SSL_VERIFICATIONMODE=none"
- "ELASTICSEARCH_USERNAME=kibana_system"
- "ELASTICSEARCH_PASSWORD=YourPassword"

Notes:

  • Even though we have enabled security on the ES side, we still haven’t set up an encrypted HTTP connection, so we need to use the HTTP protocol in our connection settings instead of HTTPS
  • I am using the default kibana_system user and the password that we set with elasticsearch-setup-passwords in step 4 of the ElasticSearch section, but you’re free to use any user and password that you like (it’s easy to create one from Kibana itself, or via the security API as sketched below)
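
A hedged sketch of creating such a user through the ElasticSearch security API - the user name and passwords here are placeholders:

# creates a user with the built-in kibana_system role; adjust names to your setup
curl -u elastic:YourElasticPassword -X POST "http://es:9200/_security/user/my_kibana_user" \
  -H 'Content-Type: application/json' \
  -d '{ "password": "YourPassword", "roles": ["kibana_system"] }'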

Logstash

Setting up Logstash to work with a secure ES cluster is just a matter of setting the username and password in the ElasticSearch output of the pipeline that uses it. Here is an example configuration:

output {
  elasticsearch {
    hosts    => ["${ELASTIC_LOGS_SERVER}"]
    index    => "logstash-%{+YYYY.MM.dd}"
    user     => "${LOGSTASH_USERNAME}"
    password => "${LOGSTASH_PASSWORD}"
  }
}

You can replace the values directly in the configuration, or, if you have multiple outputs to the same ES cluster, you can set them up as environment variables like this:

environment:
- ELASTIC_LOGS_SERVER=http://es:9200
- LOGSTASH_USERNAME=YourUsername
- LOGSTASH_PASSWORD=YourPassword
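
If you don’t want Logstash writing with a superuser account, you can create a dedicated user with minimal index privileges through the security API. A hedged sketch - the role name, user name and passwords are assumptions:

# a role limited to writing into the logstash-* indices
curl -u elastic:YourElasticPassword -X POST "http://es:9200/_security/role/logstash_writer" \
  -H 'Content-Type: application/json' \
  -d '{ "cluster": ["manage_index_templates", "monitor"], "indices": [{ "names": ["logstash-*"], "privileges": ["write", "create", "create_index"] }] }'

# a user holding that role, to be referenced by LOGSTASH_USERNAME/LOGSTASH_PASSWORD
curl -u elastic:YourElasticPassword -X POST "http://es:9200/_security/user/YourUsername" \
  -H 'Content-Type: application/json' \
  -d '{ "password": "YourPassword", "roles": ["logstash_writer"] }'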

Conclusion

I’ve shown you the steps for setting up basic security in an ELK (ElasticSearch, Kibana, Logstash) stack and how to make the different systems work together. While this article does not go into detail about all the available options, it combines the setup steps, scattered across the documentation pages, in one place. I hope this was helpful for you!
