In my previous post, I showed a way to expose a Kibana (ELK) instance to the internet using Nginx. This helps us hide our internal infrastructure behind a secure reverse-proxy gateway. It doesn’t, however, mean that the Elastic Stack itself is secure. For that, we need to configure the security settings for the cluster and the supporting applications around it. In this article, I will show how to do exactly that.
Naturally, these settings are added to your configuration files, like elasticsearch.yml, logstash.yml, or kibana.yml.
Since I am using Docker, from here on I will show how to apply these settings using environment variables in Docker or Kubernetes containers.
Prerequisites
Before setting up the configuration, we need to complete some preparations.
- If you don’t have a certificate authority, you need to generate one using:
```
elasticsearch-certutil ca
```
You will be prompted for a file name and password. If you keep the default file name, the above command will create a file named elastic-stack-ca.p12. Be sure to save this file and remember the password, because you will need them for the rest of the setup and for adding new nodes to the ES cluster.
- You need to create a certificate and private key for each node in your cluster. You can do this by running the following shell command:
```
elasticsearch-certutil cert --ca elastic-stack-ca.p12
```
First, you will be prompted for the certificate authority password; after that, you can once again choose a file name and password for the node certificate file.
The generated certificates do not include hostname information (SAN fields). For this reason, we will disable hostname verification later, during the ES configuration step. If you want to use hostname verification, generate the certificates with the following command instead:
```
elasticsearch-certutil cert --name <name> --dns <dns> --ip <ip> --ca <path/to/ca>
```
- Generate an HTTP certificate for use with Kibana with the following command:
```
elasticsearch-certutil http
```
When prompted with the question “Generate a CSR?”, select ‘n’, because we are creating this certificate only to satisfy the security requirements of X-Pack. If you want a trusted authority, such as an internal security team or a commercial certificate authority, to sign your certificate, choose ‘y’.
When prompted for a certificate authority, select ‘y’ and enter the full path to the CA file we created in step 1.
You will be prompted for the password to the CA file.
Next, you will be asked how long the certificate should be valid for. I chose 5y (5 years), since I don’t want to deal with this again for a long time.
Finally, there is a series of questions asking for information about how the cluster will be accessed, such as hostnames and IP addresses, and whether the certificates should be valid for one or multiple nodes.
At the very end, you will be asked for a password for the certificate and a name for the zip file that the tool creates, which contains the certificate files for ElasticSearch and Kibana.
With this, we have finished the preparation. Next, it’s time to start the configuration.
ElasticSearch
Create a “certs” folder inside your ElasticSearch configuration folder and copy all of the files we created into it. The location of the configuration folder in a Docker container is /usr/share/elasticsearch/config.
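For a Docker setup, one simple way to get the certificates into the container is to mount the folder as a volume. A minimal docker-compose.yml sketch, assuming the certificates sit in a certs folder next to the compose file:

```
volumes:
  - ./certs:/usr/share/elasticsearch/config/certs
```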
Set the following environment variables in either your Dockerfile or Kubernetes deployment. (I set them in docker-compose.yml, under the environment: key of the ElasticSearch service.)
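Here is a minimal sketch of the security-related variables, assuming the default node certificate file name elastic-certificates.p12 from step 2; verification_mode is set to certificate because our certificates carry no hostname information:

```
environment:
  - xpack.security.enabled=true
  - xpack.security.transport.ssl.enabled=true
  - xpack.security.transport.ssl.verification_mode=certificate
  - xpack.security.transport.ssl.keystore.path=certs/elastic-certificates.p12
  - xpack.security.transport.ssl.truststore.path=certs/elastic-certificates.p12
```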
Note: if you’ve set passwords for the certificates, you have to add these passwords to the ES keystore like this:

```
elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
```
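If you use the same .p12 file as both keystore and truststore, as in the sketch above, you will most likely need the truststore password as well:

```
elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
```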
- Start or restart your ElasticSearch cluster.
- Change the passwords of the default built-in users (elastic, kibana_system, logstash_system, etc.) using:

```
elasticsearch-setup-passwords interactive
```
After all this, we have a secure ElasticSearch cluster, but other tools, like Kibana and Logstash, will be unable to connect to it, so let’s see how to set those up.
Kibana
Setting up Kibana is easy: it’s just a matter of adding some configuration to your kibana.yml or, in my case, to the environment variables of the Docker or Kubernetes container, under the environment: key.
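Here is a minimal sketch, assuming the ElasticSearch container is reachable under the hostname elasticsearch and that you replace the placeholder password with the one set during the elasticsearch-setup-passwords step:

```
environment:
  - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  - ELASTICSEARCH_USERNAME=kibana_system
  - ELASTICSEARCH_PASSWORD=changeme
```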
Notes:
- Even though we have enabled security on the ES side, we still haven’t set up an encrypted HTTP connection, so we need to use the http:// protocol in our connection settings instead of https://.
- I am using an example with the default username kibana_system and the password that we set during the elasticsearch-setup-passwords step, but you’re free to use any user and password you like (it’s easy to create one from Kibana itself, or via the security API, as sketched below).
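For the API route, here is a hedged sketch using ElasticSearch’s create-user endpoint; the user name, password, and role below are placeholders of my choosing, not values from this setup:

```
curl -u elastic:changeme -X POST "http://elasticsearch:9200/_security/user/my_kibana_user" \
  -H "Content-Type: application/json" \
  -d '{ "password": "a-strong-password", "roles": ["kibana_admin"] }'
```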
Logstash
Setting up Logstash to work with a secure ES cluster is just a matter of setting the username and password in the elasticsearch { } output block of the pipeline that uses it.
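A minimal sketch of such an output; the host, credentials, and index pattern below are placeholders, not values from my setup:

```
output {
  elasticsearch {
    hosts    => ["http://elasticsearch:9200"]
    user     => "elastic"
    password => "changeme"
    index    => "logstash-%{+YYYY.MM.dd}"
  }
}
```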
You can put the values directly in the configuration, or, if you have multiple outputs to the same ES cluster, you can supply them as environment variables, under the environment: key of your docker-compose.yml, like this:
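For example (the variable names ES_USER and ES_PASSWORD are my own choice):

```
environment:
  - ES_USER=elastic
  - ES_PASSWORD=changeme
```

Logstash resolves ${VAR} references in pipeline configuration from the environment, so the output above can then use user => "${ES_USER}" and password => "${ES_PASSWORD}".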
Conclusion
I’ve shown you the steps for setting up basic security in the ELK (ElasticSearch, Kibana, Logstash) stack and how to make the different systems work together. While this article does not go into detail about all the available options, it gathers the setup steps, scattered across the documentation pages, in one place. I hope this was helpful for you!
Sources:
Links used as a base for this article:
- Set up X-Pack
- Configuring security in ElasticSearch
- ElasticSearch Security settings reference
- Generating ES certificates
- ES Built-in users