In my previous post, I showed a way to expose a Kibana (ELK) instance to the internet using Nginx. This helps us hide our internal infrastructure behind a secure reverse proxy gateway. It does not, however, mean that the actual Elastic Stack is secure. To achieve that, we need to configure the security settings for the cluster and the related supporting applications. In this article, I will show how to do exactly this.

Naturally, these settings go into your configuration files, such as elasticsearch.yml, logstash.yml or kibana.yml.
Since I am using Docker, from now on I will show how to apply these settings using environment variables in Docker or Kubernetes containers.
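For reference, a setting from elasticsearch.yml maps one-to-one to a container environment variable. A minimal sketch (the service name "es" here is just an example):

# elasticsearch.yml
xpack.security.enabled: true

# the same setting passed through docker-compose.yml
es:
  environment:
    - xpack.security.enabled=true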

Prerequisites

Before setting up the configuration, we need to complete some preparations.

  1. If you don’t have a certificate authority, you need to generate one using:
elasticsearch-certutil ca

You will be prompted for a file name and password. If you keep the default file name, the command above will create a file named elastic-stack-ca.p12. Be sure to save this file and remember the password, because you will need them for further setup and for adding new nodes to the ES cluster.
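If you prefer to skip the prompts (for example, in a provisioning script), the tool also accepts the output file and password as options. A small sketch, assuming the default file name (the password is a placeholder):

elasticsearch-certutil ca --out elastic-stack-ca.p12 --pass "MyCAPassword"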

  2. You need to create a certificate and private key for each node in your cluster. You can do this by running the following shell command:
elasticsearch-certutil cert --ca elastic-stack-ca.p12

First, you will be prompted for the certificate authority password; after that, you can once again choose a file name and password for the node certificate file.
The generated certificates do not include hostname information (SAN fields). For this reason, we will disable hostname verification later, during the ES configuration step. If you want to use hostname verification, generate the certificates using the following command:

elasticsearch-certutil cert --name <name> --dns <dns> --ip <ip> --ca <path/to/ca>
  3. Generate an HTTP certificate for use with Kibana with the following command:
elasticsearch-certutil http

When prompted with the question “Generate a CSR?”, select ‘N’, because we are creating this certificate just to satisfy the security requirements of X-Pack. If you want a trusted authority, such as an internal security team or a commercial certificate authority, to sign your certificate, choose ‘y’ instead.

When prompted for a certificate authority, select ‘y’ and enter the full path to the file we created in step 1.
You will then be prompted for the password to the CA file.
Next, you will be asked how long the certificate should be valid for. I chose 5y (5 years), since I don’t want to deal with this again for a long time.
Finally, there will be a series of questions asking for information on how you want the cluster to be accessed, such as hostnames, IP addresses, and whether the certificates will be valid for one or multiple nodes.

At the very end, you will be asked for a password for the certificate and a name for the zip file that this tool will create, which contains the certificate files for Elasticsearch and Kibana.
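The result is a zip archive (elasticsearch-ssl-http.zip by default) with a separate folder per product. Unpack it to get to the individual files:

unzip elasticsearch-ssl-http.zip
# typically contains:
#   elasticsearch/http.p12      - the HTTP keystore for Elasticsearch
#   kibana/elasticsearch-ca.pem - the CA certificate that Kibana should trust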

With this, we have finished the preparation. Next, it’s time to start the configuration.

Elasticsearch

  1. Create a “certs” folder inside your Elasticsearch configuration folder and copy all of the files we created into it (see the volume mount sketch below).
    The location of the configuration folder in a Docker container is /usr/share/elasticsearch/config
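If you are using docker-compose, a simple way to get the files into the container is a bind mount. A minimal sketch, assuming the certificates are in a local ./certs folder:

es:
  volumes:
    - ./certs:/usr/share/elasticsearch/config/certs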

  2. Set the following environment variables in either your Dockerfile or Kubernetes deployment:
    (This is how I set them in my docker-compose.yml.)

environment:
  - xpack.security.enabled=true
  - xpack.security.transport.ssl.enabled=true
  - xpack.security.transport.ssl.verification_mode=certificate
  - xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/certs/node-cert.p12
  - xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/certs/node-cert.p12

Note: if you’ve set passwords for the certificates, you have to add these passwords to the ES keystore like this:

elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
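With Docker, these commands have to run inside the container, so the passwords end up in the keystore that Elasticsearch actually reads. A sketch, assuming the container is named “es”:

docker exec -it es bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
docker exec -it es bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password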
  3. Start or restart your Elasticsearch cluster.
  4. Change the default users’ passwords using elasticsearch-setup-passwords interactive

After all this, we have a secure Elasticsearch cluster, but we will be unable to connect from other tools like Kibana or Logstash, so let’s see how to set those up.

Kibana

Setting up Kibana is easy - it’s just a matter of adding some configuration to your kibana.yml or, in my case, adding it to the environment variables for use with Docker or Kubernetes. Here is my configuration:

environment:
  - 'SERVER_BASEPATH=/kibana'
  - 'SERVER_HOST=0.0.0.0'
  - 'ELASTICSEARCH_URL=http://es:9200'
  - 'ELASTICSEARCH_HOSTS=http://es:9200'
  - 'XPACK_SECURITY_ENABLED=true'
  - 'XPACK_SECURITY_ENCRYPTIONKEY=SomeRandomStringHere'
  - 'SERVER_REWRITEBASEPATH=true'
  - 'ELASTICSEARCH_SSL_VERIFICATIONMODE=none'
  - 'ELASTICSEARCH_USERNAME=kibana_system'
  - 'ELASTICSEARCH_PASSWORD=YourPassword'

Notes:

  • Even though we have enabled security on the ES side, we still haven’t enabled TLS on the HTTP layer, so we need to use the HTTP protocol in our connection settings instead of HTTPS
  • I am using an example with the default username kibana_system and the password that we set in step 4 of the Elasticsearch setup, but you’re free to use any user and password that you like (it’s easy to create one from Kibana itself)
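If you are not running Kibana in a container, the same settings go into kibana.yml under their dotted names. A sketch of the security-related part:

elasticsearch.username: "kibana_system"
elasticsearch.password: "YourPassword"
elasticsearch.ssl.verificationMode: none
xpack.security.encryptionKey: "SomeRandomStringHere"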

Logstash

Setting up Logstash to work with a secure ES cluster is just a matter of setting the username and password in the Elasticsearch output of the pipeline that uses it. Here is an example configuration:

elasticsearch {
    hosts => ["${ELASTIC_LOGS_SERVER}"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => '${LOGSTASH_USERNAME}'
    password => '${LOGSTASH_PASSWORD}'
}

You can replace the values directly in the configuration or, if you have multiple outputs to the same ES cluster, you can set them up as environment variables like this:

environment:
  - ELASTIC_LOGS_SERVER=http://es:9200
  - LOGSTASH_USERNAME=YourUsername
  - LOGSTASH_PASSWORD=YourPassword
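Put together, a minimal pipeline using these variables could look like the sketch below (the beats input on port 5044 is an assumption for illustration, not part of the original setup):

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["${ELASTIC_LOGS_SERVER}"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "${LOGSTASH_USERNAME}"
    password => "${LOGSTASH_PASSWORD}"
  }
}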

Conclusion

I’ve shown you the steps for setting up basic security in the ELK (Elasticsearch, Logstash, Kibana) stack and how to make the different systems work together. While this article does not go into detail about all the available options, it combines all of the setup steps, scattered across the documentation pages, in one place. I hope this was helpful for you!


Have you tried to set up Kibana in a subpath, just to be met by the error {"statusCode":404,"error":"Not Found","message":"Not Found"}?
Or maybe you want to secure your infrastructure using an Nginx reverse proxy.

I will show you how to do these two things at the same time.
I’ve written this guide because the information in it was scattered across many pages and took time to find and test.

Setting up the Nginx reverse proxy

There’s not much to it, just add the following snippet to your configuration:

location ~ /kibana {
    proxy_pass http://kibanaURL:5601;
}

This tells Nginx to forward all the traffic coming to the /kibana subpath to your Kibana server.
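Depending on your setup, you may also want to pass the usual proxy headers along, so Kibana sees the original client and host. A sketch of an extended location block (the header choices are assumptions - adjust them to your environment):

location ~ /kibana {
    proxy_pass http://kibanaURL:5601;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}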

Setting up Kibana

Now, there are two ways to configure Kibana - through environment variables when using a Docker container, or through kibana.yml.
Since I’m using Docker, let me start with that one.

kibana:
  image: docker.elastic.co/kibana/kibana:7.11.0
  container_name: Kibana
  ports:
    - '5601:5601'
  environment:
    - 'SERVER_BASEPATH=/kibana'
    - 'SERVER_HOST=0.0.0.0'
    - 'ELASTICSEARCH_URL=http://es:9200'
    - 'ELASTICSEARCH_HOSTS=http://es:9200'
    - 'XPACK_SECURITY_ENABLED=true'
    - 'SERVER_REWRITEBASEPATH=true'

The two important things here are SERVER_BASEPATH, which tells Kibana to serve its pages from /kibana instead of /, and
SERVER_REWRITEBASEPATH, which tells Kibana to handle rewriting of page and API URL requests coming in under the /kibana subpath.
You can set up your server to do this rewriting instead, but using the Kibana setting is a lot easier most of the time.

In the yml file, these two settings are called:

server.basePath
server.rewriteBasePath
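So the equivalent kibana.yml configuration would look like this:

server.basePath: "/kibana"
server.rewriteBasePath: true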

That’s it! I hope this saved you some time.

Additional considerations

If you’re planning to expose your Kibana app to the internet through a reverse proxy, make sure you have the proper security configuration in place.

Inspiration:

https://serverfault.com/questions/775958/reverse-proxy-for-nginx-configuration-for-subpath
https://discuss.elastic.co/t/kibana-and-nginx-in-subpath/90280/5
https://www.elastic.co/guide/en/kibana/master/settings.html#server-rewriteBasePath
https://stackoverflow.com/questions/17423414/nginx-proxy-pass-subpaths-not-redirected
https://forum.chirpstack.io/t/running-application-behind-reverse-proxy-with-subpath/7057

Vector.io is a new observability tool that is marketed as a one-size-fits-all solution for log parsing, data transformation, metrics aggregation and event collection. According to its creators, it’s Fast, Reliable, Unified, Vendor neutral, Customizable and Concise. Recently I had to decide whether we should migrate our data pipeline to a new stack, and this tool was recommended by a co-worker, so I decided to make this evaluation.

Read More

The problem

If you’re here, then you probably have a Node.js application running in Cluster mode, either through the native Node APIs or through a process manager like PM2. In this mode, a load balancer distributes the computational work between several child processes. Each of these child processes has its own statistics for resource usage. If you’re using something like Prometheus to collect custom metrics, those are also saved per process. This results in jagged or incorrect results when you try to display and analyze the data in a tool such as Grafana. The question is: how do we collect all these metrics and aggregate them for easy consumption in one place?

Read More

Configuration

  1. Create a custom maintenance page that you would like to display to your users.
  2. Change your Nginx configuration to include the following:
include /etc/nginx/extra.d/maintenance.conf;


location / {
    # Add the following "if" statements inside your "location" directive
    if (-e /var/tmp/nginx/maintenance) {
        set $maintenance on;
    }
    if ($intra) {
        set $maintenance off;
    }
    if ($maintenance = on) {
        error_page 503 /maintenance.html;
        return 503;
    }
    ...
}
...

The dots in the above example are placeholders for the rest of your configuration file and should be removed.

  3. Edit the maintenance.conf file under /etc/nginx/extra.d/maintenance.conf:
set $maintenance off;

location = /maintenance.html {
    internal;
}
  4. (Optional) If you want to exclude some IPs or an IP range from hitting the maintenance page (e.g. for development), edit your geo.conf at /etc/nginx/conf.d/geo.conf:
geo $intra {
    default 0;
    127.0.0.1 1;
    10.0.0.0/8 1;
    100.0.0.0/26 1;
}
  5. Test the configuration and restart Nginx:
$ nginx -t
$ sudo systemctl restart nginx

Switching in and out of maintenance mode

Switching maintenance mode on and off is very easy - just create or delete a file.

Switch on maintenance mode

$ touch /var/tmp/nginx/maintenance

Switch off maintenance mode

$ rm /var/tmp/nginx/maintenance
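If you switch often, you can wrap both commands in a small helper script. A sketch (toggle-maintenance.sh is a hypothetical name, not part of the original setup):

#!/bin/sh
# toggle-maintenance.sh - create or remove the Nginx maintenance flag file
FLAG=/var/tmp/nginx/maintenance
if [ -e "$FLAG" ]; then
    rm "$FLAG" && echo "Maintenance mode OFF"
else
    touch "$FLAG" && echo "Maintenance mode ON"
fi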

When it was first released in 1999, PayPal was revolutionary. I created my account with the service in 2004, when I was still in high school.
Back then, PayPal was the only way to transfer money easily online. Even more, with no local alternative, it was irreplaceable.
Times have changed, however, but PayPal keeps its bad practices the same. In this article I will mention some of the appalling ways of PayPal and suggest how to avoid them.

This post contains a lot of nitpicking, so if you don’t like that, please look away.

Read More

Since 2008 this blog had been running on WordPress. That makes WordPress my loyal servant for almost 12 years. However, like everything else, WordPress started showing its age. The performance of the PHP-powered system started lagging behind some of the alternatives. And while it is a great general-purpose solution, used for anything from hobby websites to ecommerce shops, I probably didn’t use 10% of the features that WordPress provided. Mainly because of these two reasons, I decided to migrate to something simpler and easier (as well as cheaper) to manage.

After some searching on the internet, I saw that a system called Hexo is a hot thing right now, so I decided to go with it.

Read More

Today I’d like to show you how to make a Logstash Docker container output its operational logs to a log file inside the container. I’m writing specifically about this because the official Logstash documentation is a bit vague, and unless you know how logging with the third-party library log4j2 works in Java (the language the ELK stack is written in), you might struggle with this issue like I did.

Read More

Previously, I wrote a post explaining the benefits of using Revolut over a traditional bank. Well, not anymore. This time I will be writing about why I’ve decided not to use Revolut anymore - simply because it isn’t worth it anymore.

Aside from the annoying emails asking you to upload a new ID document and photo every couple of months, the only other information I am subscribed to receive from Revolut is their policy updates. This is what their latest email looks like:

Read More

MONTH           1ST SIGNING PERIOD   1ST PAYMENT DAY   2ND SIGNING PERIOD   2ND PAYMENT DAY
April 2020      1-8 Apr              28 Apr            9-22 Apr             20 May
May 2020        7-8 May              27 May            11-22 May            15 Jun
June 2020       1-3 Jun              22 Jun            4-22 Jun             15 Jul
July 2020       1-3 Jul              22 Jul            6-27 Jul             18 Aug
August 2020     3-5 Aug              25 Aug            6-24 Aug             11 Sep
September 2020  1-2 Sep              17 Sep            3-26 Sep             15 Oct
October 2020    1-6 Oct              28 Oct            7-26 Oct             17 Nov
November 2020   2-5 Nov              25 Nov            6-24 Nov             15 Dec
December 2020   1-4 Dec              22 Dec            7-21 Dec             21 Jan
January 2021    6-8 Jan              28 Jan            12-22 Jan            16 Feb
February 2021   1-3 Feb              24 Feb            4-22 Feb             11 Mar
March 2021      1-2 Mar              17 Mar            3-26 Mar             15 Apr