As a web software developer, you will often face problems like setting up various servers, be it web servers like Apache, nginx or LiteSpeed, or the working environment for a scripting language like PHP or Node.js. This setup brings a lot of compatibility and security issues with it. In recent years, Docker has established itself as the go-to solution when setting up a local or remote working environment. While on the local developer’s machine it’s as easy as running brew install docker or apt install docker, creating a real production server is not that simple. Production servers should usually not run any software as the root (administrator) user, to avoid security issues, yet this is exactly how Docker runs by default - with full system privileges. As you can imagine, having an insecure application in an enterprise environment is not desirable. Fortunately, Docker has a setup called “rootless mode” that allows running the software as another user with restricted privileges. I will show how to install Docker + Docker Compose using terminal commands, and how to automate the process with Ansible - a server configuration tool.
About operating systems
I will be installing rootless Docker on Ubuntu 20.04 “Focal”, but you can adapt this to pretty much any OS with minimal changes. However, if you’re thinking of doing an installation on CentOS 7.x, please don’t do this to yourself - I tried, and spent countless hours fixing the outdated and broken OS, its package repositories and dependencies. In the end you will still not be able to fully use the latest Docker, because CentOS 7 ships with Linux kernel 3.x, and Docker has some features that are only supported on Linux kernel 5.x. You might have better luck with CentOS 8.x or 9.x, but I have not tested those versions.
Rootless Docker installation
I will be writing the terminal commands and showing the Ansible configuration in parallel, so you can choose which one to use.
Ansible setup
If you don’t want to deal with Ansible and just want to get the Docker installation steps, you can skip this section.
# Install the Ansible tool and sshpass (required for remote SSH connections by Ansible)
sudo apt install ansible sshpass python3-pip

# Since Ansible is a Python application, we will also need some Python libraries
sudo pip3 install docker docker-compose jsondiff

# (Optional) - do not install docker-py together with docker, as this may cause package
# corruption and errors. Install docker-py only if you need Python 2.6 support.
#sudo pip3 install docker-py
# Install the Ansible collections for working with the Docker API
ansible-galaxy collection install community.docker
ansible-galaxy collection install ansible.posix
If you’re using an old OS, like Ubuntu 20.04 Focal, the Ansible version installed from the default repositories (2.9.6) is old and buggy. You will have to take a few extra steps to get an up-to-date version, or your playbooks might fail with inexplicable errors (as mine did).
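One way to do that (a sketch, assuming you are fine with replacing the distribution package with a pip-managed installation) is:

# Remove the outdated distribution package and install a recent release from PyPI
sudo apt remove ansible
sudo pip3 install --upgrade ansible
ansible --version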
Both Docker and Ansible support the default Linux proxy settings via the http_proxy and https_proxy environment variables, so if you find yourself behind a corporate proxy, just set them like this:
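# The proxy address below is a placeholder - replace it with your own corporate proxy
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
# Optionally, exclude local addresses from being proxied
export no_proxy=localhost,127.0.0.1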
# https://docs.ansible.com/ansible/latest/collections/ansible/builtin/user_module.html
- name: Add docker user - "{{ docker_username }}"
  user:
    name: "{{ docker_username }}"
    # Set a specific ID for the user
    uid: 3000
    shell: /bin/bash
    group: "{{ docker_usergroup }}"
    # no - only add the user to the docker group and remove them from any other group
    # yes - add the user to the specified groups without removing them from other groups
    append: no
    # https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module
    password: "<your_encoded_linux_user_password_here>"
# Systemd kills all of the user's processes after the SSH session is disconnected, so we have to enable session lingering.
# See https://serverfault.com/questions/463366/does-getting-disconnected-from-an-ssh-session-kill-your-programs
# and https://github.com/systemd/systemd/issues/8486
- name: "Check if {{ docker_username }} lingers"
  stat:
    path: "/var/lib/systemd/linger/{{ docker_username }}"
  register: linger
# Without this, commands like wget throw an error (wget error: bad address)
- name: Needed for docker networking
  apt:
    name: slirp4netns
    state: present
    update_cache: yes
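With these prerequisites in place, the actual installation can be done with Docker’s official rootless install script, run as the unprivileged user (in Ansible this would typically be a shell task executed with become_user set to "{{ docker_username }}"). A minimal sketch of the manual steps, assuming the uid of 3000 configured above:

# Run as the unprivileged docker user, NOT as root
curl -fsSL https://get.docker.com/rootless | sh

# The installer prints the environment variables it needs; with uid 3000 they look like this
export PATH=$HOME/bin:$PATH
export DOCKER_HOST=unix:///run/user/3000/docker.sock

# Enable and start the per-user systemd service
systemctl --user enable docker
systemctl --user start docker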
And of course, for those of you who want to use Ansible, here is what the hosts.ini file looks like:
; Use the name in the brackets when referencing the server group in the ansible-playbook command
[beta_nodes]
bme_beta ansible_connection=ssh ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_host=my.beta.host.domain.name
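Running the playbook against that group then looks something like this (docker-rootless.yml is a placeholder for whatever you named your playbook):

# --limit restricts the run to the [beta_nodes] group from hosts.ini,
# --ask-pass prompts for the SSH password (this is what sshpass was installed for)
ansible-playbook -i hosts.ini docker-rootless.yml --limit beta_nodes --ask-pass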
That’s it - if you just wanted to see how to install rootless Docker, you’re good to go. If you want to see how to start an application in Docker, stay and read the next section.
Deploying the application in Docker
First, create your Dockerfile and docker-compose.yml files
FROM registry-jpe2.r-local.net/dockerhub/library/nginx:latest
upstream bmaas {
    # Here we are listing the host names of the applications we want to load balance.
    server simple-web:80 max_fails=3 fail_timeout=60s;
}
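The docker-compose.yml is not reproduced in full here; a minimal sketch of the layout assumed in the rest of the article (an nginx-proxy container built from the Dockerfile above, in front of the simple-web application) could look like this - the image name and ports are placeholders:

version: "3.8"

services:
  # The application we will later update without downtime (no container_name,
  # so that docker-compose can scale it to two instances)
  simple-web:
    image: registry.example.com/simple-web:latest   # placeholder image
    expose:
      - "80"

  # The nginx reverse proxy built from the Dockerfile above
  nginx-proxy:
    container_name: nginx-proxy
    build: .
    ports:
      - "80:80"
    depends_on:
      - simple-web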
Starting up our application
docker compose up -d
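To confirm that both containers came up, you can check the service status and watch the proxy logs (the output will differ depending on your setup):

# List the services and their current state
docker compose ps
# Follow the logs of the proxy container
docker compose logs -f nginx-proxy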
Zero-downtime updates
Now it’s time to see why we needed Nginx alongside our application. While we could make this work with only the application container, most of the time you will want to be able to update the application and deploy changes without interrupting your users. To do this, we need a way to run the new and the old version of the application simultaneously, and to stop the old version only after no users are using it anymore. Here is one way to do this.
zero_downtime_deploy() {
  # The name of the application we will be updating
  service_name=simple-web
  old_container_id=$(docker ps -f name=$service_name -q | tail -n1)

  # bring a new container online, running new code
  # (nginx continues routing to the old container only)
  docker-compose up -d --no-deps --scale $service_name=2 --no-recreate $service_name

  # wait for new container to be available
  new_container_id=$(docker ps -f name=$service_name -q | head -n1)
  new_container_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $new_container_id)
  docker exec nginx-proxy curl --silent --include --retry-connrefused --retry 30 --retry-delay 1 --fail http://$new_container_ip:8082/ || exit 1

  # start routing requests to the new container (as well as the old)
  reload_nginx

  # take the old container offline
  docker stop $old_container_id
  docker rm $old_container_id

  docker-compose up -d --no-deps --scale $service_name=1 --no-recreate $service_name

  # stop routing requests to the old container
  reload_nginx
}

zero_downtime_deploy
The above shell script takes care of creating a new instance of our application, checking that it is running correctly, and updating Nginx so that it starts sending traffic to the new application container.
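The script relies on a reload_nginx helper that isn’t shown above. A minimal sketch, assuming the proxy container is named nginx-proxy as in the health check:

reload_nginx() {
  # Tell the nginx master process inside the proxy container to reload its
  # configuration, which also re-resolves the upstream container addresses
  docker exec nginx-proxy nginx -s reload
}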