This is the fifth part of the series Save yourself from a disaster: Redundancy on a budget.
How can we make sure our second most important asset is safely secured in case of a disaster?
We could do mainly three things:
- Duplicate VM
Nowadays many cloud providers (and virtualization platforms) give you the possibility to take a snapshot of a VM and then restore or clone it. I won't cover this in this tutorial, as it would increase the overall cost of the infrastructure. That said, depending on the application, cloning the VM can sometimes be much more time-saving than the method I propose below.
We live in 2021: everyone is running containers and wishing for a k8s cluster to play with. So let's convert the simple applications into containers; there are plenty of ready-made images on Docker Hub.
First, set up your nodes. I'm going to use standard images for my dockerized infrastructure, no custom images (for now, since my configurations are pretty simple). I've picked Bitnami images, as they cover a lot of scenarios and provide pre-packaged images for most of the popular server software (among other reasons to pick them).
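As a rough sketch, turning those nodes into a Swarm cluster takes just a couple of commands (the manager IP and join token below are placeholders):

```shell
# On the manager node (10.0.0.1 is a placeholder address)
docker swarm init --advertise-addr 10.0.0.1

# The init command prints a join command; on each worker node run
# something like this (the token is a placeholder):
docker swarm join --token SWMTKN-1-xxxx 10.0.0.1:2377

# Back on the manager, verify that all nodes joined
docker node ls
```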
If you really want to start using custom images, you could publish them publicly for free on Docker Hub (which has recently introduced some limitations) or on Canister. After Docker Hub's announcement about pull-rate limits, AWS decided to offer public repositories (and they are almost free as long as you don't exceed 500 GB/month of data transfer when unauthenticated, or 5 TB/month when logged in).
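For reference, publishing a custom image is just a build, tag and push; the image name here is a placeholder, and the registry hostname would differ if you push to something other than Docker Hub:

```shell
# Build and tag the image (myuser/myapp is a placeholder name)
docker build -t myuser/myapp:1.0 .

# Log in to Docker Hub and push
docker login
docker push myuser/myapp:1.0
```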
This is an example of a WordPress website configured with docker-compose:
```yaml
version: "3.9"
services:
  wordpress:
    image: wordpress:5.7.0
    ports:
      - 8000:80
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      WORDPRESS_DB_HOST: host.docker.internal:3306
      WORDPRESS_DB_USER: "***"
      WORDPRESS_DB_PASSWORD: "***"
      WORDPRESS_DB_NAME: "***"
    volumes:
      - /path/to/wp-content:/var/www/html/wp-content
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
```
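With Swarm mode enabled, a compose file like this is deployed as a stack (the stack name below is arbitrary):

```shell
# Deploy the compose file as a Swarm stack named "wp"
docker stack deploy -c docker-compose.yml wp

# Check that the replicas are up
docker service ls
docker service ps wp_wordpress
```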
When using Docker Swarm with lots of containers and services (each of which binds a dedicated port), you'll need an ingress system to route requests to the right service. You could use one of the two most popular solutions: Nginx or Traefik.
I decided to use a simple bitnami/nginx with a custom config (a pretty straightforward proxy):
```yaml
version: "3.9"
services:
  client:
    image: bitnami/nginx:1.19.8
    ports:
      - 80:8080
      - 443:8443
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - /root/docker-compose/nginx/lb.conf:/opt/bitnami/nginx/conf/server_blocks/lb.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt
```
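I won't reproduce my exact lb.conf here, but a minimal proxy server block along these lines gives the idea (the domain name and the upstream port are placeholders; 8080 is the port the non-root bitnami/nginx image listens on):

```nginx
# Minimal reverse-proxy sketch; example.com and port 8000 are placeholders
server {
  listen 8080;
  server_name example.com;

  location / {
    proxy_pass http://host.docker.internal:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```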
This is the tricky part. If you have already bought certificates (e.g. from SSLs) you're good for at least a year. If you don't want to buy them and want to rely on Let's Encrypt instead, be ready to sweat a bit to set it up. Setting it up on one node is pretty simple, but if you need to replicate certificates across multiple nodes you'll have to get creative.
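On a single node, issuance with certbot looks roughly like this (example.com is a placeholder domain; the standalone authenticator needs port 80 free while it runs):

```shell
# Issue a certificate with certbot's standalone authenticator
# (example.com is a placeholder domain)
certbot certonly --standalone -d example.com -d www.example.com

# The certificate and key end up under /etc/letsencrypt/live/example.com/
```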
One possible solution is to have a primary node that generates (or renews) the certificate(s) and then distributes them to the other servers:
```shell
rsync -e "ssh -i $HOME/.ssh/somekey" -auv --progress /etc/letsencrypt/ root@<IP2>:/etc/letsencrypt
rsync -e "ssh -i $HOME/.ssh/somekey" -auv --progress /etc/letsencrypt/ root@<IP3>:/etc/letsencrypt
```
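To keep this automatic, the sync step can be attached to renewal, for example as a certbot deploy hook triggered from cron (the helper script path and the schedule below are assumptions):

```
# Crontab entry on the primary node: attempt renewal daily at 03:00.
# --deploy-hook runs only when a certificate was actually renewed;
# /root/sync-certs.sh is a hypothetical script wrapping the rsync commands above.
0 3 * * * certbot renew --deploy-hook /root/sync-certs.sh
```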
Kubernetes is more complex and requires more time to configure, but once done there is little vendor lock-in (many providers offer managed k8s), and it is more extensible than Swarm, at the price of that extra complexity.
If you already have a Docker Swarm cluster and want to migrate, try following these guides:
- From Docker-Swarm to Kubernetes – the Easy Way!
- Translate a Docker Compose File to Kubernetes Resources
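The compose-to-Kubernetes translation can be bootstrapped with kompose, which does a first pass over your existing compose file for you:

```shell
# Convert a compose file into Kubernetes manifests with kompose
kompose convert -f docker-compose.yml

# Review the generated YAML, then apply it to the cluster
kubectl apply -f .
```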
Remember to either use a dockerized database or rely on cloud-native managed solutions.
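For the dockerized-database route, a minimal sketch with bitnami/mariadb could look like this (credentials and the data volume path are placeholders):

```yaml
version: "3.9"
services:
  mariadb:
    image: bitnami/mariadb:10.5
    ports:
      - 3306:3306
    environment:
      MARIADB_ROOT_PASSWORD: "***"
      MARIADB_DATABASE: wordpress
      MARIADB_USER: "***"
      MARIADB_PASSWORD: "***"
    volumes:
      # Persist data outside the container; path is a placeholder
      - /path/to/mariadb-data:/bitnami/mariadb
```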
The next post will be about redundancy of DNS, stay tuned.
Check out the whole version of this post in the ebook.