How I set up a container-based Linux server

Some years ago, I discovered the power and flexibility of containers, a technology that allows you to package and run applications in a consistent environment. Excited about the possibilities, I embarked on a journey to create a private Linux server capable of hosting various services. This blog series, starting with this post, documents my experience and guides you through the process so that you can achieve the same, or perhaps even enhance it further!

As of writing this, my VPS is running these services:

- GitLab
- Nextcloud
- NGINX
- WireGuard
- WordPress
- SSH

Update 2024-02-18

This website has now been recreated with Hugo! That means the WordPress container is no longer active; the site is now served statically by NGINX.

With the exception of SSH, each of these services operates within its own container, allowing for seamless upgrades, modifications, or removal without complications.

Note

In this post I will not cover how to set up Docker or how to install the server itself. Instead, I’ll focus on discussing good practices to follow rather than providing an in-depth implementation guide. Detailed instructions on setting up each service will be covered in subsequent posts.

FYI, I am using Fedora Server 39, on which I installed Docker for container management.

Service availability on the network

When configuring a service, you have the flexibility to make it available across different ‘layers’ of the network. A service can be accessible:

- only on the host itself (localhost);
- on the local network (LAN);
- on the public network (the internet).

With containers, you can also control access, allowing a service to be reachable only by specific containers and not the host itself.
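For example, here’s a minimal compose sketch (the app image and network names are hypothetical) in which a database is reachable only by the app container: it publishes no ports and sits on a Docker network marked internal.

services:
  app:
    image: myapp:latest        # hypothetical application image
    ports:
      - "8080:8080"            # only the app is reachable from outside
    networks:
      - frontend
      - backend
  db:
    image: postgres:16
    networks:
      - backend                # db joins only the internal network

networks:
  frontend:
  backend:
    internal: true             # no route to the outside; only peers on this network can reach each other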

For security reasons, it’s advisable to host services exclusively on the local network, preventing exposure to the public network -- unless, of course, you intend to run a public website. In that scenario, limit exposure to essential ports such as 443 for HTTPS and, if absolutely necessary, port 80 to redirect HTTP traffic to HTTPS.
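As an illustration, the port-80 redirect can be a tiny NGINX server block (the domain is a placeholder):

server {
    listen 80;
    server_name example.com;                 # placeholder domain
    return 301 https://$host$request_uri;    # send all HTTP traffic to HTTPS
}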

But what if you need to access these services from an external network, like when you’re away from home? In such cases, the recommended approach is to set up a VPN (for example, using WireGuard, as I did). A VPN lets you establish a secure, encrypted connection to the local network from an external location.
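For reference, a bare-bones wg-quick client configuration looks roughly like this -- keys, addresses, and endpoint are placeholders to adapt to your setup:

[Interface]
# The client’s address inside the VPN subnet
Address = 10.0.0.2/32
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
# Public address and UDP port of the VPS
Endpoint = vps.example.com:51820
# Route only traffic for the VPN subnet through the tunnel
AllowedIPs = 10.0.0.0/24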

Important

One of the most critical services to set up securely is SSH. To harden it, disable root login, disable password-based logins, and use key-based authentication only. Avoid exposing SSH directly to the public network; instead, access it through a VPN. If that isn’t feasible, consider exposing it on a non-standard port instead of the default 22.
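As a sketch, the relevant directives in /etc/ssh/sshd_config would be along these lines (the port number is just an example):

# Disable root login entirely
PermitRootLogin no
# Allow key-based authentication only, no passwords
PubkeyAuthentication yes
PasswordAuthentication no
# Example non-standard port, only if access through a VPN is not feasible
Port 2222

After changing the file, restart sshd -- and keep your current session open until you’ve confirmed you can still log in.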

Set up the users of the server

On my VPS, I wanted a separation of concerns between my personal user and the one responsible for managing the various services, so I created an edotm user and a smanager user (“service manager”).

Why would I create two different users?

It’s mostly a matter of personal preference. I wanted to use my personal profile for activities like note-taking, personal projects, and more. Additionally, I preferred not to clutter my home directory with various sub-directories for the services.

Why not use the root profile?

I chose to have all services run under the same non-root user to make it easier to create services that manage other services. For instance, I envisioned an “auto-puller” service that rebuilds a custom service whenever a push is made to Git. That service needs permission to shut down and bring back up other services, but I didn’t want every service to have root access.
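For completeness, creating the two users could look something like this (note that membership in the docker group is effectively root-equivalent on the host, so rootless Docker is the stricter alternative):

# Personal user and service-manager user
sudo useradd -m edotm
sudo useradd -m smanager
# Let smanager talk to the Docker daemon without sudo
sudo usermod -aG docker smanager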

Handle containers’ access

I’ve structured the server to ensure that each container exists in an environment as isolated as possible -- after all, that’s one of the main reasons for using containers. It doesn’t make sense, in my view, for a container to have access to global server directories like /srv, /var, or /etc. This holds true even for NGINX. At most, containers should interact with Docker named volumes or local directories.

To achieve this, I’ve organized every service in its dedicated sub-directory:

~/services/
├── gitlab/
├── nextcloud/
├── nginx/
├── wireguard/
└── wordpress/

Each of these directories contains a docker-compose.yml file and possibly some sub-directories utilized by the containers for data or configuration.
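As an example, a hypothetical docker-compose.yml for the nginx directory might confine every bind mount to the service’s own sub-directories:

services:
  nginx:
    image: nginx:1.25
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Everything lives inside ~/services/nginx/
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./site:/usr/share/nginx/html:ro
      - ./certs:/etc/nginx/certs:ro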

Why use docker compose?

Why not? In fact, I believe it should be the standard way to run Docker containers. The difference is between pasting a lengthy, multi-line docker run command into the terminal every time a service needs to be (re)started, and writing a configuration file once with all the necessary options. The latter makes handling custom networks, volumes, bind mounts, and port publishing much more straightforward, and the service starts with a simple

docker compose up -d
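For contrast, the rough docker run equivalent of the hypothetical nginx service above would be:

docker run -d --name nginx \
  --restart unless-stopped \
  -p 80:80 -p 443:443 \
  -v "$PWD/config/nginx.conf":/etc/nginx/nginx.conf:ro \
  -v "$PWD/site":/usr/share/nginx/html:ro \
  -v "$PWD/certs":/etc/nginx/certs:ro \
  nginx:1.25

Typing that correctly on every restart is exactly what the compose file saves you from.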

Wrapping things up

And that concludes our exploration, at least for now. To summarize, the key principles I’ve adhered to -- and recommend you consider -- are:

- run each service in its own container;
- expose services on the local network only, and reach them from outside through a VPN;
- harden SSH: no root login, no passwords, key-based authentication only;
- keep your personal user separate from the user that manages the services;
- give every service its own directory with a docker-compose.yml, and confine its data there.

Stay tuned for more detailed insights into setting up individual services in the upcoming blog posts.

Tags: Docker, Hosting, Setup, Linux