How I Run Docker at Home

Running container workloads at home is fairly accessible; many consumer/homelab products support Docker containers out of the box. Products like Unraid, Proxmox LXC, TrueNAS Scale, and Synology all provide easy ways of running Docker containers.

Info

Keep in mind that I am a DevOps engineer by hobby, not profession. This post is based on my experience, personal testing, and simply what has worked well for me so far. There are very likely different ways to do everything I talk about here.

If you are new to the world of containerized workloads, this site details the general concept pretty well. In short, Docker containers are purpose-built application packages, commonly seen in microservices architectures. Docker isn't the only way to run containers, and not every container is a "Docker" container. When I refer to containers in this post, I specifically mean Docker containers running on the Docker Engine.

Containers are a common way to deploy applications at home. One of their major benefits is that they get away from the "it works on my computer" problem: a container typically bundles its package dependencies, so running the same application in different environments behaves much the same way.

So, how do I run containers at home?

Portainer

Put simply, Portainer is a tool to help manage containers. While it can be used in large scale and complicated environments, I like it because it provides a clear way to manage Docker hosts and containers in my homelab. Portainer is installed as two components, a Portainer server and a Portainer agent. This allows you to manage multiple Docker servers with one central manager.

This means:

One Docker server runs the Portainer Server (the central manager)

Every other Docker server runs a Portainer Agent

When installing Portainer, you are really just running a Docker container, which is the whole Portainer application. You could do this on a single Docker host if you wanted.
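As a rough sketch of what that looks like (based on Portainer's install docs; check them for the current image tags and ports), the server and each agent are a single docker run away:

# On the management host: create a data volume and run the Portainer Server
# (portainer-ee is the Business Edition image; use portainer-ce for Community)
docker volume create portainer_data
docker run -d -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ee:latest

# On each additional Docker host: run the Portainer Agent instead
docker run -d -p 9001:9001 --name portainer_agent --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest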

Portainer Design

The Portainer design I use includes a central Portainer console that manages 3 Docker servers.

Docker Breakdown

In deciding how I wanted to structure Docker, I tested a few options: Docker Swarm nodes on different Proxmox hosts, completely standalone Portainer systems, and ultimately I landed on this structure. It gives me one main node that, through the Portainer interface, can manage all of my Portainer nodes. The main Portainer node communicates with the other hosts via the Portainer agents installed on each individual Docker host.

Portainer Infrastructure

My Portainer hosts all run Debian 12 and were set up using the standard Linux installation steps for Docker and the Portainer installation steps for Docker. All 4 of my Portainer hosts run on my primary Proxmox hypervisor. While this doesn't buy me any high availability, the services I run in Docker aren't needed to run my home; Docker services fall into the category of homelab rather than home-prod. I would like to keep the services I'm running up, but they can be down if I need to reboot my main Proxmox system.
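For reference, a minimal sketch of bootstrapping a fresh Debian 12 VM using Docker's convenience script (one of several documented install paths; the apt repository method works just as well):

# Install Docker Engine via Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Make sure the engine starts on boot
sudo systemctl enable --now docker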

The hardware specs for these systems are as follows.

        Docker      Docker-Prod  Docker-WAN  Docker-Dev
OS      Debian 12   Debian 12    Debian 12   Debian 12
CPU     4 vCPU      16 vCPU      4 vCPU      4 vCPU
RAM     8 GB        24 GB        16 GB       8 GB
Disk    50 GB       120 GB       100 GB      40 GB

I've found that RAM is the most important spec for a Docker/Portainer host; CPU load for what I run doesn't really factor in. On the disk side, I used to try to use NFS or CIFS as remote storage for my container file system needs, but I ran into too many issues with containers not supporting either protocol or behaving poorly when the Docker host had a remote drive mapped. Now I just have Portainer store the Docker volumes locally and back up the whole machine with Proxmox Backup Server.
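A nice side effect of local volumes is that everything lives under /var/lib/docker/volumes on the VM's disk, so a whole-machine backup captures the container data too. If you want to confirm where a named volume actually lives, something like this works (the volume name is an assumption here; volumes created by a stack are prefixed with the stack name, e.g. metrics_grafana_data for the stack below):

docker volume inspect metrics_grafana_data --format '{{ .Mountpoint }}'
# typically prints: /var/lib/docker/volumes/metrics_grafana_data/_data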

Portainer Editions

Portainer comes in a Community Edition and a Business Edition. The Community Edition is Portainer's open source version and can be used by everyone. It offers most of the features a homelab user would need, but there are a couple of very beneficial features in the Business Edition. Luckily, Portainer offers the Business Edition for free if you have 3 or fewer Portainer nodes.

Warning

Portainer recently reduced the number of nodes you can use with their free Business Edition license from 5 to 3. If you have an existing 5 node license you are able to keep it, but any new requests are limited to 3 nodes. While I currently use 4 hosts, if I needed to drop to 3 I would either remove my Docker-Dev host or use Docker-Prod as the management node.

If you are wondering why you might want the Business Edition rather than the Community Edition, Portainer details the differences. For me, the most interesting Business Edition feature is something I've written about: letting Portainer Stacks copy files onto the Docker host so they can be referenced in docker-compose.yml files.

Deploying to Portainer

So we have our Portainer infrastructure; how do I actually deploy containers? Portainer gives us a few options. We can click around the UI and specify container options; think of this method as a normal docker run command. We can also deploy Portainer Stacks, the equivalent of using a docker-compose.yml file: a YAML file that defines the options for your container. Docker Compose can also deploy multiple containers at the same time, which tends to be used when grouping containers; for example, a "Metrics" Docker Compose file might create Grafana, Prometheus, and InfluxDB containers all at once.

An example of my metrics stack docker-compose.yml

This is intended as an example; you likely won't be able to copy it verbatim, as it relies on Business Edition features and is missing some data I'm not going to post on the internet 😄

version: "3.9"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    networks:
      - proxy
    volumes:
      - influx_var:/var/lib/influxdb2
      - influx_etc:/etc/influxdb2
    ports:
      - 8086:8086
    restart: unless-stopped

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    networks:
      - proxy
    volumes:
      - grafana_data:/var/lib/grafana
    ports:
      - 3000:3000
    restart: unless-stopped

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    network_mode: bridge
    volumes:
      - prometheus_data:/prometheus
      - /mnt/portainer-compose-unpacker/stacks/metrics/metrics/prometheus:/etc/prometheus
    ports:
      - 9090:9090
    restart: unless-stopped

# Named volumes must be declared at the top level, and the pre-existing
# "proxy" network (created outside this stack) is referenced as external;
# without these declarations Compose will refuse to deploy the stack.
volumes:
  influx_var:
  influx_etc:
  grafana_data:
  prometheus_data:

networks:
  proxy:
    external: true

Great! We have a docker-compose.yml file, and we have a few options now. We could just paste the contents of the docker-compose.yml file into a Portainer Stack and deploy; this is how I test things on my Docker-Dev Portainer host. When I'm ready to run something on my Docker-Prod or Docker-WAN hosts, I push the docker-compose.yml file and any associated configuration files to a gitlab.com repo.

Using Gitlab.com

I use gitlab.com (as opposed to github.com) as my code repository of choice. I'm familiar with Gitlab from my job, and having the option to self-host Gitlab is always a positive. Gitlab includes a Free tier that should work just fine for a homelab. While Gitlab has mentioned that data transfer may be charged for in the future, the current charges come from project storage (with a limit of 10 GB per namespace) and compute minutes. Compute minutes only apply when using gitlab.com shared runners; if you use your own self-hosted Gitlab runners, you will never consume billable compute minutes. If Gitlab does start charging for data transfer to and from their SaaS service, that could be a problem for homelabs, BUT you can always just self-host Gitlab, completely removing this concern/cost.

Utilizing gitlab.com to store docker-compose.yml files is pretty easy: you can use the Git CLI or work directly on the gitlab.com site to create or upload a docker-compose.yml file. In the Portainer interface, when you create a new Stack, you have the option of sourcing it from a Git repo. The Portainer documentation on this is pretty comprehensive and worth a read.
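If you go the CLI route, the flow is ordinary Git. The repository name and paths below are hypothetical; substitute your own:

# Clone your (private) stacks repo, drop the compose file in, and push
git clone git@gitlab.com:your-namespace/homelab-stacks.git
cd homelab-stacks
mkdir -p metrics
cp ~/docker-compose.yml metrics/docker-compose.yml
git add metrics/docker-compose.yml
git commit -m "Add metrics stack"
git push origin main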

Gitlab Project Visibility

An important note on this process relates to the visibility of your Gitlab repos. Since this is designed for homelab use, you will very likely want the Private repo setting, which prevents others from seeing your projects unless you want them to. Locking this down is important, but then how do we give Portainer access to pull the docker-compose.yml file? In Gitlab we can create a personal access token (PAT), allowing us to grant Portainer access to the Gitlab repository in question. Gitlab has the most up-to-date documentation on this functionality.
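A quick way to sanity-check a PAT before handing it to Portainer is to clone the repo over HTTPS with the token as the password (repo URL hypothetical; the PAT needs at least the read_repository scope):

git clone https://your-gitlab-username:$GITLAB_PAT@gitlab.com/your-namespace/homelab-stacks.git
# In Portainer, the same pair goes into the Stack's Git authentication fields:
# username = your Gitlab username, password = the PAT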

Simple Gitlab to Portainer Flow

Deploying a docker-compose.yml from Gitlab to Portainer is fairly simple; once you have a Git repository configured and a Portainer host ready, the process is just these four steps.

flowchart LR
  A[Write docker-compose.yml] --> B[Upload docker-compose.yml<br />to gitlab.com repo];
  B --> C[Create Stack in Portainer];
  C --> D[Deploy docker-compose.yml<br />via Git];

Wrap Up

Using a combination of Debian, Docker, Portainer, and Gitlab, I'm able to create docker-compose.yml files, host and version them for free in Gitlab, and deploy them through Portainer. Hopefully this overview of the technology involved in running containers at home provides some inspiration for your own homelab!