
Docker Compose for Self-Hosting: The Complete Guide

Infrastructure 2026-02-09 docker docker-compose containers infrastructure linux

Every self-hosting guide you read will include a docker-compose.yml file. Every one. Whether you're setting up Nextcloud, Vaultwarden, Jellyfin, or a monitoring stack, Docker Compose is the lingua franca of self-hosted infrastructure. If you learn one tool before starting your self-hosting journey, make it this one.

This guide is the foundation that the rest of this site's guides build on. We'll go from zero to confidently managing multi-service stacks with networking, persistent storage, and automatic updates.

Why Docker Compose?

Before containers, setting up a self-hosted application meant installing dependencies directly onto your server. PHP 7.4 for Nextcloud, Java 17 for Minecraft, Node.js 20 for Gitea — and heaven help you if two applications needed conflicting versions. Updates were nerve-wracking, rollbacks were manual, and reproducing your setup on a new server meant hours of work.

Docker Compose solves all of this:

Reproducibility — Your entire stack is defined in a single YAML file. Move to a new server? Copy the file, run one command, and everything is back. No more "works on my machine" debugging.

Isolation — Each service runs in its own container with its own dependencies. Nextcloud's PHP doesn't conflict with anything else. A misbehaving service can't take down its neighbors.

Declarative configuration — You describe the desired state ("I want Nextcloud, a database, and Redis, connected like this") and Compose figures out how to make it happen. No imperative scripting.

Easy updates — Pull a new image, recreate the container, done. Your data persists in volumes. Rolling back is just pointing to the previous image tag.

Community standard — When a project provides a self-hosting guide, they provide a docker-compose.yml. This means you can copy-paste configurations from official docs, GitHub repos, and community wikis and have them running in minutes.

Prerequisites

Install Docker

On Debian or Ubuntu, the official Docker install is a one-liner:

curl -fsSL https://get.docker.com | sh

This installs Docker Engine and the Docker Compose plugin (the docker compose subcommand). You no longer need to install docker-compose as a separate binary — that's the old way.

After installation, add your user to the docker group so you don't need sudo for every command:

sudo usermod -aG docker $USER

Log out and back in for this to take effect.
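
If you don't want to log out, newgrp docker opens a subshell with the new group membership applied immediately:

newgrp docker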

On Fedora, add Docker's repository first, then install with dnf:

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl enable --now docker

Verify the installation

docker --version
docker compose version

You should see Docker 24+ and Compose v2+. If docker compose (with a space, not a hyphen) doesn't work, you have the old standalone version — upgrade.

What you should already know

Basic Linux command-line comfort: navigating directories, editing files, and connecting to your server over SSH. That's it. You don't need to understand container internals, cgroups, or namespaces. Docker Compose is a practical tool, and you can learn the theory later.

Your First Compose File

Let's start with the simplest possible example. Create a directory and a compose file:

mkdir -p ~/compose/hello && cd ~/compose/hello

Create a file called docker-compose.yml (or compose.yml — both work):

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"

That's a complete compose file. Five lines of configuration. Now start it:

docker compose up -d

The -d flag runs containers in the background (detached mode). Visit http://your-server-ip:8080 and you'll see the Nginx welcome page.

Let's break down what happened: Compose read the file, created a default network named after the project (the directory name, so hello_default), pulled the nginx:alpine image if it wasn't already cached, and started a container with host port 8080 forwarded to container port 80.

To stop and remove everything:

docker compose down

That's the core workflow: up -d to start, down to stop. Everything else builds on this.

Docker Compose Essentials

Here's a more complete compose file that demonstrates all the fundamental building blocks:

services:
  app:
    image: myapp:latest
    container_name: myapp
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - app_data:/data
      - ./config:/app/config
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/myapp
      REDIS_URL: redis://cache:6379
    networks:
      - frontend
      - backend
    depends_on:
      - db
      - cache

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    networks:
      - backend

  cache:
    image: redis:7-alpine
    restart: unless-stopped
    networks:
      - backend

volumes:
  app_data:
  db_data:

networks:
  frontend:
  backend:

Here's how those services are connected:

[Diagram: the host forwards port 3000 to the app container (myapp:latest), which sits on both the frontend and backend networks. The db (postgres:16-alpine) and cache (redis:7-alpine) services sit only on the backend network, with the app_data and db_data volumes attached.]

Let's go through each section.

Services

A service is a container definition. Each service gets a name (like app, db, cache) that other services use to reach it over the network. These names become DNS hostnames — your app can connect to the database at db:5432 without knowing any IP addresses.

Ports

ports:
  - "3000:3000"       # host:container
  - "127.0.0.1:8080:80"  # only listen on localhost
  - "9090:9090/udp"   # UDP port

The format is host_port:container_port. If you're running a reverse proxy (and you should be), bind to localhost only (127.0.0.1:port:port) so the service isn't directly accessible from the internet.

Volumes

Volumes are how your data survives container restarts, updates, and rebuilds. Without volumes, everything inside a container is ephemeral: remove or recreate the container and its data vanishes.

volumes:
  - db_data:/var/lib/postgresql/data   # named volume
  - ./config:/app/config               # bind mount
  - /host/path:/container/path         # absolute bind mount

More on the difference between named volumes and bind mounts later.

Environment variables

Configuration that changes between deployments belongs in environment variables, not baked into the image:

environment:
  DB_HOST: db
  DB_PORT: 5432
  DEBUG: "false"

Networks

Networks control which services can talk to each other. In the example above, db and cache are only on the backend network — they're unreachable from the internet even if someone bypasses the app service.

Restart policies

restart: unless-stopped   # Recommended for most services

Options:

no: never restart automatically (the default)
on-failure: restart only when the container exits with a non-zero code
always: restart on any exit, and start on boot even if you had manually stopped the container
unless-stopped: like always, but a manually stopped container stays stopped, even across reboots

Use unless-stopped for everything. It survives reboots but respects when you intentionally stop a service.

Real-World Example: Nextcloud + MariaDB + Redis

Here's a production-ready example that you'll actually use — a full Nextcloud stack with a database and cache layer:

services:
  nextcloud:
    image: nextcloud:29
    container_name: nextcloud
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:80"
    volumes:
      - nextcloud_app:/var/www/html
      - /mnt/data/nextcloud:/var/www/html/data
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      REDIS_HOST: redis
      NEXTCLOUD_TRUSTED_DOMAINS: cloud.example.com
      OVERWRITEPROTOCOL: https
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - nextcloud

  db:
    image: mariadb:11
    container_name: nextcloud-db
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - nextcloud

  redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    restart: unless-stopped
    volumes:
      - redis_data:/data
    networks:
      - nextcloud

volumes:
  nextcloud_app:
  db_data:
  redis_data:

networks:
  nextcloud:

And the accompanying .env file:

MYSQL_PASSWORD=change-me-to-something-random
MYSQL_ROOT_PASSWORD=change-me-to-something-else-random

Generate real passwords:

echo "MYSQL_PASSWORD=$(openssl rand -base64 24)" > .env
echo "MYSQL_ROOT_PASSWORD=$(openssl rand -base64 24)" >> .env

Start the stack:

docker compose up -d

That's three interconnected services with persistent storage, health checks, and secure password handling, in about 60 lines of YAML.

Networking

Docker Compose networking is one of its most useful features, and one of the most common sources of confusion.

How it works by default

When you run docker compose up, Compose creates a default network for your project. All services in the same compose file can reach each other by service name. If your compose file has a db service, every other service can connect to it at db:3306 (or whatever port the database listens on internally).

This is DNS-based service discovery, and it works automatically.
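
You can check it from inside any container. A quick probe, assuming the service names from the example above and an image that ships ping:

docker compose exec app ping -c 1 db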

Custom networks

For multi-service setups, explicitly defining networks gives you control over which services can communicate:

services:
  frontend:
    networks:
      - public
      - backend

  api:
    networks:
      - backend
      - database

  db:
    networks:
      - database

networks:
  public:
  backend:
  database:

In this setup, frontend can reach api (they share the backend network) but cannot reach db directly. The api service acts as the only bridge.
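
A quick way to convince yourself of the isolation, assuming the service names above and images that include ping:

docker compose exec frontend ping -c 1 api   # works: shared backend network
docker compose exec frontend ping -c 1 db    # fails: no network in common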

Connecting to a reverse proxy

If you use a separate compose file for your reverse proxy (Traefik, Caddy, Nginx Proxy Manager), your services need to share a network with it. The standard pattern:

First, create an external network:

docker network create proxy

Then reference it in both your reverse proxy's compose file and your service's compose file:

# In your service's compose file
services:
  app:
    networks:
      - proxy
      - internal

  db:
    networks:
      - internal

networks:
  proxy:
    external: true
  internal:

The external: true flag tells Compose to use an existing network instead of creating a new one. This is how services in different compose files communicate.
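
The proxy side mirrors this. A minimal sketch using Caddy as a stand-in; any reverse proxy joins the network the same way:

# In the reverse proxy's compose file
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    networks:
      - proxy

networks:
  proxy:
    external: true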

Exposing ports

A quick mental model:

No ports: entry means the service is reachable only by containers that share a network with it.
"8080:80" makes it reachable from anywhere that can reach your server.
"127.0.0.1:8080:80" makes it reachable only from the host itself, typically via a reverse proxy.

A common mistake is exposing database ports to the host. Don't do this:

# Don't do this unless you specifically need external database access
db:
  ports:
    - "5432:5432"  # Now your database is open to the world

If your database is only used by other containers in the same compose file, skip the ports: section entirely.

Volumes and Data Persistence

This is where self-hosting gets real. Your containers are disposable — your data is not.

Named volumes vs bind mounts

Named volumes are managed by Docker. They live in /var/lib/docker/volumes/ and Docker handles their lifecycle:

volumes:
  - db_data:/var/lib/mysql

# Declared at the top level
volumes:
  db_data:

Bind mounts map a specific host directory into the container:

volumes:
  - /mnt/storage/photos:/data/photos
  - ./config:/app/config

When to use which

            | Named Volumes                             | Bind Mounts
Best for    | Database storage, app internal data       | Config files, large media, shared data
Managed by  | Docker                                    | You
Location    | /var/lib/docker/volumes/                  | Anywhere on host
Backup      | Slightly harder (need to find the volume) | Easy (you know exactly where files are)
Performance | Optimal on Linux                          | Same as native filesystem
Portability | Works identically everywhere              | Depends on host path existing

Rule of thumb: Use named volumes for databases and application internals. Use bind mounts for anything you want to directly access, back up easily, or share between services.

Backup strategies

For named volumes:

# Create a temporary container that mounts the volume and tars it
docker run --rm \
  -v db_data:/source:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/db_data_backup.tar.gz -C /source .
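
Restoring works the same way in reverse. A sketch, assuming the stack is stopped and the tarball created above:

docker run --rm \
  -v db_data:/target \
  -v $(pwd):/backup:ro \
  alpine tar xzf /backup/db_data_backup.tar.gz -C /target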

For bind mounts, just use your regular backup tools (rsync, restic, borgbackup):

rsync -a /mnt/storage/nextcloud/ /backups/nextcloud/

For databases, always use the database's own dump tool rather than copying files:

docker exec my-postgres pg_dump -U myuser mydb > backup.sql
docker exec my-mariadb mysqldump -u root -p"$PASS" --all-databases > backup.sql
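
Restores feed the dump back in over stdin (container, user, and database names as in the examples above):

docker exec -i my-postgres psql -U myuser mydb < backup.sql
docker exec -i my-mariadb mysql -u root -p"$PASS" < backup.sql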

Environment Variables and Secrets

Hardcoding passwords in your compose file is a bad habit. Here's how to do it properly.

.env files

Docker Compose automatically reads a .env file in the same directory as your compose file:

# .env
DB_PASSWORD=super-secret-password
[email protected]
APP_VERSION=2.4.1

Reference these in your compose file with ${VARIABLE} syntax:

services:
  app:
    image: myapp:${APP_VERSION}
    environment:
      DB_PASSWORD: ${DB_PASSWORD}
      ADMIN_EMAIL: ${ADMIN_EMAIL}

Important: Add .env to your .gitignore if you keep your compose files in a repository. Committing passwords to git is one of the most common security mistakes.
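
Two quick commands cover it, keeping the file out of git and restricting who can read it:

echo ".env" >> .gitignore
chmod 600 .env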

Multiple .env files

For different environments or to separate secrets from non-secret configuration:

services:
  app:
    env_file:
      - common.env
      - secrets.env

Docker secrets (Swarm mode)

For more security-conscious setups, the secrets: section exposes sensitive values to containers as files under /run/secrets/ instead of passing them through the environment:

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt

Encrypted-at-rest secrets require Docker Swarm mode (docker swarm init); with plain Compose, file-based secrets are simply mounted into the container unencrypted. Not all images support the _FILE suffix convention, either. For most self-hosting setups, .env files are sufficient: just protect them with proper file permissions (chmod 600 .env).

Updating Containers

One of Docker's biggest advantages for self-hosting is how simple updates are.

Manual updates

# Pull the latest images
docker compose pull

# Recreate containers with the new images
docker compose up -d

That's it. Compose detects which images changed and only recreates those containers. Your volumes (data) are preserved. The downtime is typically a few seconds per service.

To update a specific service:

docker compose pull nextcloud
docker compose up -d nextcloud
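
If you run several stacks in separate directories (a layout covered later in this guide), a small loop can update them all. A sketch, assuming everything lives under ~/compose:

for dir in ~/compose/*/; do
  (cd "$dir" && docker compose pull && docker compose up -d)
done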

Pin your image versions

Don't use :latest in production. Pin to specific major or minor versions:

# Bad — you don't know what you'll get
image: nextcloud:latest

# Better — pin to major version, get minor/patch updates
image: nextcloud:29

# Most specific — pin to exact version
image: nextcloud:29.0.3

Pinning to the major version (e.g., nextcloud:29) is usually the right balance. You get security patches automatically but won't be surprised by breaking changes from a major upgrade.
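
To see which image tags a running stack is actually using:

docker compose images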

Automatic updates with Watchtower

Watchtower monitors your running containers and automatically updates them when new images are pushed:

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_SCHEDULE: "0 0 4 * * *"  # 4 AM daily
      WATCHTOWER_NOTIFICATIONS: shoutrrr
      WATCHTOWER_NOTIFICATION_URL: "discord://token@webhookid"

Configuration worth noting:

WATCHTOWER_CLEANUP: removes superseded images after an update so they don't accumulate on disk
WATCHTOWER_SCHEDULE: a six-field cron expression (the first field is seconds); here, 4 AM daily
WATCHTOWER_NOTIFICATIONS / WATCHTOWER_NOTIFICATION_URL: sends a notification through shoutrrr whenever a container is updated

Caution: Watchtower updates everything by default. For critical services (databases, password managers), you may want to exclude them and update manually:

services:
  db:
    labels:
      - "com.centurylinklabs.watchtower.enable=false"

Docker Compose vs the Alternatives

Feature           | Docker Compose          | Kubernetes             | Podman Compose              | LXC/LXD
Complexity        | Low                     | Very high              | Low                         | Medium
Learning curve    | Hours                   | Weeks/months           | Hours                       | Days
Best for          | Single server           | Multi-node clusters    | Rootless containers         | Full OS containers
Scaling           | Manual                  | Automatic              | Manual                      | Manual
High availability | No                      | Yes                    | No                          | Possible
Community support | Massive                 | Massive                | Growing                     | Moderate
Resource overhead | Minimal                 | Significant            | Minimal                     | Low
Container images  | Docker Hub              | Docker Hub / OCI       | Docker Hub / OCI            | OS images
Networking        | Simple                  | Complex, powerful      | Simple                      | Bridge/macvlan
Config format     | YAML                    | YAML (lots of it)      | YAML                        | CLI / YAML
When to use       | 1 server, 1-50 services | Multi-server, needs HA | Security-focused, rootless  | Need full OS in container

For self-hosting on a single server, Docker Compose is the right answer 95% of the time. Kubernetes is overkill unless you're running a multi-node cluster. Podman is worth considering if you want rootless containers (no Docker daemon running as root). LXC is a different paradigm entirely — full OS containers rather than application containers.

Common Patterns

These patterns show up repeatedly in self-hosting compose files. Learn them once and you'll recognize them everywhere.

Health checks

Health checks let Compose (and your monitoring tools) know whether a service is actually working, not just running:

services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

depends_on with conditions

The basic depends_on only waits for a container to start, not to be ready. Combine it with health checks for proper ordering:

services:
  app:
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started

Now the app service won't start until the database is actually accepting connections, not just until the container is running.

Restart policies in practice

# For most services — survives reboots, respects manual stops
restart: unless-stopped

# For critical infrastructure (reverse proxy, DNS)
restart: always

# For one-off tasks or migration containers
restart: "no"

Logging configuration

By default, Docker stores unlimited logs, which will eventually eat your disk. Set limits:

services:
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

This keeps at most 30 MB of logs per service. Apply this to every service, or set it globally in /etc/docker/daemon.json (restart the daemon afterwards with sudo systemctl restart docker; the new default applies only to containers created after the change):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Resource limits

Prevent a runaway container from consuming all your server's resources:

services:
  app:
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 256M

Multi-Service Stacks

As your homelab grows, you'll move from one compose file to several. Here are the common approaches.

Separate directories per stack

The simplest and most common approach:

~/compose/
├── nextcloud/
│   ├── docker-compose.yml
│   └── .env
├── monitoring/
│   ├── docker-compose.yml
│   └── prometheus.yml
├── media/
│   ├── docker-compose.yml
│   └── .env
└── traefik/
    ├── docker-compose.yml
    ├── traefik.yml
    └── acme.json

Each stack is independent. You update, restart, and manage them separately. They communicate through shared external networks (like the proxy network discussed earlier).

Profiles

If you want a single compose file but don't always need every service, profiles let you selectively start groups of services:

services:
  nextcloud:
    profiles: ["cloud"]
    image: nextcloud:29
    # ...

  jellyfin:
    profiles: ["media"]
    image: jellyfin/jellyfin
    # ...

  grafana:
    profiles: ["monitoring"]
    image: grafana/grafana
    # ...

  traefik:
    # No profile — always starts
    image: traefik:v3.3
    # ...
# Start only cloud services
docker compose --profile cloud up -d

# Start cloud and media
docker compose --profile cloud --profile media up -d

# Start everything
docker compose --profile cloud --profile media --profile monitoring up -d

Services without a profile always start. This is useful for shared infrastructure like reverse proxies.

Multiple compose files

Compose can merge multiple files:

docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

The second file overrides or extends the first. This is handy for differences between development and production (different ports, extra logging, debug flags).
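
A minimal sketch of what an override file might contain (service name and values assumed for illustration):

# docker-compose.prod.yml
services:
  app:
    environment:
      DEBUG: "false"     # single-value keys override the base file
    logging:
      options:
        max-size: "10m"

Note that list-valued options like ports are concatenated across files rather than replaced, so overrides work best for scalar settings.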

Useful Commands Reference

Once you're managing several stacks, these commands become second nature:

# Start all services in the background
docker compose up -d

# Stop and remove all containers (preserves volumes)
docker compose down

# Stop and remove everything including volumes (DATA LOSS)
docker compose down -v

# View logs (follow mode)
docker compose logs -f
docker compose logs -f nextcloud   # specific service

# Restart a specific service
docker compose restart nextcloud

# Execute a command in a running container
docker compose exec db psql -U postgres

# See resource usage
docker compose top
docker stats

# Rebuild after changing a Dockerfile
docker compose build
docker compose up -d

# See which images would be updated
docker compose pull --dry-run

The Honest Take

Docker Compose is the best tool for single-server self-hosting. That said, here's what trips people up:

The learning curve is real. YAML formatting, networking concepts, volume management, and image tagging are all things you need to internalize. Your first few stacks will involve frustrating debugging. This is normal. It gets dramatically easier after the first week.

Networking is the number one source of confusion. Containers that can't reach each other, services that are accidentally exposed to the internet, reverse proxy 502 errors because of network mismatches — expect to spend time on this. The debugging command you'll use most is docker compose exec app ping db to verify connectivity.

Docker runs as root. The Docker daemon runs as root, and by default containers can do a lot of damage if compromised. This is a real security consideration. Mitigation: don't expose the Docker socket unnecessarily, use read-only mounts where possible, and keep images updated.

Docker's storage can bloat. Old images, stopped containers, and unused volumes accumulate. Run periodic cleanups:

# Remove unused images, containers, networks, and build cache
docker system prune -a

# See what's using disk space
docker system df

Compose doesn't handle secrets well. The .env file approach is convenient but not truly secure — environment variables show up in docker inspect output and process listings. For most self-hosting this is an acceptable trade-off. For truly sensitive deployments, look into Docker secrets or external secret managers like Vault.

There's no built-in monitoring. Compose doesn't tell you when a service goes down. Pair it with something like Uptime Kuma or a Prometheus/Grafana stack to monitor your services.

When Compose isn't enough: If you need high availability (automatic failover across multiple servers), zero-downtime deployments, or auto-scaling, you've outgrown Compose. The next steps are Docker Swarm (simpler, but limited ecosystem) or Kubernetes (complex, but industry standard). For a single server running a homelab, that day may never come.

Despite these caveats, Docker Compose remains the right choice for the vast majority of self-hosters. It's simple enough to learn in an afternoon, powerful enough to run dozens of services reliably, and supported by virtually every self-hosted project. Master this tool and every other guide on this site becomes a copy-paste exercise.