
K3s: Lightweight Kubernetes for Self-Hosters

Infrastructure · 2026-02-08 · k3s · kubernetes · containers · orchestration

You've outgrown a single Docker Compose file. Maybe you have services spread across multiple machines, you want automatic restarts with health checks, or you need rolling updates without downtime. You've heard Kubernetes is the answer, but full K8s is a beast — it wants multiple nodes, etcd clusters, and a week of your life to set up.

K3s is Kubernetes stripped down to what actually matters. Built by Rancher (now SUSE), it's a single binary under 100 MB that runs a fully conformant Kubernetes cluster. It replaces etcd with SQLite (or embedded etcd for HA), bundles Traefik for ingress, and includes a local storage provider. You can have a working cluster in under 5 minutes.

Why K3s Over Full Kubernetes?

| Feature | Full K8s (kubeadm) | K3s |
|---|---|---|
| Installation | Complex, multi-step | Single command |
| Binary size | 300+ MB | ~70 MB |
| RAM usage (idle) | ~1.5 GB | ~512 MB |
| Default datastore | etcd (separate cluster) | SQLite or embedded etcd |
| Ingress controller | None (install yourself) | Traefik (bundled) |
| Load balancer | None (install yourself) | ServiceLB (bundled) |
| Storage | None (install yourself) | local-path (bundled) |
| Cert management | None (install yourself) | Easy to add cert-manager |
| Conformant | Yes | Yes (certified by CNCF) |

K3s is real Kubernetes. Your kubectl commands, Helm charts, and manifests work identically. The difference is operational simplicity — K3s handles the infrastructure plumbing so you can focus on deploying apps.

Why Kubernetes at All? (Docker Compose vs. K3s vs. Nomad)

This is the real question most self-hosters should ask first.

| Capability | Docker Compose | K3s | Nomad |
|---|---|---|---|
| Learning curve | Low | Medium-High | Medium |
| Single-node setup | Excellent | Good | Good |
| Multi-node | Manual | Built-in | Built-in |
| Rolling updates | No | Yes | Yes |
| Health checks + restart | Basic | Advanced | Advanced |
| Service discovery | Docker DNS | CoreDNS | Consul |
| Secret management | Docker secrets / .env | K8s Secrets | Vault |
| Ecosystem | Docker Hub | Massive (Helm, operators) | Smaller |
| Resource overhead | Minimal | ~512 MB RAM | ~256 MB RAM |

When Docker Compose is enough

- Everything runs on one machine and a short outage during updates is acceptable.
- You manage a handful of services and one compose file covers them all.
- You don't need rolling updates, automatic rescheduling, or multi-node scheduling.

When K3s makes sense

- Your services span multiple machines and you want the cluster to place and restart them for you.
- You want rolling updates without downtime and health-checked automatic restarts.
- You want access to the Kubernetes ecosystem (Helm charts, operators), or you're building Kubernetes skills.

When Nomad might be better

- You want multi-node orchestration with a gentler learning curve and lower overhead (roughly 256 MB idle vs. K3s's 512 MB).
- You're already invested in the HashiCorp stack (Consul for service discovery, Vault for secrets).
- You can accept a smaller ecosystem in exchange for simpler operations.

Single-Node Setup

The simplest K3s deployment: one machine running everything.

Installation

curl -sfL https://get.k3s.io | sh -

That's it. One command. K3s is now running as a systemd service with:

- the Kubernetes control plane (API server, scheduler, controller manager) on top of containerd
- Traefik as the ingress controller and ServiceLB as the load balancer
- the local-path storage provisioner and CoreDNS for service discovery
- SQLite as the default datastore

Verify it's working:

sudo k3s kubectl get nodes

You should see your node with status Ready.

Configure kubectl access

K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. To use standard kubectl:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

Now kubectl get nodes works without sudo.
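
That kubeconfig points at https://127.0.0.1:6443, so it only works on the node itself. To run kubectl from another machine, copy the file over and swap in the server's address; 192.168.1.50 below is a placeholder for your node's IP, and fetching the file assumes your user can read it (K3s's --write-kubeconfig-mode flag can relax the default root-only permissions):

scp user@192.168.1.50:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/192.168.1.50/' ~/.kube/config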

Multi-Node Cluster

For high availability or spreading workloads across machines.

On the server (control plane) node

curl -sfL https://get.k3s.io | sh -s - server --cluster-init

Get the join token:

sudo cat /var/lib/rancher/k3s/server/node-token

On each agent (worker) node

curl -sfL https://get.k3s.io | K3S_URL=https://server-ip:6443 K3S_TOKEN=<token> sh -

The agent joins the cluster automatically. K3s handles the overlay network (Flannel by default) so pods on different nodes communicate seamlessly. For true HA, run 3 server nodes with --cluster-init — you can lose one and the cluster keeps running.
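
Additional servers join the existing cluster with --server instead of --cluster-init, using the same token from /var/lib/rancher/k3s/server/node-token:

curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --server https://server-ip:6443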

Deploying Apps with kubectl

Basic deployment

A Kubernetes Deployment defines your app, its container image, and how many replicas to run. A Service exposes it to the network. Here's a minimal example deploying Jellyfin:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
  namespace: media
spec:
  selector:
    app: jellyfin
  ports:
    - port: 8096
      targetPort: 8096

Create the namespace first, then apply the manifest:

kubectl create namespace media
kubectl apply -f jellyfin.yaml
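
To confirm the Deployment rolled out and the Service is up (both are standard kubectl subcommands):

kubectl -n media rollout status deployment/jellyfin
kubectl -n media get pods,svc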

Exposing services with Ingress

K3s bundles Traefik as its ingress controller. Add an Ingress resource to expose a service with a hostname:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
  namespace: media
spec:
  rules:
    - host: jellyfin.home.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096

Point jellyfin.home.lan to your K3s node's IP in DNS and Traefik routes traffic automatically.
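
If you don't run a local DNS server, an /etc/hosts entry on the client machine is enough for testing; 192.168.1.50 is a placeholder for your node's IP:

192.168.1.50 jellyfin.home.lan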

Helm Charts

Helm is the package manager for Kubernetes. Many self-hosted apps distribute official Helm charts that bundle all the Kubernetes manifests, configuration options, and dependencies.

Installing Helm

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Using Helm charts

Example: installing Nextcloud via Helm:

helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update

helm install nextcloud nextcloud/nextcloud \
  --namespace nextcloud \
  --create-namespace \
  --set nextcloud.host=nextcloud.home.lan \
  --set persistence.enabled=true \
  --set persistence.size=50Gi

Helm manages the entire lifecycle — install, upgrade, rollback, uninstall — with a single command. Upgrading Nextcloud becomes:

helm upgrade nextcloud nextcloud/nextcloud --namespace nextcloud
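
Because Helm keeps release history, a bad upgrade is recoverable without redeploying by hand; helm history lists the revisions, and the revision number passed to rollback (1 below) is illustrative:

helm history nextcloud --namespace nextcloud
helm rollback nextcloud 1 --namespace nextcloud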

Persistent Storage

The bundled local-path provisioner stores data on the node's filesystem. For single-node setups, this is fine:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config
  namespace: media
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
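
A claim does nothing until a pod mounts it. Wiring this into the Jellyfin Deployment from earlier means adding a volume that references the claim and a volumeMount inside the container. A sketch of the pod template's spec, assuming /config (the official image's configuration directory) is what you want persisted:

    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: jellyfin-config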

Multi-node storage options

For multi-node clusters where pods might move between nodes, you need storage that follows the pod:

- Longhorn: distributed block storage from Rancher/SUSE, the same people behind K3s; replicates volumes across nodes.
- NFS: a single NFS server plus a provisioner; simple, but the NFS box becomes a single point of failure.
- Rook/Ceph: fully distributed storage; powerful, but heavy for a homelab.
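
Longhorn ships an official Helm chart, so installing it follows the same pattern as the Nextcloud example; the repository URL and chart name below are Longhorn's published ones:

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace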

TLS with cert-manager

cert-manager automates TLS certificate provisioning from Let's Encrypt.

Installation

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

Configure a Let's Encrypt issuer

Create a ClusterIssuer that tells cert-manager how to obtain certificates:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik

Then annotate any Ingress with cert-manager.io/cluster-issuer: letsencrypt-prod and add a tls block. cert-manager requests the certificate, handles HTTP-01 validation, and auto-renews before expiry. You never touch TLS certificates manually.
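
Applied to the Jellyfin Ingress from earlier, that looks like the sketch below. HTTP-01 validation requires Let's Encrypt to reach your cluster over port 80, so the hostname must be publicly resolvable; jellyfin.example.com stands in for your real domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
  namespace: media
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - jellyfin.example.com
      secretName: jellyfin-tls
  rules:
    - host: jellyfin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096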

The Honest Trade-offs

K3s is great if:

- you run services across multiple machines and want scheduling, rescheduling, and rolling updates handled for you
- you want real, CNCF-certified Kubernetes, so your kubectl skills, manifests, and Helm charts transfer directly
- you're building Kubernetes skills on hardware you own

K3s is not ideal if:

- everything fits comfortably on one machine with a few services; Docker Compose is simpler
- you can't spare roughly 512 MB of RAM for the orchestrator itself
- you don't want to learn Kubernetes concepts (Deployments, Services, Ingress) just to run a container

Bottom line: K3s brings the power of Kubernetes to self-hosters without the pain of kubeadm, etcd clusters, and manual component installation. If you're running a single machine with a few services, Docker Compose is simpler and you should stick with it. But if you're managing multiple machines, want zero-downtime updates, or are building Kubernetes skills, K3s is the best way to run Kubernetes at home. The bundled Traefik, storage provisioner, and ServiceLB mean you get a functional cluster from a single install command.

Resources

- K3s docs: https://docs.k3s.io/
- Helm docs: https://helm.sh/docs/
- cert-manager docs: https://cert-manager.io/docs/
- Longhorn docs: https://longhorn.io/docs/