K3s: Lightweight Kubernetes for Self-Hosters
You've outgrown a single Docker Compose file. Maybe you have services spread across multiple machines, you want automatic restarts with health checks, or you need rolling updates without downtime. You've heard Kubernetes is the answer, but full K8s is a beast — it wants multiple nodes, etcd clusters, and a week of your life to set up.
K3s is Kubernetes stripped down to what actually matters. Built by Rancher (now SUSE), it's a single binary under 100 MB that runs a fully conformant Kubernetes cluster. It replaces etcd with SQLite (or embedded etcd for HA), bundles Traefik for ingress, and includes a local storage provider. You can have a working cluster in under 5 minutes.
Why K3s Over Full Kubernetes?
| Feature | Full K8s (kubeadm) | K3s |
|---|---|---|
| Installation | Complex, multi-step | Single command |
| Binary size | 300+ MB | ~70 MB |
| RAM usage (idle) | ~1.5 GB | ~512 MB |
| Default datastore | etcd (separate cluster) | SQLite or embedded etcd |
| Ingress controller | None (install yourself) | Traefik (bundled) |
| Load balancer | None (install yourself) | ServiceLB (bundled) |
| Storage | None (install yourself) | Local-path (bundled) |
| Cert management | None (install yourself) | Easy to add cert-manager |
| Conformant | Yes | Yes (certified by CNCF) |
K3s is real Kubernetes. Your kubectl commands, Helm charts, and manifests work identically. The difference is operational simplicity — K3s handles the infrastructure plumbing so you can focus on deploying apps.
Why Kubernetes at All? (Docker Compose vs. K3s vs. Nomad)
This is the real question most self-hosters should ask first.
| Capability | Docker Compose | K3s | Nomad |
|---|---|---|---|
| Learning curve | Low | Medium-High | Medium |
| Single-node setup | Excellent | Good | Good |
| Multi-node | Manual | Built-in | Built-in |
| Rolling updates | No | Yes | Yes |
| Health checks + restart | Basic | Advanced | Advanced |
| Service discovery | Docker DNS | CoreDNS | Consul |
| Secret management | Docker secrets / .env | K8s Secrets | Vault |
| Ecosystem | Docker Hub | Massive (Helm, operators) | Smaller |
| Resource overhead | Minimal | ~512 MB RAM | ~256 MB RAM |
When Docker Compose is enough
- Everything runs on one machine
- You have fewer than 15-20 services
- You don't need rolling updates or automatic failover
- You value simplicity over features
When K3s makes sense
- You want to run services across 2+ machines
- You need rolling deployments (zero-downtime updates)
- You want to use Helm charts (many apps distribute as Helm charts)
- You're interested in learning Kubernetes for career reasons
- You want automatic rescheduling when a node goes down
When Nomad might be better
- You want multi-node orchestration without Kubernetes complexity
- You run a mix of containers and non-container workloads
- You prefer HashiCorp's ecosystem (Consul, Vault)
- You find Kubernetes YAML excessive
Single-Node Setup
The simplest K3s deployment: one machine running everything.
Installation
```bash
curl -sfL https://get.k3s.io | sh -
```
That's it. One command. K3s is now running as a systemd service with:
- API server on port 6443
- Traefik ingress controller
- CoreDNS for service discovery
- Local-path storage provisioner
- ServiceLB for LoadBalancer services
Verify it's working:
```bash
sudo k3s kubectl get nodes
```
You should see your node with status Ready.
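The bundled components run as ordinary pods in the kube-system namespace, so you can watch them come up too (a quick sketch; exact pod names vary slightly between K3s releases):

```bash
# List the system pods K3s starts out of the box. Expect entries for
# coredns, local-path-provisioner, metrics-server, traefik, and the
# svclb-traefik pods that back ServiceLB.
sudo k3s kubectl get pods -n kube-system
```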
Configure kubectl access
K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. To use standard kubectl:
```bash
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
```
Now kubectl get nodes works without sudo.
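To manage the cluster from a separate workstation instead, you can copy the kubeconfig over and point it at the node's address. A sketch, assuming a hypothetical host named k3s-node and that the kubeconfig is readable by your user (by default it is root-only; K3s's --write-kubeconfig-mode 644 flag relaxes that):

```bash
# Fetch the kubeconfig from the K3s node.
scp user@k3s-node:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s-config

# The file points at 127.0.0.1; rewrite it to the node's address.
sed -i 's/127.0.0.1/k3s-node/' ~/.kube/k3s-config

# Use it for this shell session.
export KUBECONFIG=~/.kube/k3s-config
kubectl get nodes
```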
Multi-Node Cluster
For high availability or spreading workloads across machines.
On the server (control plane) node
```bash
curl -sfL https://get.k3s.io | sh -s - server --cluster-init
```
Get the join token:
```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```
On each agent (worker) node
```bash
curl -sfL https://get.k3s.io | K3S_URL=https://server-ip:6443 K3S_TOKEN=<token> sh -
```
The agent joins the cluster automatically. K3s handles the overlay network (Flannel by default), so pods on different nodes communicate seamlessly. For true HA, run three server nodes: start the first with --cluster-init, then join the other two as servers rather than agents. You can lose one server and the cluster keeps running.
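The three-server flow looks like this (server-1 is a placeholder for your first server's address; the token is the one printed above):

```bash
# First server bootstraps the embedded etcd cluster.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Each additional server joins the existing cluster instead of
# initializing a new one (note --server instead of --cluster-init).
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> \
  sh -s - server --server https://server-1:6443
```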
Deploying Apps with kubectl
Basic deployment
A Kubernetes deployment defines your app, its container image, and how many replicas to run. A Service exposes it to the network. Here's a minimal example deploying Jellyfin:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
  namespace: media
spec:
  selector:
    app: jellyfin
  ports:
    - port: 8096
      targetPort: 8096
```
```bash
kubectl create namespace media
kubectl apply -f jellyfin.yaml
```
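A few kubectl commands confirm the rollout (a sketch; the names match the manifests above):

```bash
# Watch the pod come up.
kubectl get pods -n media

# Block until the rollout finishes; fails if the pod never becomes ready.
kubectl rollout status deployment/jellyfin -n media

# Quick smoke test without any Ingress: forward a local port to the
# Service, then browse http://localhost:8096.
kubectl port-forward -n media svc/jellyfin 8096:8096
```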
Exposing services with Ingress
K3s bundles Traefik as its ingress controller. Add an Ingress resource to expose a service with a hostname:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
  namespace: media
spec:
  rules:
    - host: jellyfin.home.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
```
Point jellyfin.home.lan to your K3s node's IP in DNS and Traefik routes traffic automatically.
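You can also test the route before touching DNS at all. curl's --resolve flag pins the hostname to an IP for one request (192.168.1.50 is a placeholder for your node's address):

```bash
# Sends the request to the node with the right Host header,
# so Traefik routes it to the jellyfin Service.
curl --resolve jellyfin.home.lan:80:192.168.1.50 \
  http://jellyfin.home.lan/
```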
Helm Charts
Helm is the package manager for Kubernetes. Many self-hosted apps distribute official Helm charts that bundle all the Kubernetes manifests, configuration options, and dependencies.
Installing Helm
```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
Using Helm charts
Example: installing Nextcloud via Helm:
```bash
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm install nextcloud nextcloud/nextcloud \
  --namespace nextcloud \
  --create-namespace \
  --set nextcloud.host=nextcloud.home.lan \
  --set persistence.enabled=true \
  --set persistence.size=50Gi
```
Helm manages the entire lifecycle — install, upgrade, rollback, uninstall — with a single command. Upgrading Nextcloud becomes:
```bash
helm upgrade nextcloud nextcloud/nextcloud --namespace nextcloud
```
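The rest of the lifecycle is just as terse. Helm keeps a revision history per release, so a bad upgrade is one command away from being undone:

```bash
# Show the release's revision history.
helm history nextcloud --namespace nextcloud

# Roll back to revision 1 if the upgrade misbehaves.
helm rollback nextcloud 1 --namespace nextcloud

# Remove the release entirely (PVCs are typically left behind
# so your data survives an accidental uninstall).
helm uninstall nextcloud --namespace nextcloud
```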
Persistent Storage
The bundled local-path provisioner stores data on the node's filesystem. For single-node setups, this is fine:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config
  namespace: media
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```
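A claim does nothing until a pod mounts it. In the Jellyfin Deployment from earlier, the pod template would reference it like this (/config is the Jellyfin image's config directory):

```yaml
# Inside the Deployment's pod template (spec.template.spec):
containers:
  - name: jellyfin
    image: jellyfin/jellyfin:latest
    volumeMounts:
      - name: config
        mountPath: /config
volumes:
  - name: config
    persistentVolumeClaim:
      claimName: jellyfin-config
```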
Multi-node storage options
For multi-node clusters where pods might move between nodes, you need distributed storage:
- Longhorn (by Rancher/SUSE): the most popular choice for K3s. Distributed block storage with replication, snapshots, and backups. Install via Helm:

  ```bash
  helm repo add longhorn https://charts.longhorn.io
  helm repo update
  helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
  ```

- NFS: simple network storage. Mount an NFS share as a persistent volume. Easy to set up, but the NFS server is a single point of failure.
- Rook-Ceph: enterprise-grade distributed storage. Overkill for most self-hosters but rock-solid.
TLS with cert-manager
cert-manager automates TLS certificate provisioning from Let's Encrypt.
Installation
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```
Configure a Let's Encrypt issuer
Create a ClusterIssuer that tells cert-manager how to obtain certificates:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik
```
Then annotate any Ingress with cert-manager.io/cluster-issuer: letsencrypt-prod and add a tls block. cert-manager requests the certificate, handles HTTP-01 validation, and auto-renews before expiry. You never touch TLS certificates manually.
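Applied to the Jellyfin Ingress from earlier, that looks like the following. Note the hostname must be publicly resolvable for HTTP-01 validation to succeed, so jellyfin.example.com stands in for a real domain here; a .home.lan name will not validate against Let's Encrypt:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
  namespace: media
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - jellyfin.example.com
      secretName: jellyfin-tls
  rules:
    - host: jellyfin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
```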
The Honest Trade-offs
K3s is great if:
- You want Kubernetes without the operational complexity
- You run services across multiple machines
- You want rolling updates and automatic recovery
- You're learning Kubernetes and want a low-friction entry point
- You want to use Helm charts for deployment
K3s is not ideal if:
- Everything runs on one machine and Docker Compose works fine
- You find YAML-heavy configuration frustrating
- You don't want to learn Kubernetes concepts (pods, services, ingress, PVCs)
- Your hardware is very resource-constrained (a Raspberry Pi can run K3s, but you'll feel the 512 MB overhead)
Bottom line: K3s brings the power of Kubernetes to self-hosters without the pain of kubeadm, etcd clusters, and manual component installation. If you're running a single machine with a few services, Docker Compose is simpler and you should stick with it. But if you're managing multiple machines, want zero-downtime updates, or are building Kubernetes skills, K3s is the best way to run Kubernetes at home. The bundled Traefik, storage provisioner, and ServiceLB mean you get a functional cluster from a single install command.
Resources
- K3s documentation
- K3s GitHub
- Longhorn storage
- cert-manager documentation
- Helm documentation
- Awesome K3s — Community resource list for K8s at home