Proxmox VE: The Self-Hoster's Virtualization Platform
Every self-hoster eventually faces the same question: do I keep stacking services onto one Linux install and pray nothing conflicts, or do I isolate things properly? Maybe you've been burned by a bad apt upgrade that broke three services at once. Maybe you want to test a new distro without wiping your server. Maybe you just want to run Home Assistant, a NAS, and a Docker host on the same physical machine without them stepping on each other.
Proxmox VE (Virtual Environment) is the answer most serious home-labbers land on. It's a free, open-source virtualization platform that turns bare metal into a hypervisor capable of running dozens of virtual machines and containers simultaneously, all managed through a clean web interface. It's the foundation that everything else runs on top of.
What Is Proxmox VE?
Proxmox VE is a Type 1 (bare-metal) hypervisor built on Debian Linux. It combines two virtualization technologies under one roof:
- KVM (Kernel-based Virtual Machine) — Full hardware virtualization. Run complete operating systems (Windows, Linux, BSD, whatever) in isolated virtual machines with their own virtual hardware. Each VM thinks it's running on a real computer.
- LXC (Linux Containers) — Lightweight OS-level containers. Run isolated Linux environments that share the host kernel. Dramatically less overhead than full VMs, but limited to Linux guests.
On top of these, Proxmox provides:
- A web-based management interface that handles everything — creating VMs, managing storage, monitoring resources, configuring networking, taking backups
- Built-in firewall with per-VM and per-node rules
- Clustering support for managing multiple Proxmox nodes as a single system
- ZFS, Ceph, and LVM storage integration out of the box
- A full REST API for automation
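As a taste of the API, here is a hedged sketch of listing cluster nodes with curl using an API token. The host address and token value are placeholders; a real token is created under Datacenter > Permissions > API Tokens.

```shell
# List nodes via the REST API. -k skips TLS verification for the default
# self-signed certificate; replace the host and token with your own values.
curl -k \
  -H "Authorization: PVEAPIToken=root@pam!automation=00000000-0000-0000-0000-000000000000" \
  "https://192.168.1.10:8006/api2/json/nodes"
```

The same endpoint tree backs everything the web UI does, so anything you can click can also be scripted.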
Proxmox is developed by Proxmox Server Solutions GmbH, an Austrian company. The software is fully open source (AGPLv3), and you can use it in production without paying anything. They sell optional support subscriptions, which also give you access to the stable ("enterprise") package repository. The free "no-subscription" repository works perfectly fine — you'll just see a nag dialog at login reminding you that you don't have a subscription. (More on removing that later.)
When to Use Proxmox (and When to Just Use Docker)
This is the first question worth answering honestly, because Proxmox isn't always the right tool.
Use Proxmox when:
- You have a dedicated server or PC that you want to run multiple isolated workloads on
- You need to run different operating systems (Linux + Windows, or multiple Linux distros)
- You want hardware passthrough (GPU to a VM, USB devices to specific VMs)
- You need proper isolation between services — one compromised service can't touch another
- You want snapshots and backups at the VM/container level
- You're running a home lab where you regularly spin up and tear down test environments
Just use Docker on bare-metal Linux when:
- You're running a single-purpose server (just a NAS, just a media server)
- All your services are Linux-based and available as Docker containers
- Your hardware is too weak for virtualization overhead (Raspberry Pi, old thin clients)
- You don't need strong isolation between services
- You want the simplest possible setup
The truth is that many self-hosters run Docker inside a Proxmox VM. Proxmox handles the hardware abstraction, VM isolation, and backups, while Docker handles application deployment within each VM. This layered approach gives you the best of both worlds, at the cost of a small amount of additional overhead and complexity.
Hardware Requirements
Proxmox runs on surprisingly modest hardware, but the experience scales with your investment.
Minimum specs
- CPU: Any 64-bit processor with hardware virtualization support (Intel VT-x or AMD-V). Virtually all CPUs from the last 15 years have this. Check your BIOS/UEFI — it may be disabled by default.
- RAM: 4 GB absolute minimum for Proxmox itself. Realistically, 16 GB is the starting point for running anything useful. Each VM and container needs its own memory allocation.
- Storage: 32 GB for the Proxmox OS, plus whatever you need for VM disks. An SSD is strongly recommended for the boot drive and VM storage — running VMs off spinning rust is painful.
- Network: At least one Ethernet port. Gigabit minimum.
Recommended specs
- CPU: Intel i5/i7 or AMD Ryzen 5/7 with 4+ cores. More cores means more VMs running comfortably in parallel. For heavy workloads, look at used Xeon E5 v3/v4 or EPYC processors.
- RAM: 32-64 GB. This is usually the bottleneck — you'll run out of RAM before CPU. Each VM running a typical Linux server needs 2-4 GB; a Windows VM wants 4-8 GB minimum.
- Storage: NVMe SSD for VM disks (500 GB to 2 TB). Optionally, additional HDDs for bulk storage or backups.
- Network: Two NICs — one for management, one for VM traffic. Helpful but not required.
Good hardware for Proxmox
Used enterprise hardware is the sweet spot for Proxmox home labs:
- Dell PowerEdge R720/R730 — Dual Xeon, tons of RAM slots, hot-swap drive bays. Loud but powerful. Available used for $200-400.
- HP ProLiant DL360/DL380 — Similar to Dell. Good IPMI/iLO for remote management.
- Lenovo ThinkCentre/ThinkStation — Quieter than rack servers. Good for an apartment setup.
- Intel NUC or similar mini PCs — Low power, silent, limited expandability. Great for a small cluster.
- Custom build — Ryzen 5/7 on a decent motherboard with 64 GB of DDR4 gives excellent performance per dollar, with the added benefit of silence.
One important note: if you plan on GPU passthrough (for Plex transcoding, AI workloads, or a gaming VM), verify that your motherboard and CPU support IOMMU groupings properly. Consumer boards are hit-or-miss with this — search for your specific board before buying.
Installation
Proxmox installs as a complete operating system, replacing whatever was on the drive before.
Step 1: Download and flash the ISO
Download the latest Proxmox VE ISO from the official website (proxmox.com/downloads). Flash it to a USB drive:
# On Linux or macOS — replace /dev/sdX with your USB device (check with lsblk first)
sudo dd bs=4M if=proxmox-ve_8.x.iso of=/dev/sdX status=progress
Or use balenaEtcher or Rufus on Windows.
Step 2: Boot and install
Boot from the USB drive. The installer is straightforward:
- Accept the EULA
- Select the target disk for installation. Important: this will wipe the disk. If you have multiple disks, choose the one you want for the OS. You can optionally select ZFS as the root filesystem here (recommended if you have multiple disks for mirroring).
- Set your country, timezone, and keyboard layout
- Set a root password and email address (for alerts)
- Configure the management network interface — assign a static IP, gateway, and DNS
The install takes about 5 minutes. Remove the USB drive and reboot.
Step 3: Access the web UI
From any browser on your network, navigate to:
https://<your-proxmox-ip>:8006
Log in with root and the password you set during installation. You'll see the Proxmox web interface — your command center for everything.
Step 4: Remove the subscription nag (optional)
If you're using the free edition, you'll see a pop-up at each login about not having a subscription. To switch to the no-subscription repository and remove the enterprise repo:
# SSH into your Proxmox host, then:
# Disable the enterprise repository (requires a subscription key)
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Add the no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Update packages
apt update && apt full-upgrade -y
The nag popup at login is cosmetic — it doesn't affect functionality. Some people remove it with a small JavaScript patch; others just click through it. Up to you.
Step 5: Update and reboot
apt update && apt full-upgrade -y
reboot
Always keep Proxmox updated. Kernel updates frequently include important KVM and security patches.
Core Concepts
Before you start creating VMs, understanding a few key concepts saves a lot of confusion later.
VMs vs LXC containers
| Aspect | VM (KVM) | LXC Container |
|---|---|---|
| Isolation | Full — separate kernel, virtual hardware | Partial — shares host kernel |
| Overhead | Moderate (1-2 GB RAM for the OS) | Minimal (50-200 MB) |
| Guest OS | Any (Linux, Windows, BSD, etc.) | Linux only |
| Boot time | 30-60 seconds | 1-3 seconds |
| Performance | Near-native (with VirtIO drivers) | Native |
| Use case | Windows, untrusted workloads, different kernels | Trusted Linux services, lightweight isolation |
| Hardware passthrough | Yes (GPU, USB, PCIe) | Limited |
Rule of thumb: Use LXC for Linux services you trust (Pi-hole, Nginx, databases). Use VMs for Windows, anything that needs a specific kernel, anything you don't fully trust, or anything needing GPU passthrough.
Storage types
Proxmox supports multiple storage backends. The main ones:
- Local (directory) — Simple directory on the Proxmox host's filesystem. Default for a basic install. Fine for getting started.
- LVM — Logical Volume Manager. The default storage for VM disks on a standard install. Good performance, supports snapshots with LVM-thin.
- LVM-Thin — Thin-provisioned LVM. Allows over-provisioning (allocating more total disk space to VMs than physically exists) and supports snapshots. This is what Proxmox uses by default for VM disks.
- ZFS — If you selected ZFS during install or add ZFS pools later. Gives you checksumming, compression, excellent snapshots, and replication. Best option if you have the RAM (ZFS wants 1 GB of ARC cache per TB of storage as a rough guideline).
- NFS/SMB — Mount network shares as storage. Useful for ISO libraries or backup targets on a NAS.
- Ceph — Distributed storage for multi-node clusters. Overkill for most home labs but powerful at scale.
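To make the ZFS RAM guideline concrete, here is a small sketch that turns pool size into a zfs_arc_max setting. The 8 TiB pool is an example figure, and the 1 GiB-per-TiB rule is the rough guideline quoted above, not a hard requirement.

```shell
# Compute an ARC cap from the ~1 GiB of ARC per TiB of pool guideline
POOL_TIB=8                                      # example pool size
ARC_BYTES=$(( POOL_TIB * 1024 * 1024 * 1024 ))  # 1 GiB per TiB, in bytes
# Writing this line to /etc/modprobe.d/zfs.conf (then update-initramfs -u
# and a reboot) caps the ARC so it can't crowd out VM memory
echo "options zfs zfs_arc_max=${ARC_BYTES}"
# -> options zfs zfs_arc_max=8589934592
```

Capping the ARC is worth doing on RAM-constrained hosts, since by default ZFS will happily claim up to half of physical memory.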
Networking
Proxmox creates a Linux bridge (vmbr0) during installation that connects to your physical NIC. VMs and containers attach to this bridge and get IPs from your network's DHCP server, just like physical devices.
For more advanced setups, you can create:
- Additional bridges for isolated networks (e.g., a DMZ for internet-facing services)
- VLANs for traffic segmentation
- Bonded interfaces for redundancy or throughput (requires multiple NICs)
The default single-bridge setup works fine for most home labs. Don't over-engineer your networking on day one.
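For reference, the default bridge lives in /etc/network/interfaces on the Proxmox host. A typical install produces something close to the following (the NIC name eno1 and the addresses are examples from an assumed setup):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Additional bridges are just more vmbrN stanzas; a bridge with no bridge-ports entry becomes an isolated internal network with no path to the physical NIC.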
Creating Your First VM
Let's walk through creating a basic Ubuntu Server VM.
Step 1: Upload an ISO
Download the Ubuntu Server ISO on your local machine, then upload it to Proxmox:
- In the web UI, click your node, then go to local storage > ISO Images
- Click Upload and select the ISO file
Alternatively, download directly on the Proxmox host:
cd /var/lib/vz/template/iso/
wget https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso
Step 2: Create the VM
Click Create VM in the top right of the web UI.
General tab:
- Give it a name (e.g., ubuntu-docker)
- Note the VM ID (100, 101, etc.)
OS tab:
- Select the ISO image you uploaded
- Type: Linux, Version: 6.x - 2.6 Kernel
System tab:
- Machine: q35 (modern virtual chipset)
- BIOS: OVMF (UEFI) for modern OSes. SeaBIOS for legacy compatibility.
- Add EFI Disk if using OVMF
- Check "Qemu Agent" (install qemu-guest-agent inside the VM later for better integration)
Disks tab:
- Bus/Device: SCSI with the VirtIO SCSI single controller (Proxmox's recommended default; VirtIO Block also performs well)
- Disk size: 32 GB is fine for a basic server; adjust based on needs
- Storage: select your storage pool
CPU tab:
- Cores: 2-4 (adjust based on workload)
- Type: "host" gives the VM access to your actual CPU features (best performance). Use "x86-64-v2-AES" or similar for migration compatibility in clusters.
Memory tab:
- 2048-4096 MB for a basic Linux server
- Uncheck "Ballooning Device" if you want guaranteed RAM allocation
Network tab:
- Bridge: vmbr0
- Model: VirtIO (best performance)
Click Finish, then start the VM. Open the console and install Ubuntu normally.
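If you prefer the shell, roughly the same VM can be sketched with a single qm create call. The VM ID, the storage name local-lvm, and the ISO path are assumptions based on a default install; adjust them to your setup.

```shell
# Create a UEFI Ubuntu VM equivalent to the web UI walkthrough above
qm create 100 \
  --name ubuntu-docker \
  --machine q35 --bios ovmf --efidisk0 local-lvm:1 \
  --ostype l26 \
  --cores 2 --cpu host \
  --memory 4096 \
  --scsihw virtio-scsi-single --scsi0 local-lvm:32 \
  --ide2 local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom \
  --net0 virtio,bridge=vmbr0 \
  --agent enabled=1
qm start 100
```

Having the full VM definition as one command makes it trivial to version-control and to recreate the machine on another host.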
Step 3: Install the guest agent
After the OS is installed, install the QEMU guest agent inside the VM:
sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
This lets Proxmox see the VM's IP address, trigger graceful shutdowns, and freeze the filesystem during backups for consistency.
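Once the agent is running inside the guest, you can verify it from the Proxmox host (VM ID 100 is an example):

```shell
# Succeeds silently if the agent inside the VM is responding
qm agent 100 ping
# Ask the guest for its network interfaces and IP addresses via the agent
qm agent 100 network-get-interfaces
```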
LXC Containers
LXC containers are one of Proxmox's killer features. They give you isolated Linux environments with almost zero overhead.
When to use LXC
- DNS servers (Pi-hole, AdGuard Home) — A container uses 50 MB of RAM instead of 1-2 GB for a full VM
- Reverse proxies (Nginx, Traefik, Caddy)
- Databases (PostgreSQL, MariaDB)
- Monitoring (Grafana, Prometheus)
- Web applications (Nextcloud, Gitea, etc.)
Basically, any Linux service that doesn't need a custom kernel, GPU access, or Docker (Docker inside LXC is possible but finicky — just use a VM for Docker).
Creating an LXC container
Download a template
Go to your storage > CT Templates > Templates. Proxmox provides official templates for Debian, Ubuntu, Alpine, Fedora, and more. Download one:
# Or from the command line:
pveam update
pveam available --section system
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
Create the container
Click Create CT in the web UI:
- General: Set hostname, password, and optionally an SSH public key
- Template: Select the template you downloaded
- Disks: Root disk size (8 GB is plenty for a basic service)
- CPU: 1-2 cores
- Memory: 256-512 MB for lightweight services, 1024+ MB for applications
- Network: Bridge vmbr0, DHCP or static IP
That's it. The container starts in seconds. SSH in or use the Proxmox console:
# From the Proxmox host:
pct enter 101
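For repeatable setups, the whole container can also be created from the host shell. The ID, hostname, and storage name local-lvm below are examples; the template is the one downloaded earlier.

```shell
# Create an unprivileged Debian container and start it
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname pihole \
  --cores 1 \
  --memory 512 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 101
```

The --unprivileged 1 flag maps container root to an unprivileged host UID, which is the safer default for network-facing services.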
LXC vs Docker
A common point of confusion: LXC containers and Docker containers are both "containers," but they serve different purposes.
- LXC gives you a full Linux system (init, systemd, package manager, SSH). Think of it as a lightweight VM. You manage it like a regular server.
- Docker gives you a single application in an isolated environment. It's application-level packaging, not system-level virtualization.
Many Proxmox users create an LXC or VM to run Docker inside it, getting the benefits of both: Proxmox-level isolation and backups around the Docker host, and Docker's application management within.
Proxmox vs the Competition
| Feature | Proxmox VE | VMware ESXi | Unraid | TrueNAS SCALE |
|---|---|---|---|---|
| Cost | Free (open source) | Free tier removed; subscription only | $59-$129 (license) | Free (open source) |
| Base OS | Debian Linux | Proprietary (VMkernel) | Slackware Linux | Debian Linux |
| VM support | KVM (excellent) | Excellent | KVM (good) | KVM (basic) |
| Container support | LXC (native) | None (Docker in VMs only) | Docker (excellent) | Docker (native) |
| Storage | Local, ZFS, Ceph, NFS, LVM | VMFS, vSAN, NFS | XFS + parity array | ZFS |
| Web UI | Functional, improving | Polished | Excellent | Good |
| Clustering | Yes, built-in (free) | Yes (requires vCenter, expensive) | No | No |
| GPU passthrough | Yes | Yes | Yes | Limited |
| Backup | Built-in + PBS | Requires vSphere/3rd party | Community plugins | Snapshots only |
| Target use case | Virtualization-first | Enterprise virtualization | NAS + apps + VMs | Storage-first |
| Community | Large, active | Shrinking (home lab) | Large, very active | Large, active |
| Learning curve | Moderate | Moderate-High | Low | Moderate |
VMware ESXi
VMware was the go-to for home labs until Broadcom's acquisition gutted the free tier. The free ESXi license is gone, and licensing has shifted to expensive per-core subscriptions. The home lab community has largely migrated away. Proxmox is the direct beneficiary — it does everything ESXi did for home use, it's fully open source, and it's not at the mercy of a corporate owner's quarterly earnings calls.
Unraid
Unraid is primarily a NAS/Docker platform that happens to support VMs. It's excellent at what it does — the Docker and VM management is arguably more user-friendly than Proxmox's. But virtualization is a secondary feature. If VMs are your primary workload, Proxmox is the better foundation. If storage and Docker apps are primary and VMs are occasional, Unraid may be the better fit.
TrueNAS SCALE
TrueNAS SCALE has KVM support, but it's clearly a storage platform first. VM management is basic compared to Proxmox. If your primary need is ZFS storage with occasional VMs, TrueNAS is fine. If you want a serious virtualization platform with flexible storage options, Proxmox is the tool.
Backup and Snapshots
Proxmox has genuinely good backup tooling built in. This is a major advantage over cobbling together your own backup scripts.
Snapshots
Snapshots capture the state of a VM or container at a point in time. They're instant and stored alongside the VM's disk:
# Take a snapshot from the command line
qm snapshot 100 before-upgrade --description "Before kernel upgrade"
# List snapshots
qm listsnapshot 100
# Rollback to a snapshot
qm rollback 100 before-upgrade
Snapshots are great for quick "save points" before risky changes. They're not backups — they live on the same storage as the VM. If the disk dies, snapshots die with it.
Built-in backups (vzdump)
Proxmox includes vzdump for creating full backups of VMs and containers. You can schedule these from the web UI:
- Go to Datacenter > Backup
- Click Add to create a backup job
- Select which VMs/containers to back up
- Choose a schedule (daily, weekly, etc.)
- Select backup storage (local directory, NFS share, PBS)
- Choose mode:
- Snapshot — Back up while the VM is running (minimal downtime, may have inconsistencies without the guest agent)
- Suspend — Briefly pauses the VM for a consistent backup
- Stop — Shuts down the VM, backs up, restarts (most consistent, has downtime)
Backups are stored as compressed archives (.vma.zst for VMs, .tar.zst for containers). You can restore them on any Proxmox host.
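The same backups can be driven from the shell, which is handy for one-off runs before a risky change. The VM ID, storage name, and restore ID below are examples.

```shell
# Snapshot-mode backup of VM 100 to the "local" storage, zstd-compressed
vzdump 100 --mode snapshot --compress zstd --storage local
# Restore the archive as a new VM with ID 105 (the actual filename will differ)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 105
```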
Proxmox Backup Server (PBS)
For serious backup infrastructure, Proxmox offers a dedicated companion product: Proxmox Backup Server. PBS provides:
- Incremental backups — Only changed blocks are transferred after the first full backup. Dramatically reduces storage and network usage.
- Deduplication — Identical data blocks across all your VMs are stored only once.
- Encryption — Client-side encryption if you're backing up to an untrusted location.
- Verification — PBS can verify backup integrity without doing a full restore.
PBS is free, open source, and installable on a separate machine (or even in a VM on a different Proxmox host — just don't back up to the same machine you're backing up from). For a home lab, a small PC with a large HDD running PBS makes an excellent backup target.
A solid backup strategy:
- PBS on a separate machine for daily incremental backups (keep 7 dailies, 4 weeklies, 3 monthlies)
- Snapshots before any risky changes (manual)
- Offsite copy — rsync PBS backup data to a remote location or cloud storage periodically
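For the offsite leg, PBS has native sync jobs for replicating a datastore to a second PBS instance, which is the cleaner option; when the remote end is plain storage, a scheduled rsync works too. All paths and the hostname below are hypothetical.

```shell
# Mirror the PBS datastore to an offsite machine over SSH; --delete keeps
# the mirror in step with PBS's own pruning of expired backups
rsync -a --delete /mnt/datastore/pbs/ backup@offsite.example.com:/srv/pbs-mirror/
```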
Clustering
Proxmox supports multi-node clustering out of the box, for free. This is something VMware charges thousands for.
A Proxmox cluster gives you:
- Centralized management — Manage all nodes from a single web UI
- Live migration — Move running VMs between nodes with zero downtime (requires shared storage or VM disk replication)
- High availability — If a node goes down, its VMs automatically restart on another node
- Shared storage via Ceph (distributed storage across cluster nodes)
Setting up a basic cluster
# On the first node, create the cluster:
pvecm create my-cluster
# On additional nodes, join the cluster:
pvecm add <first-node-ip>
After joining, all nodes appear in the same web UI under the "Datacenter" view.
For a home lab, clustering is mostly useful if you have 3+ nodes. Two-node clusters require a quorum device (QDevice) to handle split-brain scenarios. Don't bother with clustering if you only have one server — it adds complexity with no benefit.
Live migration
With shared storage (or Proxmox's built-in replication for ZFS):
# Migrate VM 100 to node "pve2" with zero downtime
qm migrate 100 pve2 --online
This is genuinely impressive to see in action — a running VM moves between physical machines with no interruption.
Tips and Best Practices
IOMMU and PCI passthrough
Passing physical hardware (GPUs, USB controllers, NVMe drives) directly to a VM gives native performance. This is how people run gaming VMs, Plex with hardware transcoding, or AI workloads on Proxmox.
Enable IOMMU in your BIOS and bootloader:
# For Intel CPUs, edit /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# For AMD CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# Then update GRUB and reboot:
update-grub
reboot
Load the required kernel modules:
# Add to /etc/modules
vfio
vfio_iommu_type1
vfio_pci
# vfio_virqfd is only needed on kernels older than 6.2; on newer kernels
# (including Proxmox VE 8's) it was merged into vfio and no longer exists
Verify IOMMU is active:
dmesg | grep -e DMAR -e IOMMU
# Should see "IOMMU enabled" or "DMAR: IOMMU enabled"
Then in the VM configuration, add the PCI device under Hardware > Add > PCI Device. Select the device and check "All Functions" and "Primary GPU" if passing through a graphics card.
Passthrough can be finicky — IOMMU group isolation varies by motherboard. Some boards put multiple devices in the same IOMMU group, requiring an ACS override patch. Research your specific hardware.
Resource allocation
- Don't over-allocate CPU cores. Proxmox (KVM) handles CPU scheduling efficiently. Four VMs each with 4 cores on an 8-core host works fine because they're rarely all fully loaded simultaneously.
- Do carefully allocate RAM. Unlike CPU, RAM is a hard commitment (unless you use ballooning, which has trade-offs). If you allocate 32 GB of RAM across VMs but only have 16 GB physical, VMs will crash or the OOM killer will intervene. Be honest about your RAM budget.
- Use VirtIO drivers. VirtIO for disk and network gives near-native performance. For Windows VMs, download the VirtIO driver ISO from Proxmox and load the drivers during Windows setup.
Monitoring
Proxmox's built-in monitoring covers basics (CPU, RAM, disk, network per VM). For deeper visibility:
- Install Prometheus + Grafana in a container or VM for historical metrics
- Use the Proxmox exporter (prometheus-pve-exporter) to pull metrics from the Proxmox API
- Set up email alerts in Datacenter > Options > Email from Address. Proxmox sends notifications for backup failures, disk health issues, and HA events.
Useful CLI commands
# List all VMs and their status
qm list
# Start/stop/reboot a VM
qm start 100
qm stop 100
qm reboot 100
# List all containers and their status
pct list
# Enter a container's shell
pct enter 101
# View cluster status
pvecm status
# View storage usage
pvesm status
# View node resource usage
pvesh get /nodes/$(hostname)/status
Template VMs
Don't install the same OS from scratch every time. Create a base VM, install the OS, configure it how you like (updates, SSH keys, common packages), then convert it to a template:
qm template 100
Templates can be cloned instantly. Need a new Ubuntu VM? Clone the template, adjust resources, boot. Done in seconds instead of 20 minutes.
You can also use cloud-init with template VMs for automatic provisioning (set hostname, IP, SSH keys at clone time):
# Clone template 9000 to new VM 110
qm clone 9000 110 --name my-new-vm --full
qm set 110 --ipconfig0 ip=192.168.1.110/24,gw=192.168.1.1
qm set 110 --ciuser admin --sshkeys ~/.ssh/authorized_keys
qm start 110
The VM boots with the specified IP, user, and SSH keys automatically.
The Honest Take
Proxmox VE is an outstanding platform, but it's worth being straightforward about where it sits.
When Proxmox is overkill
If you have a Raspberry Pi running Pi-hole and Homebridge, you don't need Proxmox. If you have a single mini PC running Docker Compose with a handful of containers, you probably don't need Proxmox. The additional layer of abstraction adds complexity without proportional benefit for simple setups.
Proxmox earns its keep when you have hardware capable of running multiple workloads that benefit from isolation. If you're only running one thing, Proxmox is overhead.
The learning curve is real
Proxmox exposes real Linux system administration. Networking, storage, kernel modules, IOMMU groups — these are not things you can ignore when issues arise. The web UI handles the common cases well, but when something goes wrong, you're debugging at the Linux command line. If you've never administered a Linux server, expect to spend some time learning.
That said, the Proxmox community is one of the most helpful in the home lab world. The official forums and wiki are solid. The subreddit (r/Proxmox) is active. If you can describe your problem clearly, someone has likely solved it.
The subscription question
Proxmox is fully usable without a subscription. The no-subscription repository gets the same updates as the enterprise repository, just with less testing. For home use, this is absolutely fine. The nag popup is annoying but harmless.
If you use Proxmox professionally, buy a subscription. It supports development of genuinely excellent open source software, and you get stable, tested packages.
Compared to just running Docker on bare metal
The "just install Debian and Docker" approach is simpler and has less overhead. But you lose:
- The ability to snapshot and restore your entire server state in seconds
- Clean separation between workloads
- The ability to run non-Linux operating systems
- Centralized web management and monitoring
- Backup infrastructure that works at the VM level
- The ability to migrate workloads between machines
For many self-hosters, that trade-off is worth it. Proxmox is the infrastructure that lets everything else be disposable and recoverable.
Bottom line
Proxmox VE is the best free virtualization platform available, and it's not close. It handles VM and container management with the kind of competence that used to cost thousands in VMware licensing. The learning curve is moderate, the community is excellent, and once you've set it up, you have a rock-solid foundation for every other self-hosted service you'll ever run.
Start with one machine. Create a few LXC containers for lightweight services. Spin up a VM for Docker. Take snapshots before you experiment. Once you're comfortable, you'll wonder how you ever managed a server without it.