
Proxmox VE: The Self-Hoster's Virtualization Platform

Virtualization · 2026-02-09 · proxmox · virtualization · homelab · docker · lxc · vm

Every self-hoster eventually faces the same question: do I keep stacking services onto one Linux install and pray nothing conflicts, or do I isolate things properly? Maybe you've been burned by a bad apt upgrade that broke three services at once. Maybe you want to test a new distro without wiping your server. Maybe you just want to run Home Assistant, a NAS, and a Docker host on the same physical machine without them stepping on each other.

Proxmox VE (Virtual Environment) is the answer most serious home-labbers land on. It's a free, open-source virtualization platform that turns bare metal into a hypervisor capable of running dozens of virtual machines and containers simultaneously, all managed through a clean web interface. It's the foundation that everything else runs on top of.

What Is Proxmox VE?

Proxmox VE is a Type 1 (bare-metal) hypervisor built on Debian Linux. It combines two virtualization technologies under one roof:

  • KVM (Kernel-based Virtual Machine) — full virtual machines with their own kernels and virtual hardware, capable of running any guest OS
  • LXC (Linux Containers) — lightweight system containers that share the host kernel

On top of these, Proxmox provides a web management UI (port 8006), built-in backup and snapshot tooling, flexible storage backends (ZFS, LVM, Ceph, NFS), and free multi-node clustering.

Proxmox is developed by Proxmox Server Solutions GmbH, an Austrian company. The software is fully open source (AGPLv3), and you can use it in production without paying anything. They sell optional support subscriptions, which also give you access to the stable ("enterprise") package repository. The free "no-subscription" repository works perfectly fine — you'll just see a nag dialog at login reminding you that you don't have a subscription. (More on removing that later.)

When to Use Proxmox (and When to Just Use Docker)

This is the first question worth answering honestly, because Proxmox isn't always the right tool.

Use Proxmox when:

  • You want to run multiple isolated workloads (VMs and containers) on one machine
  • You need a Windows VM, GPU passthrough, or a guest with its own kernel
  • You want built-in snapshots, scheduled backups, and easy rollback before risky changes
  • You want to experiment with distros or services without risking the host OS

Just use Docker on bare-metal Linux when:

  • You're running a handful of trusted containerized services on one small machine
  • You don't need VMs, Windows guests, or hardware passthrough
  • Simplicity matters more to you than isolation and whole-machine backups

The truth is that many self-hosters run Docker inside a Proxmox VM. Proxmox handles the hardware abstraction, VM isolation, and backups, while Docker handles application deployment within each VM. This layered approach gives you the best of both worlds, at the cost of a small amount of additional overhead and complexity.

[Diagram: the Proxmox stack — bare-metal hardware (CPU, RAM, storage, NIC) running Proxmox VE (Debian + KVM + LXC), hosting a Docker VM (Nextcloud, Jellyfin, monitoring), a Windows VM with GPU passthrough, and lightweight LXC containers (Pi-hole at ~64 MB RAM, Home Assistant), all managed through the web UI on port 8006.]

Hardware Requirements

Proxmox runs on surprisingly modest hardware, but the experience scales with your investment.

Minimum specs

  • A 64-bit CPU with hardware virtualization extensions (Intel VT-x or AMD-V) — non-negotiable for KVM
  • 8 GB RAM — enough for the host plus a couple of lightweight containers
  • 32 GB of disk for the OS, ideally an SSD

Recommended specs

  • A modern multi-core CPU (4+ cores)
  • 32 GB+ RAM — memory is almost always the first resource you run out of, and ZFS's cache wants its share too
  • SSD or NVMe storage for the OS and VM disks, with a separate drive (or mirror) for bulk data
  • Wired gigabit Ethernet

Good hardware for Proxmox

Used enterprise hardware is the sweet spot for Proxmox home labs: off-lease mini PCs (Dell OptiPlex, Lenovo ThinkCentre, HP EliteDesk) are quiet, cheap, and sip power, while retired rack servers offer ECC RAM and plenty of drive bays if you can tolerate the noise.

One important note: if you plan on GPU passthrough (for Plex transcoding, AI workloads, or a gaming VM), verify that your motherboard and CPU support IOMMU groupings properly. Consumer boards are hit-or-miss with this — search for your specific board before buying.
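Once you have Linux running on the box (even from a live USB), you can inspect how the board splits devices into IOMMU groups. A minimal sketch — it assumes sysfs is mounted and prints a fallback message when IOMMU is off:

```shell
#!/bin/sh
# Print each PCI device with its IOMMU group; devices that share a
# group must be passed through to a VM together.
list_iommu_groups() {
    base=/sys/kernel/iommu_groups
    found=0
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue   # glob matched nothing
        found=1
        group=${dev#"$base"/}       # e.g. "13/devices/0000:01:00.0"
        group=${group%%/*}          # keep just the group number
        printf 'group %s: %s\n' "$group" "$(basename "$dev")"
    done
    [ "$found" -eq 1 ] || echo "No IOMMU groups found (IOMMU disabled or unsupported)"
}

list_iommu_groups
```

Passthrough-friendly boards put the GPU (and its audio function) in a group by themselves; if unrelated devices share the group, you're looking at the ACS override territory discussed later in this article.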

Installation

Proxmox installs as a complete operating system, replacing whatever was on the drive before.

Step 1: Download and flash the ISO

Download the latest Proxmox VE ISO from the official website (proxmox.com/downloads). Flash it to a USB drive:

# On Linux or macOS — replace /dev/sdX with your USB device
# (check with lsblk on Linux or diskutil list on macOS; this erases the drive)
dd bs=4M if=proxmox-ve_8.x.iso of=/dev/sdX status=progress

Or use balenaEtcher or Rufus on Windows.

Step 2: Boot and install

Boot from the USB drive. The installer is straightforward:

  1. Accept the EULA
  2. Select the target disk for installation. Important: this will wipe the disk. If you have multiple disks, choose the one you want for the OS. You can optionally select ZFS as the root filesystem here (recommended if you have multiple disks for mirroring).
  3. Set your country, timezone, and keyboard layout
  4. Set a root password and email address (for alerts)
  5. Configure the management network interface — assign a static IP, gateway, and DNS

The install takes about 5 minutes. Remove the USB drive and reboot.

Step 3: Access the web UI

From any browser on your network, navigate to:

https://<your-proxmox-ip>:8006

Log in with root and the password you set during installation. You'll see the Proxmox web interface — your command center for everything.

Step 4: Remove the subscription nag (optional)

If you're using the free edition, you'll see a pop-up at each login about not having a subscription. To switch to the no-subscription repository and remove the enterprise repo:

# SSH into your Proxmox host, then:

# Disable the enterprise repository (requires a subscription key)
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repository
# ("bookworm" matches Proxmox VE 8.x on Debian 12 — adjust for your release)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

# Update packages
apt update && apt full-upgrade -y

The nag popup at login is cosmetic — it doesn't affect functionality. Some people remove it with a small JavaScript patch; others just click through it. Up to you.

Step 5: Update and reboot

apt update && apt full-upgrade -y
reboot

Always keep Proxmox updated. Kernel updates frequently include important KVM and security patches.

Core Concepts

Before you start creating VMs, understanding a few key concepts saves a lot of confusion later.

VMs vs LXC containers

| Aspect | VM (KVM) | LXC Container |
|---|---|---|
| Isolation | Full — separate kernel, virtual hardware | Partial — shares host kernel |
| Overhead | Moderate (1-2 GB RAM for the OS) | Minimal (50-200 MB) |
| Guest OS | Any (Linux, Windows, BSD, etc.) | Linux only |
| Boot time | 30-60 seconds | 1-3 seconds |
| Performance | Near-native (with VirtIO drivers) | Native |
| Use case | Windows, untrusted workloads, different kernels | Trusted Linux services, lightweight isolation |
| Hardware passthrough | Yes (GPU, USB, PCIe) | Limited |

Rule of thumb: Use LXC for Linux services you trust (Pi-hole, Nginx, databases). Use VMs for Windows, anything that needs a specific kernel, anything you don't fully trust, or anything needing GPU passthrough.

Storage types

Proxmox supports multiple storage backends. The main ones:

  • Directory — plain files on a local disk; simple, and holds ISOs and backups too
  • LVM-thin — thin-provisioned block storage; the default for VM disks on a single-disk install
  • ZFS — snapshots, compression, and mirroring built in; the best choice if you have multiple disks
  • NFS / CIFS — network shares, handy for ISOs and backup targets
  • Ceph — distributed storage for clusters; overkill for a single node

Networking

Proxmox creates a Linux bridge (vmbr0) during installation that connects to your physical NIC. VMs and containers attach to this bridge and get IPs from your network's DHCP server, just like physical devices.
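The default bridge lives in /etc/network/interfaces on the host. A typical installer-generated config looks like this (the NIC name enp3s0 and the addresses are examples — yours will differ):

```
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
```

Note that the physical NIC carries no IP of its own; the bridge owns the address, and guests attach to vmbr0 like ports on a switch.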

For more advanced setups, you can create:

  • Additional bridges for isolated internal networks
  • VLAN-aware bridges to segment traffic (e.g., an IoT VLAN)
  • Bonded NICs for redundancy or extra throughput

The default single-bridge setup works fine for most home labs. Don't over-engineer your networking on day one.

Creating Your First VM

Let's walk through creating a basic Ubuntu Server VM.

Step 1: Upload an ISO

Download the Ubuntu Server ISO on your local machine, then upload it to Proxmox:

  1. In the web UI, click your node, then go to local storage > ISO Images
  2. Click Upload and select the ISO file

Alternatively, download directly on the Proxmox host:

cd /var/lib/vz/template/iso/
wget https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso

Step 2: Create the VM

Click Create VM in the top right of the web UI.

General tab: pick a VM ID (Proxmox suggests the next free one from 100) and a descriptive name.

OS tab: select the Ubuntu ISO you uploaded; leave the guest type as Linux.

System tab: the defaults are fine — but check Qemu Agent, since we'll install the guest agent shortly.

Disks tab: use the VirtIO SCSI controller for best performance; 32 GB is plenty for a base server.

CPU tab: 2 cores to start. Setting Type to "host" gives the best performance on a single node.

Memory tab: 2-4 GB for a basic Ubuntu server.

Network tab: Bridge vmbr0, Model VirtIO (paravirtualized).

Click Finish, then start the VM. Open the console and install Ubuntu normally.
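The same VM can be created from the host shell with `qm create`. A sketch of the wizard's choices as one command — VM ID 100 and the exact ISO filename are assumptions, and the storage names (`local-lvm`, `local`) should match your setup:

```
# Create a 2-core, 4 GB Ubuntu VM with a 32 GB disk on local-lvm
qm create 100 \
  --name ubuntu-server \
  --memory 4096 --cores 2 --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --ide2 local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2' \
  --ostype l26 --agent enabled=1

qm start 100
```

This is handy once you're scripting VM creation; for a first VM the wizard is the friendlier path.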

Step 3: Install the guest agent

After the OS is installed, install the QEMU guest agent inside the VM:

sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

This lets Proxmox see the VM's IP address, trigger graceful shutdowns, and freeze the filesystem during backups for consistency.
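With the agent running inside the guest, you can confirm it from the Proxmox host (VM ID 100 assumed here):

```
# Returns without error when the agent responds
qm agent 100 ping

# Ask the guest for its interfaces — and thus its IP address
qm agent 100 network-get-interfaces
```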

LXC Containers

LXC containers are one of Proxmox's killer features. They give you isolated Linux environments with almost zero overhead.

When to use LXC

  • Pi-hole, DNS, and other tiny network services
  • Reverse proxies (Nginx, Caddy) and internal web apps
  • Databases
  • Home Assistant and similar always-on Linux services

Basically, any Linux service that doesn't need a custom kernel, GPU access, or Docker (Docker inside LXC is possible but finicky — just use a VM for Docker).

Creating an LXC container

Download a template

In the web UI, select a storage that holds container templates (e.g. local), open CT Templates, and click Templates to browse the catalog. Proxmox provides official templates for Debian, Ubuntu, Alpine, Fedora, and more. Download one:

# Or from the command line:
pveam update
pveam available --section system
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

Create the container

Click Create CT in the web UI:

That's it. The container starts in seconds. SSH in or use the Proxmox console:

# From the Proxmox host:
pct enter 101
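Containers, too, can be created from the shell with `pct create`. A sketch using the Debian template downloaded above — the container ID 101, storage name `local-lvm`, and password are placeholders:

```
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname pihole \
  --cores 1 --memory 512 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --password changeme

pct start 101
```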

LXC vs Docker

A common point of confusion: LXC containers and Docker containers are both "containers," but they serve different purposes. LXC runs system containers — a full Linux userland with an init system that behaves like a lightweight VM — while Docker runs application containers: single processes packaged with their dependencies, designed to be ephemeral.

Many Proxmox users create an LXC or VM to run Docker inside it, getting the benefits of both: Proxmox-level isolation and backups around the Docker host, and Docker's application management within.

Proxmox vs the Competition

| Feature | Proxmox VE | VMware ESXi | Unraid | TrueNAS SCALE |
|---|---|---|---|---|
| Cost | Free (open source) | Free tier removed; subscription only | $59-$129 (license) | Free (open source) |
| Base OS | Debian Linux | Proprietary (VMkernel) | Slackware Linux | Debian Linux |
| VM support | KVM (excellent) | Excellent | KVM (good) | KVM (basic) |
| Container support | LXC (native) | None (Docker in VMs only) | Docker (excellent) | Docker (native) |
| Storage | Local, ZFS, Ceph, NFS, LVM | VMFS, vSAN, NFS | XFS + parity array | ZFS |
| Web UI | Functional, improving | Polished | Excellent | Good |
| Clustering | Yes, built-in (free) | Yes (requires vCenter, expensive) | No | No |
| GPU passthrough | Yes | Yes | Yes | Limited |
| Backup | Built-in + PBS | Requires vSphere/3rd party | Community plugins | Snapshots only |
| Target use case | Virtualization-first | Enterprise virtualization | NAS + apps + VMs | Storage-first |
| Community | Large, active | Shrinking (home lab) | Large, very active | Large, active |
| Learning curve | Moderate | Moderate-High | Low | Moderate |

VMware ESXi

VMware was the go-to for home labs until Broadcom's acquisition gutted the free tier. The free ESXi license is gone, and licensing has shifted to expensive per-core subscriptions. The home lab community has largely migrated away. Proxmox is the direct beneficiary — it does everything ESXi did for home use, it's fully open source, and it's not at the mercy of a holding company's quarterly earnings calls.

Unraid

Unraid is primarily a NAS/Docker platform that happens to support VMs. It's excellent at what it does — the Docker and VM management is arguably more user-friendly than Proxmox's. But virtualization is a secondary feature. If VMs are your primary workload, Proxmox is the better foundation. If storage and Docker apps are primary and VMs are occasional, Unraid may be the better fit.

TrueNAS SCALE

TrueNAS SCALE has KVM support, but it's clearly a storage platform first. VM management is basic compared to Proxmox. If your primary need is ZFS storage with occasional VMs, TrueNAS is fine. If you want a serious virtualization platform with flexible storage options, Proxmox is the tool.

Backup and Snapshots

Proxmox has genuinely good backup tooling built in. This is a major advantage over cobbling together your own backup scripts.

Snapshots

Snapshots capture the state of a VM or container at a point in time. They're instant and stored alongside the VM's disk:

# Take a snapshot from the command line
qm snapshot 100 before-upgrade --description "Before kernel upgrade"

# List snapshots
qm listsnapshot 100

# Rollback to a snapshot
qm rollback 100 before-upgrade

Snapshots are great for quick "save points" before risky changes. They're not backups — they live on the same storage as the VM. If the disk dies, snapshots die with it.

Built-in backups (vzdump)

Proxmox includes vzdump for creating full backups of VMs and containers. You can schedule these from the web UI:

  1. Go to Datacenter > Backup
  2. Click Add to create a backup job
  3. Select which VMs/containers to back up
  4. Choose a schedule (daily, weekly, etc.)
  5. Select backup storage (local directory, NFS share, PBS)
  6. Choose mode:
    • Snapshot — Back up while the VM is running (minimal downtime, may have inconsistencies without the guest agent)
    • Suspend — Briefly pauses the VM for a consistent backup
    • Stop — Shuts down the VM, backs up, restarts (most consistent, has downtime)

Backups are stored as compressed archives (.vma.zst for VMs, .tar.zst for containers). You can restore them on any Proxmox host.
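Restoring is symmetric: `qmrestore` for VM archives, `pct restore` for container archives. The paths, timestamps, and IDs below are examples:

```
# Restore a VM backup to VM ID 100
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2026_02_09-03_00_00.vma.zst 100 --storage local-lvm

# Restore a container backup to CT ID 101
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2026_02_09-03_00_00.tar.zst --storage local-lvm
```

Both can also be driven from the web UI via the backup storage's Backups panel.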

Proxmox Backup Server (PBS)

For serious backup infrastructure, Proxmox offers a dedicated companion product: Proxmox Backup Server. PBS provides:

  • Incremental, deduplicated backups — only changed chunks are transferred and stored
  • Compression and optional client-side encryption
  • Scheduled verification of backup integrity
  • Pruning and retention policies (keep N dailies, weeklies, monthlies)
  • Remote sync to a second PBS instance for offsite copies

PBS is free, open source, and installable on a separate machine (or even in a VM on a different Proxmox host — just don't back up to the same machine you're backing up from). For a home lab, a small PC with a large HDD running PBS makes an excellent backup target.

A solid backup strategy:

  1. PBS on a separate machine for daily incremental backups (keep 7 dailies, 4 weeklies, 3 monthlies)
  2. Snapshots before any risky changes (manual)
  3. Offsite copy — rsync PBS backup data to a remote location or cloud storage periodically

Clustering

Proxmox supports multi-node clustering out of the box, for free. This is something VMware charges thousands for.

A Proxmox cluster gives you:

Setting up a basic cluster

# On the first node, create the cluster:
pvecm create my-cluster

# On additional nodes, join the cluster:
pvecm add <first-node-ip>

After joining, all nodes appear in the same web UI under the "Datacenter" view.

For a home lab, clustering is mostly useful if you have 3+ nodes. Two-node clusters require a quorum device (QDevice) to handle split-brain scenarios. Don't bother with clustering if you only have one server — it adds complexity with no benefit.
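If you do run a two-node cluster, the QDevice is a third machine (even a Raspberry Pi) running corosync-qnetd that provides the tie-breaking vote. A sketch of the setup — the qnetd host's IP is a placeholder:

```
# On the QDevice machine (a plain Debian install works):
apt install corosync-qnetd

# On each Proxmox cluster node:
apt install corosync-qdevice

# Then, on one node, register the QDevice:
pvecm qdevice setup 192.168.1.50

# Verify the extra vote appears:
pvecm status
```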

Live migration

With shared storage (or Proxmox's built-in replication for ZFS):

# Migrate VM 100 to node "pve2" with zero downtime
qm migrate 100 pve2 --online

This is genuinely impressive to see in action — a running VM moves between physical machines with no interruption.

Tips and Best Practices

IOMMU and PCI passthrough

Passing physical hardware (GPUs, USB controllers, NVMe drives) directly to a VM gives native performance. This is how people run gaming VMs, Plex with hardware transcoding, or AI workloads on Proxmox.

Enable IOMMU in your BIOS and bootloader:

# For Intel CPUs, edit /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# For AMD CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# Then update GRUB and reboot:
update-grub
reboot

# Note: if you installed with a ZFS root, Proxmox boots via systemd-boot.
# Put the same options in /etc/kernel/cmdline and run
# "proxmox-boot-tool refresh" instead of update-grub.

Load the required kernel modules:

# Add to /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd   # not needed on Proxmox 8 (kernel 6.2+ folded it into vfio)

Verify IOMMU is active:

dmesg | grep -e DMAR -e IOMMU
# Should see "IOMMU enabled" or "DMAR: IOMMU enabled"

Then in the VM configuration, add the PCI device under Hardware > Add > PCI Device. Select the device and check "All Functions" and "Primary GPU" if passing through a graphics card.

Passthrough can be finicky — IOMMU group isolation varies by motherboard. Some boards put multiple devices in the same IOMMU group, requiring an ACS override patch. Research your specific hardware.

Resource allocation

  • RAM is the binding constraint — don't overcommit it heavily, and remember that ZFS's cache (ARC) consumes host RAM too
  • CPU cores can be overcommitted safely; most guests idle most of the time
  • Use VirtIO disk and network devices in VMs for near-native performance
  • Leave a few GB of RAM free for the Proxmox host itself

Monitoring

Proxmox's built-in monitoring covers basics (CPU, RAM, disk, network per VM). For deeper visibility:

  • Proxmox can ship metrics to an external InfluxDB or Graphite server (Datacenter > Metric Server)
  • Grafana dashboards on top of those metrics are a popular pairing
  • The community Prometheus pve-exporter is a common alternative

Useful CLI commands

# List all VMs and their status
qm list

# Start/stop/reboot a VM
qm start 100
qm stop 100
qm reboot 100

# List all containers and their status
pct list

# Enter a container's shell
pct enter 101

# View cluster status
pvecm status

# View storage usage
pvesm status

# View node resource usage
pvesh get /nodes/$(hostname)/status

Template VMs

Don't install the same OS from scratch every time. Create a base VM, install the OS, configure it how you like (updates, SSH keys, common packages), then convert it to a template:

qm template 100

Templates can be cloned instantly. Need a new Ubuntu VM? Clone the template, adjust resources, boot. Done in seconds instead of 20 minutes.

You can also use cloud-init with template VMs for automatic provisioning (set hostname, IP, SSH keys at clone time):

# Clone a cloud-init-enabled template (ID 9000 in this example) to new VM 110
qm clone 9000 110 --name my-new-vm --full
qm set 110 --ipconfig0 ip=192.168.1.110/24,gw=192.168.1.1
qm set 110 --ciuser admin --sshkeys ~/.ssh/authorized_keys
qm start 110

The VM boots with the specified IP, user, and SSH keys automatically.

The Honest Take

Proxmox VE is an outstanding platform, but it's worth being straightforward about where it sits.

When Proxmox is overkill

If you have a Raspberry Pi running Pi-hole and Homebridge, you don't need Proxmox. If you have a single mini PC running Docker Compose with a handful of containers, you probably don't need Proxmox. The additional layer of abstraction adds complexity without proportional benefit for simple setups.

Proxmox earns its keep when you have hardware capable of running multiple workloads that benefit from isolation. If you're only running one thing, Proxmox is overhead.

The learning curve is real

Proxmox exposes real Linux system administration. Networking, storage, kernel modules, IOMMU groups — these are not things you can ignore when issues arise. The web UI handles the common cases well, but when something goes wrong, you're debugging at the Linux command line. If you've never administered a Linux server, expect to spend some time learning.

That said, the Proxmox community is one of the most helpful in the home lab world. The official forums and wiki are solid. The subreddit (r/Proxmox) is active. If you can describe your problem clearly, someone has likely solved it.

The subscription question

Proxmox is fully usable without a subscription. The no-subscription repository gets the same updates as the enterprise repository, just with less testing. For home use, this is absolutely fine. The nag popup is annoying but harmless.

If you use Proxmox professionally, buy a subscription. It supports development of genuinely excellent open source software, and you get stable, tested packages.

Compared to just running Docker on bare metal

The "just install Debian and Docker" approach is simpler and has less overhead. But you lose:

  • Snapshots and whole-machine backups you can restore in one step
  • Hard isolation between workloads — one misbehaving or compromised service can affect everything
  • The ability to run Windows, other distros, or kernel-dependent software
  • The freedom to rebuild or experiment with one guest without touching the rest

For many self-hosters, that trade-off is worth it. Proxmox is the infrastructure that lets everything else be disposable and recoverable.

Bottom line

Proxmox VE is the best free virtualization platform available, and it's not close. It handles VM and container management with the kind of competence that used to cost thousands in VMware licensing. The learning curve is moderate, the community is excellent, and once you've set it up, you have a rock-solid foundation for every other self-hosted service you'll ever run.

Start with one machine. Create a few LXC containers for lightweight services. Spin up a VM for Docker. Take snapshots before you experiment. Once you're comfortable, you'll wonder how you ever managed a server without it.