
How to Build a Homelab with Proxmox VE: Run Multiple Servers on One Machine


Homelab · Virtualization · Self-Hosting · KVM · LXC · Docker · 2026


Most homelab setups start the same way: one service per machine, or a tangle of Docker containers on a single Linux box with no isolation between them. Proxmox VE is the tool that lets you consolidate everything onto one physical server and carve it into as many isolated virtual machines and containers as the hardware can support. Run Home Assistant, Frigate, Pi-hole, a Windows VM, a development server, and a Minecraft server all on the same box — each with its own resources, its own network config, and full snapshot and backup support. This guide covers everything from installation to first VM to accessing your Proxmox dashboard remotely with a Localtonet tunnel.

📖 18 min read 🔵 Proxmox VE 9.1 (April 2026) 🖥️ KVM VMs + LXC containers 💾 ZFS storage guide 🌐 Remote dashboard access 🏠 Homelab foundation

What Is Proxmox VE and Why Use It?

Proxmox Virtual Environment is a Type-1 hypervisor — it installs directly on bare metal hardware and runs everything else as virtual machines or containers on top of it. Unlike running VirtualBox on Windows or Docker on Ubuntu, Proxmox is the operating system. There is no host OS underneath it taking up resources. The hardware goes straight to Proxmox, and Proxmox carves it up between guests.

It combines two mature virtualization technologies into a single, unified web interface:

🖥️ KVM virtual machines Full hardware virtualization. Each VM gets its own kernel, CPU allocation, RAM, and disk. Run Windows, Linux, BSD, or any other OS. Complete isolation — a crash in one VM cannot affect others.
📦 LXC containers Lightweight Linux containers that share the host kernel. Start in seconds. Use 50-80% less RAM than an equivalent VM. Feel like a full Linux server over SSH — not like Docker microservices.
🌐 Unified web UI Manage all VMs, containers, storage, networking, and backups from a single browser-based interface on port 8006. No separate management server needed.
💾 ZFS storage Native ZFS support with snapshots, checksums, compression, and thin provisioning. Take a snapshot before a risky update and roll back in seconds if something breaks.
🔄 Snapshots and backups Snapshot any VM or container instantly. Integrate with Proxmox Backup Server for incremental, deduplicated, encrypted backups that run on a schedule.
🆓 Free and open source AGPLv3 licensed. No license fees for the software itself. Optional paid subscriptions provide enterprise repository access and commercial support. Over 1.6 million hosts run Proxmox worldwide.
Proxmox VE 9.1 — latest version (November 2025, patched to 9.1.8 in April 2026)

Version 9.1 is based on Debian 13 (Trixie) with Linux Kernel 6.17. Key additions: OCI image support for LXC containers (pull Docker Hub images as LXC templates), vTPM state stored in qcow2 (enabling snapshots of Windows VMs with vTPM on NFS storage), improved nested virtualization, major SDN Fabric improvements, and initial Intel TDX and AMD SEV confidential computing support. The April 2026 9.1.8 patch added automatic HA rebalancing, which distributes workloads back evenly when a failed node returns to a cluster.

Hardware Requirements

| Component | Minimum | Recommended for homelab | Notes |
|---|---|---|---|
| CPU | 64-bit with VT-x (Intel) or AMD-V (AMD) | 4+ cores, modern generation (Ryzen 5, Core i5+) | Check BIOS: virtualization extensions must be enabled. Intel VT-d / AMD-Vi required for PCIe passthrough. |
| RAM | 2 GB (barely functional) | 16-32 GB | Each VM needs its own RAM allocation. ZFS needs additional RAM for its ARC cache (1 GB per TB of storage, minimum 8 GB system RAM for ZFS). |
| System disk | 32 GB SSD | 120-240 GB SSD (dedicated) | Do NOT use a USB drive for the Proxmox system disk. Proxmox writes heavily to the system disk (logs, corosync) and USB drives wear out and fail within months. |
| VM/data storage | Any extra disk | NVMe SSD for VMs, HDD for bulk storage | Keep VM storage on a separate disk from the Proxmox OS. ZFS on NVMe gives you snapshots and checksums at near-native performance. |
| Network | 1 Gigabit NIC | 2 NICs (management + VM traffic) | Two NICs let you put management traffic and VM traffic on separate bridges, which is cleaner and more secure. Intel NICs have better Linux driver support than Realtek. |

Good homelab hardware options for Proxmox

For a starter homelab, mini PCs work well: Beelink EQ13 (Intel N100, dual NIC, ~$170), Minisforum UN100 series, or any used business desktop (Dell OptiPlex, HP EliteDesk, Lenovo ThinkCentre). For a more powerful build, any Ryzen 5 or Core i5 machine with 32 GB RAM handles a dozen or more VMs comfortably. Old gaming PCs make excellent Proxmox hosts: single-core performance matters more for VM responsiveness than raw core count.
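Before committing to hardware, you can check from any Linux live USB whether the CPU advertises the required virtualization extensions. A minimal sketch (the helper name detect_virt is ours, not a standard tool):

```shell
# Print which hardware virtualization extension a cpuinfo file advertises:
# "intel" (VT-x), "amd" (AMD-V), or "none" (KVM VMs will not run).
detect_virt() {
  if grep -qw vmx "$1"; then
    echo intel
  elif grep -qw svm "$1"; then
    echo amd
  else
    echo none
  fi
}

# On a live system:
detect_virt /proc/cpuinfo
```

Note that a vmx/svm flag only shows the CPU supports virtualization; whether it is actually enabled in the BIOS is a separate check (tools like kvm-ok, or dmesg after boot, report that).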

Installing Proxmox VE

1

Download the ISO and write to USB

Download the Proxmox VE 9.1 ISO from proxmox.com/downloads. Write it to a USB drive using Rufus on Windows (in DD mode, not ISO mode) or dd on Linux/macOS:

Linux / macOS — write USB (replace sdX with your USB device)
# Find your USB device first
lsblk

# Write ISO to USB
sudo dd if=proxmox-ve_9.1-1.iso of=/dev/sdX bs=1M status=progress conv=fsync
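A corrupted download writes to USB just as happily as a good one, so it is worth verifying the ISO's SHA-256 sum against the value published on the download page before running dd. A small sketch (verify_iso is our helper name; the checksum in the usage comment is a placeholder):

```shell
# Compare a file's SHA-256 against the published value; prints OK or MISMATCH
verify_iso() {
  actual=$(sha256sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then echo OK; else echo MISMATCH; fi
}

# Usage -- copy the real checksum from proxmox.com/downloads:
# verify_iso proxmox-ve_9.1-1.iso "<published-sha256-here>"
```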
2

Boot from USB and run the installer

Enter your BIOS (usually Del, F2, or F11 on POST) and set the USB drive as the first boot device. Enable virtualization extensions (VT-x, VT-d on Intel / AMD-V, AMD-Vi on AMD) if they are not already enabled. Boot from USB and select "Install Proxmox VE (Graphical)".

3

Choose your filesystem

The installer asks which filesystem to use for the system disk. Two practical choices:

| Filesystem | Choose if |
|---|---|
| ext4 (default) | You have less than 8 GB RAM, you are new to Proxmox, or you want the simplest setup. Reliable and fast. No snapshots on the system disk itself. |
| ZFS (RAID0 for single disk) | You have 8 GB+ RAM, want checksums and compression on the system disk, or plan to add a second disk later for ZFS mirroring. Better for data integrity. |
4

Set a static IP address

The installer asks for a network configuration. Set a static IP address — do not use DHCP. If the IP changes after a router reboot, you lose access to the web UI. Choose an IP outside your router's DHCP range (e.g. 192.168.1.10 if your DHCP range starts at 192.168.1.100).

5

Complete installation and access the web UI

After installation (~10 minutes), the machine reboots into Proxmox. Remove the USB drive. Open a browser on another machine and navigate to:

Browser
https://YOUR-PROXMOX-IP:8006

Your browser will warn about a self-signed certificate — this is normal. Proceed anyway. Log in as root with the password you set during installation. The realm dropdown should be Linux PAM.

Post-Install: Setting Up the Free Repository

After first login, Proxmox shows a "No valid subscription" warning and has the enterprise repository configured. The enterprise repository requires a paid subscription (from EUR 115/year per CPU). For a homelab, use the free community repository instead.

Proxmox host — SSH or Node Shell in web UI
# Disable the enterprise repository
# (on a fresh 9.x install the enterprise repo may instead be the deb822 file
#  pve-enterprise.sources; add "Enabled: false" to that file to disable it)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the community (no-subscription) repository
# (Proxmox VE 9.x is based on Debian 13, so the suite is trixie, not bookworm)
echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Update package lists and upgrade
apt update && apt full-upgrade -y

To remove the "No valid subscription" popup that appears on every login:

Proxmox host — remove nag popup
sed -Ezi.bak \
  "s/(Ext\.Msg\.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" \
  /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js

systemctl restart pveproxy
The nag popup fix reverts after Proxmox updates

The proxmoxlib.js file is replaced when Proxmox packages update, so the popup returns after an update. Rerun the sed command after each major update, or consider using the community post-install script which handles this automatically.

VMs vs LXC Containers: When to Use Each

This is the most important practical decision in Proxmox. Getting it wrong wastes resources or creates unnecessary complexity.

| Attribute | KVM Virtual Machine | LXC Container |
|---|---|---|
| Kernel | Own kernel, fully isolated from host | Shares host kernel — must be Linux |
| RAM overhead | Higher (full OS in memory) | 50-80% less RAM for same workload |
| Start time | 30-90 seconds (full boot) | 1-3 seconds |
| Guest OS | Any OS: Windows, Linux, BSD, macOS | Linux only |
| Disk I/O | Near-native with VirtIO drivers | Native (no virtualization layer) |
| Isolation | Full hardware isolation | Namespace isolation (shared kernel) |
| Snapshots | Full VM snapshots (ZFS or QCOW2) | Supported on ZFS storage |
| GPU passthrough | Full PCIe passthrough supported | Limited GPU access (no full passthrough) |
| Docker inside | Works normally | Works with privileged container or nesting=1 |

Simple rule: use LXC for Linux services, VMs for everything else

Running Pi-hole, Nginx, a database, Home Assistant, or any standard Linux service? Use an LXC container. It starts instantly, uses a fraction of the RAM, and feels exactly like a real Linux server over SSH. Running Windows, a firewall appliance (OPNsense, pfSense), or anything that needs GPU passthrough? Use a VM. For Docker workloads, many experienced homelab users run Docker inside an LXC container with nesting enabled — this gives you lightweight resource usage while keeping Docker's familiar workflow.
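If you go the Docker-in-LXC route, nesting can be toggled from the host shell as well as the web UI. A sketch, assuming an existing container with ID 105 (the ID is illustrative; the keyctl flag is an extra that some Docker installs need):

```shell
# Enable nesting (and keyctl, which some Docker setups require) on CT 105
pct set 105 --features nesting=1,keyctl=1

# Restart the container so the new features take effect
pct stop 105 && pct start 105
```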

Creating Your First VM

1

Upload an ISO to Proxmox storage

In the web UI: select your node in the left sidebar, click local storage, then ISO Images, then Upload or Download from URL. Download Ubuntu Server 24.04 LTS (or any Linux/Windows ISO you want to install).

2

Create the VM

Click Create VM in the top right. Work through the wizard:

| Setting | Recommended value | Why |
|---|---|---|
| OS → ISO | Select your uploaded ISO | |
| System → Machine | q35 | Modern machine type with PCIe support |
| System → BIOS | OVMF (UEFI) | Required for Windows 11 and modern Linux |
| Disk → Bus | SCSI with VirtIO SCSI controller | Best disk performance on Linux. Use IDE only for very old OSes. |
| CPU → Type | host | Passes all host CPU flags to the VM. Best performance for single-host setups. Use x86-64-v2 if you plan live migration between different CPU generations. |
| Network → Model | VirtIO (paravirtualized) | Best network performance. Avoid e1000 except for old Windows guests. |
3

Start the VM and install the OS

Start the VM and click Console to see its display. Install your OS normally through the console. For Windows guests, you will need to load VirtIO drivers during installation — download the virtio-win.iso from the Fedora project and attach it as a second CD-ROM drive before starting the VM.

4

Install the QEMU guest agent

After OS installation, install the QEMU guest agent inside the VM. This enables Proxmox to show the VM's IP address in the web UI, perform clean graceful shutdowns, and support live migration and consistent snapshots.

Inside the VM — Ubuntu/Debian
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

Then in the Proxmox web UI, go to the VM → Options → QEMU Guest Agent → enable it, and restart the VM.
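Once the agent is enabled and the VM has restarted, you can confirm it is talking to the host from the Proxmox shell. A sketch, assuming VM ID 100 (substitute your own):

```shell
# Ping the guest agent; returns silently (exit status 0) when the agent responds
qm agent 100 ping

# Ask the guest for its network interfaces and IP addresses (JSON output)
qm agent 100 network-get-interfaces
```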

Creating an LXC Container

1

Download an LXC template

In the web UI: select your node, click local storage, then CT Templates, then Templates. Download Ubuntu 24.04, Debian 12, or Alpine. Templates are small archives (~100-200 MB) and download in under a minute.

In Proxmox 9.1, you can also use OCI images directly: pull any Docker Hub image as an LXC template for lightweight application containers.

2

Create the container

Click Create CT. Key settings:

| Setting | Recommended |
|---|---|
| Unprivileged container | Yes (default) — safer, maps root inside container to a non-root user on host |
| Nesting | Enable if you want to run Docker inside this LXC container |
| RAM | 512 MB for a simple service like Pi-hole; 1-2 GB for heavier apps |
| Disk | 8-20 GB for most services (add more for media or databases) |
| Network | Static IP recommended (same reason as the Proxmox host itself) |
3

Start and access

Start the container and click Console. Or SSH in directly using the static IP you configured. The container behaves like a regular Linux server — install packages with apt, run systemd services, and configure everything the normal way.
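Beyond the interactive console, pct can also run one-off commands inside a container without SSH, which is handy for scripting. A sketch, assuming CT ID 101:

```shell
# Run a single command inside CT 101 (everything after -- executes in the container)
pct exec 101 -- apt update

# Open a root shell inside the container, no SSH needed
pct enter 101
```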

Storage: ZFS vs ext4 vs LVM-thin

Storage is where Proxmox's flexibility shows most clearly. You can mix storage types on the same host, using the right option for each use case.

| Storage type | Snapshots | Thin provisioning | RAM requirement | Best for |
|---|---|---|---|---|
| ZFS | ✅ Instant, copy-on-write | ✅ Yes | 8 GB+ recommended | Primary VM storage. Data integrity, compression, deduplication. Excellent on NVMe. |
| LVM-thin | ✅ Yes | ✅ Yes | Low | Good alternative to ZFS when RAM is limited. No checksums. Available on any disk. |
| ext4/dir | ❌ No (backup only) | ❌ No (thick allocation) | None | ISO storage, backups, CT templates. Simplest. No snapshot capability for VMs. |
| Ceph | ✅ Yes | ✅ Yes | High | Multi-node clusters with shared storage. Overkill for a single-node homelab. |

To add a ZFS pool for VM storage on a second disk after installation:

Proxmox host — create ZFS pool on a second disk
# List available disks (identify your second disk, e.g. sdb or nvme1n1)
lsblk

# Create a ZFS pool named "vmdata" on the disk (wipes all data on the disk)
zpool create -f vmdata /dev/sdb

# Verify
zpool status vmdata

Then in the web UI: Datacenter → Storage → Add → ZFS → select the pool. Proxmox automatically makes it available for VM disk images with snapshot support.
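For a pool created manually on the CLI like this, it is also worth checking that compression is on, since lz4 is close to free on modern CPUs and typically saves meaningful space on VM disks. A sketch against the vmdata pool from the previous step:

```shell
# Check the current compression setting on the pool
zfs get compression vmdata

# Enable lz4 compression if it is off (applies only to data written afterwards)
zfs set compression=lz4 vmdata
```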

Community Helper Scripts: One-Line LXC App Installs

The community-scripts/ProxmoxVE repository maintains helper scripts that install popular self-hosted apps directly into LXC containers with a single command run from the Proxmox host shell. This is one of the most useful resources in the Proxmox homelab community.

Proxmox host shell — example: install Home Assistant OS in a VM
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/vm/haos-vm.sh)"
Proxmox host shell — example: install Pi-hole in an LXC container
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pihole.sh)"

The scripts handle container creation, OS installation, and application setup automatically. They prompt you for resource allocation and create the container with recommended settings. Available scripts cover Home Assistant, Pi-hole, Nextcloud, Jellyfin, Immich, AdGuard Home, Vaultwarden, Frigate, Portainer, Grafana, and dozens more.

Always review scripts before running them

Running any script directly from the internet as root carries risk. Before running a community script, open the GitHub URL in your browser and read the script to understand what it does. The community-scripts project is well-maintained and widely used, but the principle of reviewing before running always applies. Also check the project's GitHub issues if you are on Proxmox 9.1, as some scripts are still being updated for 9.1 compatibility after the kernel and networking changes in that release.

Backups: Protecting Your Homelab

Proxmox has built-in backup scheduling. The simplest setup is to back up VMs and containers to local storage on a schedule. Go to Datacenter → Backup → Add, select a storage target, choose your VMs and containers, and set a schedule.

For serious data protection, Proxmox Backup Server (PBS) is the purpose-built companion. PBS runs as a separate installation (free download from proxmox.com) on a second machine, a NAS, or even a VM on another Proxmox host. It provides:

🔄 Incremental backups Only changed blocks are transferred after the first full backup. A 50 GB VM might back up in 2 minutes after the initial run.
🗜️ Deduplication Identical blocks across VMs and backups are stored once. Ten similar Ubuntu VMs take far less space than ten times one VM's size.
🔒 Encryption Client-side encryption before backup data leaves the source node. The backup server never sees plaintext.
🔍 Verification jobs PBS can automatically verify backup integrity on a schedule, so you know backups are actually restorable before you need them.
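Scheduled jobs drive vzdump under the hood, and the same backup can be run on demand from the host shell. A sketch, assuming VM ID 100 and the default local storage:

```shell
# One-off backup of VM 100: snapshot mode (no downtime), zstd compression,
# written to the "local" storage
vzdump 100 --storage local --mode snapshot --compress zstd
```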

Accessing the Proxmox Web UI Remotely

The Proxmox web UI runs on https://YOUR-IP:8006 on your local network. To access it from outside your home — to manage VMs while travelling, check on a running job, or restart a stuck container from your phone — you need a way to reach port 8006 from the internet.

A Localtonet HTTP tunnel creates a stable public HTTPS URL that forwards to your local port 8006. This works even behind CGNAT with no static IP and no router configuration.

1

Install Localtonet on the Proxmox host

Proxmox host shell
# Download the Linux x64 binary from localtonet.com/download
chmod +x localtonet
mv localtonet /usr/local/bin/
localtonet authtoken YOUR_TOKEN
2

Create an HTTP tunnel for port 8006

In the Localtonet dashboard: Protocol = HTTP, Local IP = 127.0.0.1, Local Port = 8006, Subdomain = your choice (e.g. proxmox, which gives you proxmox.localto.net).

3

Install as a system service for auto-start

Proxmox host shell
localtonet --install-service --authtoken YOUR_TOKEN
localtonet --start-service --authtoken YOUR_TOKEN

Open https://proxmox.localto.net from any browser. The Proxmox login page loads exactly as it does on the local network. Log in as root and manage everything normally.

Proxmox uses a self-signed certificate — your browser will warn you

When you access Proxmox via the Localtonet URL, the browser sees the Proxmox self-signed certificate. This is normal and expected. You can safely proceed past the browser warning for your own server. To eliminate the warning permanently, you can configure Proxmox to use a proper TLS certificate from Let's Encrypt, but this requires a domain name and is optional for homelab use.

Essential CLI Commands

| Command | What it does |
|---|---|
| qm list | List all VMs with their status, RAM, and uptime |
| qm start <vmid> | Start a VM by its ID number |
| qm shutdown <vmid> | Gracefully shut down a VM (sends ACPI power button signal) |
| qm snapshot <vmid> snap-name | Take an instant snapshot of a VM (requires ZFS or QCOW2 storage) |
| qm rollback <vmid> snap-name | Roll back a VM to a previous snapshot |
| qm config <vmid> | Show the full hardware configuration of a VM |
| pct list | List all LXC containers with their status |
| pct start <ctid> | Start an LXC container |
| pct enter <ctid> | Get a shell inside a running LXC container (no SSH needed) |
| pct snapshot <ctid> snap-name | Snapshot an LXC container (ZFS storage required) |
| pvesh get /nodes | Query the Proxmox API via CLI — useful for scripting and automation |
| pveversion -v | Show installed versions of all Proxmox packages |
| apt update && apt full-upgrade | Update Proxmox to the latest version in the configured repository |
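The snapshot commands combine into the snapshot-before-risky-change workflow mentioned earlier. A sketch, assuming VM ID 100 on snapshot-capable (ZFS or QCOW2) storage:

```shell
# Take a named snapshot before an in-guest upgrade or config change
qm snapshot 100 pre-upgrade

# ...perform the risky change inside the VM...

# If it goes wrong, roll the whole VM back in seconds
qm rollback 100 pre-upgrade

# List and clean up old snapshots once you are confident
qm listsnapshot 100
qm delsnapshot 100 pre-upgrade
```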

Troubleshooting Common Problems

| Problem | Cause | Fix |
|---|---|---|
| Cannot reach web UI at :8006 after install | Firewall, wrong IP, or network misconfiguration during setup | Log into the Proxmox console directly. Run hostname -I to confirm the IP. Check systemctl status pveproxy. Verify the management interface is connected to your network. |
| "No valid subscription" popup on every login | Enterprise repo configured, no subscription key | Disable enterprise repo, add no-subscription repo, run the proxmoxlib.js sed command shown above. |
| VM won't start: "KVM virtualization is not available" | VT-x/AMD-V not enabled in BIOS, or running Proxmox inside a VM | Reboot into BIOS and enable Intel VT-x (and VT-d for passthrough) or AMD-V. Proxmox is a Type-1 hypervisor and must run on bare metal — it cannot run inside VirtualBox or VMware. |
| Windows VM very slow disk performance | Using IDE disk controller instead of VirtIO SCSI | Inside Windows, install the VirtIO storage driver from the virtio-win ISO, then change the disk bus to SCSI with VirtIO SCSI controller in VM settings. Significant performance improvement. |
| LXC container fails to start after Proxmox update | Known issue with the no-subscription repo: LXC packages sometimes receive updates that break specific configurations | Check journalctl -u pve-guests and pct start <ctid> output. If NFS mounts or specific kernel features broke, check the Proxmox forum for the specific error. Pinning the LXC package version is a temporary workaround. |
| ZFS pool not showing in web UI after adding | Pool created but not added to Proxmox storage | Go to Datacenter → Storage → Add → ZFS and select the pool. Or add via CLI: pvesm add zfspool vmdata --pool vmdata --content images,rootdir |
| Snapshot option greyed out for a VM | VM disk is on a storage type that does not support snapshots (directory, LVM without thin) | Move the VM disk to ZFS or LVM-thin storage. In web UI: VM → Hardware → Hard Disk → Move Disk → select ZFS storage target. |

Frequently Asked Questions

Can I run Proxmox on a used server from eBay?

Yes, and this is one of the most cost-effective homelab setups. Used enterprise servers from eBay (Dell PowerEdge R720, HP ProLiant DL380 Gen9, etc.) offer 128+ GB RAM and multiple CPUs for a fraction of new hardware cost. The trade-off is power consumption: a 1U server can use 200-400 watts at load versus 15-30 watts for a mini PC. For a low-power homelab that runs 24/7, a mini PC or workstation is more economical. For maximum RAM and CPU at the lowest upfront cost, a used rack server is hard to beat.

Do I need a Proxmox subscription for home use?

No. Proxmox VE is fully functional without a subscription. The no-subscription community repository provides updates, bug fixes, and new features — it is slightly behind the enterprise repository and not officially recommended for production use, but it works reliably for most homelab setups. The subscription (starting at EUR 115/year per CPU) provides access to the enterprise-tested repository and commercial support. For a home server, the community repository is the right choice.

How is Proxmox different from just running Docker on Linux?

Docker on Linux gives you containerized applications that share one Linux install. All containers share the same kernel and the same OS. Proxmox gives you full isolation: each VM or LXC container is genuinely separate. A broken service in one container cannot crash another. You can run completely different Linux distributions in different containers. You can run Windows alongside Linux. You get proper snapshot and backup support for the entire system state, not just application data. You can roll back a broken update in 30 seconds. For a single application, Docker is simpler. For a homelab running a dozen services, Proxmox is the more robust and manageable foundation.

Can I run Proxmox on a single disk system?

Yes, though it is not ideal. The Proxmox installer partitions the disk for the OS, and you can use the remaining space for VM storage. On a single disk, you typically partition with ext4 for the OS (LVM) and add a dir-type storage or LVM-thin pool for VMs. You lose snapshot support without ZFS, and losing the disk means losing both the OS and all VMs at once. A common upgrade path is to add a second SSD later for ZFS VM storage while keeping the original disk for the Proxmox OS.

Manage Your Proxmox Homelab from Anywhere

A Localtonet HTTP tunnel gives your Proxmox web UI a stable public HTTPS URL. Start and stop VMs, check on running jobs, and manage your entire homelab from your phone, no matter where you are.

Create Your Free Tunnel →

Localtonet is a secure multi-protocol tunneling and proxy platform designed to expose localhost, devices, private services, and AI agents to the public internet. It supports HTTP/HTTPS tunnels, TCP/UDP forwarding, mobile proxy infrastructure, file server publishing, latency-optimized game connectivity, and developer-ready AI agent endpoint exposure from a single unified control plane.
