
ZFS, Proxmox, and the Art of Not Losing Data

Lessons from rebuilding a ZFS pool after a drive scare, and how I structure storage across Proxmox VMs to balance performance, redundancy, and sanity.

I woke up to a SMART warning on one of my NAS drives. Not the way you want to start a Tuesday. What followed was a crash course in how well (or poorly) I'd set up my storage. Spoiler: it was a mix of both.

The storage layout

My homelab runs on Proxmox VE with two main storage tiers: a 150GB SSD for VM boot disks, project repos, and databases that need fast I/O, and a ZFS pool on spinning rust (currently a 16TB + 20TB stripe) for media, backups, and bulk storage. The NAS storage is passed through to the guests that need it via bind mounts.
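In Proxmox, bind mounts are a feature of LXC containers (full VMs would typically use NFS, SMB, or virtiofs instead), so a passthrough looks something like this sketch. The container ID, dataset, and paths here are hypothetical examples:

```shell
# Hypothetical IDs and paths: bind-mount the host dataset mounted at
# /tank/media into LXC container 101 at /mnt/media.
pct set 101 -mp0 /tank/media,mp=/mnt/media

# Confirm the mount point landed in the container config.
pct config 101 | grep mp0
```

The nice part of a bind mount over a network share is that the container sees the host filesystem directly, with no extra daemon in the data path.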

Why ZFS

ZFS gives you checksumming (bitrot protection), snapshots, compression, and send/receive for backups. On a homelab where you're the only sysadmin, these features matter more than raw performance. I'd rather have a filesystem that tells me when data is corrupt than one that silently serves bad bytes.
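Each of those features is a one-liner to use. A sketch with an example pool named `tank` (names are illustrative, not my actual layout):

```shell
# Example pool/dataset names. Enable inline lz4 compression:
zfs set compression=lz4 tank/media

# Take an instant, copy-on-write snapshot:
zfs snapshot tank/media@pre-cleanup

# Walk every block in the pool and verify checksums
# (worth scheduling monthly via cron or a systemd timer):
zpool scrub tank
zpool status tank    # shows scrub progress and any checksum errors
```

A clean scrub is the closest thing you get to proof that your bytes are still the bytes you wrote.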

The rebuild

When that SMART warning fired, I had to decide quickly. The drive wasn't dead yet, but it was dying. I pulled the data off to a temporary drive, replaced the failing disk, and rebuilt the pool. ZFS made the data migration straightforward with send/receive — pipe the dataset to the new pool and you're done. No rsync, no permission headaches.
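The migration amounts to snapshotting the dataset and piping it across. A sketch with hypothetical pool names (old pool `tank`, rebuilt pool `newtank`):

```shell
# Recursive snapshot of the dataset and any children:
zfs snapshot -r tank/media@migrate

# -R on the send side preserves child datasets, properties, and
# snapshots; -F on the receive side forces the target into place.
zfs send -R tank/media@migrate | zfs receive -F newtank/media
```

Because the stream carries properties and permissions along with the data, there is nothing to reconcile afterward, which is exactly the "no rsync, no permission headaches" part.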

Lessons learned

The stripe layout (no redundancy) was a gamble I got away with. If that drive had died outright instead of warning me, I'd have lost 20TB of media. I've since set up nightly snapshot send to an offsite backup. The hardware isn't redundant yet — drives are expensive — but at least the data is recoverable now. The real takeaway: backups aren't optional, even for media you can re-download. Your time has value too.
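The nightly offsite job boils down to an incremental `zfs send` over SSH. A minimal sketch, assuming GNU `date` and example dataset/host names (the naming scheme and remote host are hypothetical):

```shell
#!/bin/sh
# Hypothetical dataset, remote host, and date-based naming scheme.
DATASET=tank/media
REMOTE=backup-host
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)   # GNU date syntax

# Take tonight's snapshot.
zfs snapshot "$DATASET@$TODAY"

# -i sends only the blocks changed since yesterday's snapshot,
# so the nightly transfer is proportional to churn, not pool size.
zfs send -i "$DATASET@$YESTERDAY" "$DATASET@$TODAY" \
    | ssh "$REMOTE" zfs receive "$DATASET"
```

A real version would also prune old snapshots and handle a missing `@$YESTERDAY` on first run; tools like sanoid/syncoid automate exactly this pattern.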

#zfs #proxmox #storage #backup #homelab