Downsizing My Homelab

My homelab has gotten out of hand. What started as a single server has somehow sprawled into four separate machines, each humming away in my basement, each with its own quirks, update schedule, and maintenance needs. It’s time to consolidate.

The Current Mess

Right now I’m running:

  • UnRAID Server: 4x8TB drives (two for parity), hosting various Docker containers like Paperless-ngx
  • Proxmox Machine: Pretty much just running an Immich instance in a VM
  • Synology NAS: 47TB of storage, currently just serving as network storage for the Immich instance
  • ZimaBoard: Dedicated to Home Assistant with a Zigbee USB stick

This setup evolved organically. Need photo backup? Spin up Immich on Proxmox. Need more storage? Point it at the Synology. Want reliable home automation? Dedicated hardware seems smart. But now I’m managing four different systems, four different update cycles, and way more complexity than necessary.

The Goal

I want to get down to two machines:

  • ZimaBoard: Stays as-is running Home Assistant (because I want my lights to work even when I’m screwing around with the main server)
  • Consolidated Proxmox Server: Everything else

The main driver here isn’t power savings or cost – it’s reducing complexity and creating a better platform for personal development projects. I want to spend time building applications, not managing infrastructure.

The Plan

The UnRAID machine is the beefiest system I have – AMD Ryzen 7 5700G with 32GB RAM (soon to be 64GB). It’ll become the new Proxmox host. The current Proxmox machine has just a 1TB SSD, so it’s not useful for much beyond its current role.

Here’s what needs to happen:

  1. Back up everything critical to Backblaze B2 - mainly the 3TB of family photos in Immich and the documents in Paperless.

  2. Test Proxmox in a VM on UnRAID - Before nuking UnRAID, I’ll run Proxmox as a VM to test out configurations, learn the platform better, and start migrating services.

  3. Consolidate storage - Move all drives into the UnRAID machine using an HBA card I already have. The Synology drives will form one ZFS pool, the UnRAID drives another.

  4. Migrate services gradually - Move Immich, Paperless, and other services to the new Proxmox instance while keeping the old ones running.

  5. Convert UnRAID machine to bare metal Proxmox - Once everything is tested and working, nuke UnRAID and install Proxmox directly.
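For step 1 I'll mostly drive the push through Synology Cloud Sync, but the equivalent CLI version is a useful reference if I ever need to re-run it by hand. This is a sketch assuming a configured rclone remote named "b2"; the bucket and source paths are placeholders, not my real layout:

```shell
# Hedged sketch: remote name "b2", bucket "homelab-backup", and the
# source paths are all placeholders for my actual config.
rclone sync /mnt/user/photos b2:homelab-backup/immich \
    --transfers 8 --checksum --progress

rclone sync /mnt/user/paperless b2:homelab-backup/paperless \
    --transfers 8 --checksum --progress
```

The `--checksum` flag trades some API calls for confidence that what landed in B2 actually matches the source, which matters more than speed for a last-line-of-defense backup.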

The Hardware

The future Proxmox server will have:

  • AMD Ryzen 7 5700G (8 cores, 16 threads)
  • 64GB DDR4 RAM (upgrading from 32GB)
  • ~60TB raw storage across multiple drives
  • LSI HBA card in IT mode for drive passthrough
  • New PSU with enough juice for all the drives
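Once all the drives are hanging off the HBA, creating the two pools from step 3 might look roughly like this. The device IDs and the raidz1 topology are illustrative only; I haven't settled on a final layout:

```shell
# Hypothetical layout -- the /dev/disk/by-id paths are placeholders.
# Former Synology drives as one pool:
zpool create -o ashift=12 tank-syno raidz1 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# The old UnRAID 4x8TB drives as a second pool:
zpool create -o ashift=12 tank-unraid raidz1 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
    /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8
```

Using `/dev/disk/by-id` paths instead of `/dev/sdX` keeps the pools stable if the HBA enumerates drives in a different order after a reboot.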

Expected Pain Points

I’m expecting downtime. This isn’t going to be a seamless migration – services will be offline, things will break, and I’ll probably discover dependencies I didn’t know existed.

The big concerns:

  • Data migration: Moving tens of terabytes of data between three different storage formats (UnRAID array, Synology SHR, ZFS pools)
  • RAM constraints: 32GB might be tight during the migration when running both UnRAID and Proxmox VMs
  • Service configuration: Preserving Immich face recognition data, Paperless documents and tags
  • Network reconfiguration: Moving from UnRAID’s Docker networking to Proxmox VMs and LXC containers
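On that last point: instead of Docker's user-defined networks, Proxmox attaches each VM or LXC to a Linux bridge, so every migrated service gets its own network identity. As a sketch, creating an LXC for one of the services might look like this (the VMID, template version, and storage names are assumptions about my eventual setup, not settled choices):

```shell
# Unprivileged LXC on the default vmbr0 bridge with DHCP.
# VMID 101, the template filename, and storage names are placeholders.
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname paperless \
    --unprivileged 1 \
    --cores 2 --memory 2048 \
    --rootfs local-zfs:16 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```

The upside over Docker's NAT-by-default model is that each service shows up as a first-class host on the LAN; the downside is that inter-service wiring I got for free from Docker networks has to be redone with real DNS or static leases.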

Why Proxmox?

I’ve been comfortable with UnRAID for years, but Proxmox offers better flexibility for what I want to do:

  • A more mature KVM/QEMU management layer than UnRAID's built-in VM manager
  • LXC containers for lightweight services
  • Native ZFS support for data integrity
  • Better resource isolation between services
  • Free and open source

Plus, this is partly about learning. Proxmox is more common in professional environments, and the skills transfer better than UnRAID-specific knowledge.

Next Steps

First priority is getting those backups running. I’ve set up Synology Cloud Sync to push everything to Backblaze B2. At my upload speeds, it’ll take about a week for the Immich photos alone. While that’s running, I’ll start planning the service migration in detail.
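That "about a week" figure holds up to back-of-the-envelope math. Assuming roughly 40 Mbit/s of sustained upload (an assumption; my real rate fluctuates), the arithmetic works out like this:

```shell
# 3 TB of photos over an assumed 40 Mbit/s uplink, running flat out.
BYTES=$((3 * 1000 * 1000 * 1000 * 1000))   # 3 TB, decimal
BITS=$((BYTES * 8))
RATE=$((40 * 1000 * 1000))                  # 40 Mbit/s in bits per second
SECS=$((BITS / RATE))
DAYS=$((SECS / 86400))
echo "~${DAYS} days of continuous uploading"
```

And that's the best case: Cloud Sync won't saturate the link around the clock, so a week is optimistic.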

The next post will cover the backup strategy and the reality of pushing 3TB to the cloud on a residential internet connection. Spoiler: it’s going exactly as slowly as you’d expect.