My Arr Stack Setup: Proxmox, LXCs, and NFS Workarounds

Documenting my media server architecture to save future-me from debugging blind.

This blog exists for one primary reason: to serve as documentation for future-me. Debugging a broken server configuration months down the line when you have zero memory of what you originally did is an absolute nightmare.

So, for my first post to test out this new blog, I am documenting one of the more “Frankenstein” (but highly functional) parts of my homelab: my media stack.

A full deep-dive into my entire homelab network and architecture is coming later, but for context, everything runs on a single HP EliteDesk SFF. It’s running Proxmox as the hypervisor, and I am working with limited storage: just two drives (a 250GB and a 500GB). The 250GB drive is already full, housing important data that I absolutely do not play around with. The 500GB drive holds my primary production Ubuntu VM, where most of my Docker containers live.

Here is how I set up the media stack, almost broke my production VM, and ended up using NFS and LXCs to fix it.

The Services & The Initial Approach

The media pipeline itself is the standard, battle-tested “Arr” stack. If you aren’t familiar with them, they are phenomenal pieces of software that automate media requests, downloading, and library management:

  • Jellyseerr: The frontend UI for discovering and requesting media.
  • Radarr & Sonarr: The core managers that monitor requests, grab metadata, and organize the files.
  • Prowlarr: The indexer manager that feeds search results to Radarr and Sonarr.
  • qBittorrent: The download client doing the heavy lifting.
  • Bazarr: The subtitle manager.

Why Docker and Bind Mounts?

My initial approach was to run all of these inside my main Ubuntu VM using Docker Compose. For storage, I went with bind mounts instead of Docker’s managed volumes.

Why? Because it makes management and migration incredibly easy. I keep the bind mount data directories in the exact same parent folder as the docker-compose.yml file. If I ever need to migrate these services to a new server, back them up, or nuke the VM, I can literally just grab that single folder, drop it on a new machine, run docker compose up -d, and everything is exactly as I left it.
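As a rough sketch of what that layout looks like (the service names and paths here are illustrative, not my exact compose file):

```yaml
# ~/arr-stack/docker-compose.yml -- illustrative sketch, not the real file.
# Each service keeps its config in a sibling folder, so backing up or
# migrating the whole stack means copying this one parent directory.
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    volumes:
      - ./radarr/config:/config   # bind mount lives next to this file
      - ./media:/data             # shared download/media directory
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    volumes:
      - ./qbittorrent/config:/config
      - ./media:/data
    restart: unless-stopped
```

Because everything is a relative path, `docker compose up -d` from the copied folder recreates the stack byte-for-byte.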

This approach worked flawlessly—until I realized how much storage 4K media actually consumes.

The Storage Dilemma: Protecting the Production VM

But I quickly ran into a massive flaw in the architecture: media files are huge, and I am working with strict storage limits.

My Proxmox node only has two physical drives. The 250GB drive is dedicated entirely to the Proxmox OS and my ISO images (and is basically full). Everything else—all my VMs and containers—lives on a single 500GB drive.

To keep things segmented, my production Ubuntu VM is only allocated a 100GB virtual disk.

The problem is that if I requested a few 4K movies or TV show seasons in Jellyseerr, the automated download pipeline would just start pulling them. Proxmox won’t let the VM spill past its 100GB limit, which is good for the host, but dangerous for the guest: if the downloads fill that 100GB virtual disk, the Ubuntu root filesystem hits 100% usage, services can no longer write their logs and databases, and the VM can be left in a state that won’t come back up cleanly. A crippled main VM means every single service running on it goes offline, and restoring everything from backups is a massive headache I do not have time for.
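As a belt-and-suspenders option (not something from my actual setup; the threshold, port, and API endpoint below are assumptions), a small cron-able script could pause qBittorrent before the disk fills:

```shell
#!/bin/sh
# Hypothetical safety valve: pause all qBittorrent downloads once the
# media disk passes a usage threshold. Port 8080 and a localhost auth
# bypass are assumptions -- adjust for your instance.
THRESHOLD=90
MEDIA_DIR=/mnt/media

disk_usage_pct() {
    # Print the "Use%" column for the filesystem holding $1, digits only.
    df -P "$1" 2>/dev/null | awk 'NR==2 { gsub(/%/, "", $5); print $5 }'
}

usage=$(disk_usage_pct "$MEDIA_DIR")
if [ -n "$usage" ] && [ "$usage" -ge "$THRESHOLD" ]; then
    # qBittorrent Web API v2: pause everything.
    curl -fsS -X POST "http://localhost:8080/api/v2/torrents/pause" -d 'hashes=all'
fi
```

Dropped into a cron entry or systemd timer, something like this buys time even when a request queue goes wild.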

To isolate the risk, I decided to separate the OS from the data.

In Proxmox, I created a new virtual disk (using the remaining space on the 500GB drive) and attached it to the Ubuntu VM specifically for media storage. The logic is simple: even if Jellyseerr goes crazy and completely fills the disk, only that specific media virtual disk runs out of space. The main 100GB Ubuntu OS drive remains perfectly safe and operational. If things go terribly wrong, I can just delete the media disk and start over without affecting the main server.

For future reference, after attaching the virtual disk in the Proxmox GUI, here is how I initialized and mounted it inside the Ubuntu VM:

# Find the new disk (usually /dev/sdb or /dev/vdb)
lsblk

# Format it to ext4 (double-check the device name against lsblk first --
# mkfs will happily wipe the wrong disk)
sudo mkfs.ext4 /dev/sdb

# Create the mount point
sudo mkdir -p /mnt/media

# Add it to fstab so it mounts automatically on boot
# (Always use the UUID, get it with 'sudo blkid')
echo "UUID=your-uuid-here /mnt/media ext4 defaults 0 2" | sudo tee -a /etc/fstab

# Mount it
sudo mount -a
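A quick sanity check I like to run after `mount -a` (my own habit, not a required step): `findmnt` exits non-zero when nothing is mounted at the target, so it also works inside scripts.

```shell
# Confirm the new disk is actually mounted where expected.
if findmnt --mountpoint /mnt/media >/dev/null 2>&1; then
    df -h /mnt/media   # reported size should match the new virtual disk
else
    echo "/mnt/media is not mounted - re-check /etc/fstab and dmesg"
fi
```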

The 4K Transcoding Problem: Why I Pivoted to an LXC

With the storage isolated and safe, the downloading stack was running beautifully. Then I tried to actually watch a 4K movie.

Within seconds, I heard my server’s fans ramping up like a jet engine, and playback was a stuttering, sluggish mess. The server was falling back to software transcoding, meaning the CPU was trying to brute-force the video stream.

My i7-8700 has an integrated GPU with Intel Quick Sync, which is incredible for hardware transcoding. So why wasn’t I using it?

Here is the reality of Proxmox: passing an iGPU into a full virtual machine is a massive pain in the ass. It often requires messing with GRUB boot parameters, IOMMU groups, and vfio-pci drivers. Worse, if you pass the iGPU to a VM, you lock that hardware to that single machine, preventing other VMs or the host from using it.

Since I am already using Proxmox, there is a much smarter workaround: Linux Containers (LXC).

Unlike full VMs, LXCs share the host’s kernel. Passing a GPU render node into an LXC is incredibly simple. Instead of running Jellyfin in a Docker container inside the main Ubuntu VM and fighting PCIe passthrough, I spun up a dedicated Jellyfin LXC.

To give it access to the iGPU for hardware transcoding, I just had to edit the LXC config file (/etc/pve/lxc/VMID.conf) directly on the Proxmox host:

# Give the LXC access to the Intel Quick Sync render node
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

After saving the file and rebooting the LXC, I enabled Intel QuickSync in the Jellyfin dashboard. Hardware acceleration engaged immediately. The 4K streams now transcode smoothly, and the CPU usage remains at normal baseline levels instead of maxing out.
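A quick way to sanity-check the passthrough from inside the LXC (the helper function is my own sketch, not something Jellyfin provides):

```shell
# Run inside the LXC: confirm the render node actually came through.
has_render_node() {
    # Succeeds if the given directory contains a renderD128 entry.
    ls "$1" 2>/dev/null | grep -q '^renderD128$'
}

if has_render_node /dev/dri; then
    echo "renderD128 present - Quick Sync should be usable"
else
    echo "renderD128 missing - re-check /etc/pve/lxc/VMID.conf on the host"
fi
```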

Bridging the Gap with NFS

So, to recap the architecture at this point:

  1. The media is being downloaded by Docker containers inside the Ubuntu VM, and stored on a dedicated virtual disk mounted at /mnt/media.
  2. The media player (Jellyfin) is running inside an isolated LXC container so it can access the Intel Quick Sync hardware.

The final hurdle: How does the Jellyfin LXC access the media sitting on the VM’s isolated virtual disk?

The answer is an NFS (Network File System) share.

I set up an NFS server on the Ubuntu VM to export the /mnt/media directory over my internal virtual network, and then mounted that share inside the Jellyfin LXC as a client.

On the Ubuntu VM (NFS Server): First, I installed the NFS kernel server:

sudo apt update && sudo apt install nfs-kernel-server

Then, I edited the /etc/exports file to share the media drive with the local Proxmox subnet:

/mnt/media    192.168.1.0/24(rw,sync,no_subtree_check)

After saving, run sudo exportfs -a and sudo systemctl restart nfs-kernel-server to apply.

On the Jellyfin LXC (NFS Client): Inside the LXC, I installed the NFS common tools, created a mount point, and added it to /etc/fstab so it connects automatically on boot:

sudo apt update && sudo apt install nfs-common
sudo mkdir -p /mnt/media

# Add to fstab
echo "192.168.1.X:/mnt/media /mnt/media nfs defaults 0 0" | sudo tee -a /etc/fstab
sudo mount -a

(Note: Replace 192.168.1.X with the actual IP address of the Ubuntu VM)
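One gotcha worth recording for future-me: if the mount fails with an “operation not permitted” style error, it is usually because unprivileged LXCs cannot mount NFS shares at all. A privileged container with the NFS mount feature enabled fixes it; on the Proxmox host that looks roughly like this (added to the container’s config):

```
# /etc/pve/lxc/VMID.conf -- requires a privileged container
features: mount=nfs
```

The alternative is mounting the NFS share on the Proxmox host itself and bind-mounting it into the LXC, which also works for unprivileged containers.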

Conclusion

Is this a bit of a Frankenstein setup? Absolutely. Running the downloading stack in a VM and the viewing stack in an LXC just to share an iGPU and protect a 100GB OS disk is a very specific workaround.

But it works flawlessly. The downloads are completely automated, the host OS is protected from storage overflow, and 4K transcoding runs buttery smooth on the iGPU at near-native performance.

Obviously, there are cleaner architectural approaches to media management if you have a massive dedicated NAS or unlimited drives. But if you are working within tight storage constraints on a single Proxmox node, this gets the job done.

If anyone reading this has managed to streamline a similar restricted-storage setup, drop a comment below.
