Why I Open-Sourced My Hardened *arr Stack (and What Most Compose Files Get Wrong)

My name is Przemek, and I have a problem with commercial streaming services. It started when I saw a proper 4K Blu-ray remux on my OLED TV: 60 Mbps of bandwidth, zero compression artifacts, no banding in the blacks. Until then I hadn't known what I was missing. Most streaming services push maybe 15 Mbps for their "4K." It's not even close.

So, like any tech-obsessed person, I went down the self-hosted media rabbit hole: Jellyfin, Sonarr, Radarr, the whole *arr ecosystem, Unraid OS, buying a TerraMaster and then modifying it. The pipeline itself is well-documented. What isn't well-documented is how to run it without the nagging feeling that something is off about your setup.

Pirates of the Caribbean meme: Is it ARR or RRR?

I looked at dozens of Docker Compose files across GitHub, Reddit, and YouTube tutorials. They all had the same problems, and nobody seemed to care. r/selfhosted is mostly hobbyists who duct-tape things together and don't see the difference between infrastructure as code and clicking through Portainer.

The state of most *arr compose files is embarrassing

Here's what I kept finding:

Everything on one flat network. Your torrent client, your media server, your reverse proxy — all sitting on the same default Docker bridge. There's zero isolation. Your Jellyfin instance can talk directly to your torrent client and vice versa.

The "kill switch" that isn't one. Every guide tells you to use a VPN sidecar with iptables rules as a kill switch. This is a firewall rule. Software. If it fails — and iptables rules can absolutely fail or get flushed — your real IP leaks and you don't even know it. People treat this like it's airtight. It's not.

Ports bound to 0.0.0.0. I lost count of how many compose files just blast every port open to the entire LAN with no TLS, no auth, nothing. Your Sonarr admin panel is accessible from any device on your network by default.

No health checks anywhere. The VPN container dies? qBittorrent just sits there with no connection, silently failing. Nobody gets notified. Nothing recovers. Your family opens Seerr, requests a movie, and it just... never arrives. Then you get the text: "the movie thing is broken again."

I approached this through the lens of a software engineer who just wanted to make the mess readable and clean.

So I built it properly

My project is called uncompressed. It's 10 containers across 2 Compose stacks, driven by a single .env file. Here's what's actually different.

Three networks, not one

  • traefik_proxy — HTTPS ingress only.
  • arr_internal — marked internal: true, so any traffic on this network has no external route. Service-to-service communication between the *arr containers stays here, on a network that physically can't reach the internet.
  • vpn_network — tunneled outbound traffic only.
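In Compose terms, that topology looks roughly like this (a sketch; the image tags and exact service list are assumptions, only the network names and `internal: true` are the point):

```yaml
# Top-level network definitions matching the three-network split above.
networks:
  traefik_proxy:
    driver: bridge          # HTTPS ingress only
  arr_internal:
    driver: bridge
    internal: true          # no external route: containers here cannot reach the internet
  vpn_network:
    driver: bridge          # outbound traffic, tunneled

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest   # illustrative image choice
    networks:
      - arr_internal        # service-to-service chatter stays on the internal network
      - traefik_proxy       # also attached here so Traefik can reach it for ingress
```

The `internal: true` flag is what gives the guarantee described above: Docker creates the bridge without a default route, so nothing on `arr_internal` can originate traffic to the outside world.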

To be precise: the *arr containers themselves are also attached to traefik_proxy so Traefik can reach them for ingress, which means they're not airgapped at the container level. The point is that the internal chatter — Sonarr asking Prowlarr for indexers, Radarr handing files to qBittorrent — never has to traverse a network that exposes them to anything outside the stack.

This took some trial and error before I landed on a solution I'm happy with.

VPN namespace isolation (the big one)

This part took the most time; the docs were horrible and the forum posts misleading. qBittorrent doesn't "route through" Gluetun. It runs inside Gluetun's network namespace via network_mode: service:gluetun. The qBittorrent container literally has no network interface of its own. It borrows Gluetun's.

On top of that, a custom init script forces BIND_TO_INTERFACE: tun0. So even within the namespace, it's bound to the tunnel interface specifically.

What this means in practice: if the ProtonVPN WireGuard tunnel drops, there is no network path. Not a blocked path. No path. The difference between a firewall rule and a missing network interface is the difference between a locked door and no door existing.
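The namespace-sharing arrangement can be sketched like this (Gluetun's ProtonVPN environment variables are real Gluetun settings; the image tags and the `BIND_TO_INTERFACE` init-script detail are specific to my stack):

```yaml
# Sketch: qBittorrent shares Gluetun's network namespace.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                      # needed to manage the tunnel
    environment:
      VPN_SERVICE_PROVIDER: protonvpn
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: ${WIREGUARD_PRIVATE_KEY}

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: service:gluetun      # no interface of its own; borrows Gluetun's
    depends_on:
      gluetun:
        condition: service_healthy     # don't start until the tunnel is verified up
```

Note that with `network_mode: service:gluetun`, qBittorrent's web UI port has to be published on the gluetun service, not on qbittorrent, because qbittorrent has no network stack to publish from.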

Zero ports on the public internet

Traefik binds exclusively to the Tailscale IP — ${TAILSCALE_IP}:443. If you're not on my Tailscale mesh, you can't even see that the *arr stack, Seerr, or qBittorrent's web UI exist. HTTPS certs get provisioned automatically via Cloudflare DNS challenges.

No router port forwarding. No dynamic DNS. No "but I set up Authelia." Just: are you on my mesh? No? Then there's nothing here for you. It all runs behind my custom domain on Cloudflare, but unless you're on the Tailscale network, none of it is reachable.
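The binding itself is a one-line change in the ports mapping, plus the DNS-01 resolver config. A sketch (the Traefik flags and the `CF_DNS_API_TOKEN` variable are standard Traefik/lego settings; the resolver name is my choice):

```yaml
# Sketch: Traefik publishes 443 only on the Tailscale interface, never 0.0.0.0.
services:
  traefik:
    image: traefik:v3.0
    ports:
      - "${TAILSCALE_IP}:443:443"            # bound to the mesh IP from .env
    environment:
      CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN}  # scoped token for the Cloudflare DNS-01 challenge
    command:
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
```

Because the certificate challenge happens over DNS rather than HTTP, Let's Encrypt never needs to reach the server, which is what makes valid TLS possible on a host with zero public ports.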

Two honest exceptions worth calling out:

  • Jellyfin publishes port 8096 to the host for direct LAN access. This is intentional — clients like Infuse on Apple TV want a direct connection for direct play, and routing 4K remuxes through Tailscale + Traefik for in-house streaming is unnecessary overhead. It's bound to the LAN, not the internet, but it is a published port. This is the one part that may still be worth solving with Tailscale on the Apple TV itself.
  • ProtonVPN forwards a port through the WireGuard tunnel to qBittorrent so peer connections work and seeding ratios don't tank. That's a port forwarded through the VPN, not on your router. Different thing, but worth being clear about.

Making it survive without me

The real test for a home media server isn't whether it works when you set it up. It's whether it still works three weeks later.

Every container has an endpoint-specific health check running every 30-60 seconds. Not "is the process alive" — "does the API actually respond." An Autoheal container watches for failures and restarts anything that goes unhealthy.

The depends_on chain uses condition: service_healthy throughout. The download client won't start until the VPN tunnel is verified active. The media server won't start until the services it depends on are actually responding. No more cascading failures where one container dying takes everything else with it.
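Here's the shape of one link in that chain (the health-check probe shown is an illustrative assumption, not Gluetun's built-in check; the `autoheal=true` label is how the Autoheal container identifies what to watch):

```yaml
# Sketch: endpoint-specific health check plus a gated startup chain.
services:
  gluetun:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "https://1.1.1.1"]  # does traffic actually leave the tunnel?
      interval: 30s
      timeout: 10s
      retries: 3

  qbittorrent:
    network_mode: service:gluetun
    depends_on:
      gluetun:
        condition: service_healthy   # download client waits for a verified tunnel
    labels:
      - autoheal=true                # Autoheal restarts it if it ever goes unhealthy
```

The distinction that matters is `condition: service_healthy` versus plain `depends_on`: the latter only waits for the container to start, which is exactly the "process alive but silently broken" failure mode described above.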

This is the kind of thing that sounds boring to build and is incredibly satisfying when it just works, and you never have to explain to your friends why the media server needs another restart.

Behind the scenes: how it all works together

Seerr tells Radarr, which queries Prowlarr for indexers, which sends the grab to qBittorrent inside the VPN namespace. The file downloads and gets hardlinked, Bazarr grabs subtitles, Jellyfin picks it up on the next library scan.

If any step in that chain fails, the health checks catch it and Autoheal recovers it. No intervention required.

Things that bit me

ProtonVPN WireGuard key rotation. This one is annoying. ProtonVPN's WireGuard keys can expire or desync, and when they do, Gluetun's tunnel fails silently. The health checks catch it eventually. I haven't solved this cleanly yet. If you have some ideas, please open an issue.

Traefik's label system. Traefik is powerful, but configuring it entirely through Docker Compose labels gets unreadable fast. The Let's Encrypt DNS challenge setup alone is a wall of labels. I went through a lot of iterations to keep the compose file from turning into label soup. It's still not perfect, but it works.

Gluetun port forwarding. Getting qBittorrent reachable for incoming peer connections through Gluetun's port forwarding took more debugging than I'd like to admit. The port needs to be forwarded through the VPN provider, mapped in Gluetun, AND configured in qBittorrent. Miss any one of those three and you silently lose incoming connections, which tanks your seeding ratios.
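For reference, the Gluetun side of that three-way alignment is a single setting; the other two pieces live outside the compose file (this is a sketch — ProtonVPN assigns the port dynamically, so there is no fixed number to hardcode):

```yaml
# Sketch: asking Gluetun to request a forwarded port from ProtonVPN.
services:
  gluetun:
    environment:
      VPN_SERVICE_PROVIDER: protonvpn
      VPN_PORT_FORWARDING: "on"   # request a forwarded port from the provider
```

Gluetun writes the port it was assigned to a file inside the container (under /tmp/gluetun/), and qBittorrent's "Listening Port" setting has to be updated to match. Since the assigned port can change between sessions, keeping them in sync either takes a small sync script or manual attention, which is exactly where the silent failure creeps in.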

Get it

The whole thing is up on my GitHub:

github.com/Lackoftactics/uncompressed.media

Your media server shouldn’t require babysitting. This one doesn’t.

If you find a bug or see something I could do better, open an issue. If you've solved the ProtonVPN key rotation problem, I'd genuinely love to hear about it.