Self-hosting a decade of photos

25 April, 2026

Hundreds of gigabytes of photos and videos scattered across three Windows drives (C:, D:, G:), accumulated over ~15 years of phones, digital cameras, film rolls, and work gigs. No central index, no search, no backup. Just chaos and risk.

I wanted to fix this by self-hosting Immich, but slow file transfers and painful cloud setup kept me avoiding the daunting task. Last month I finally used Claude Code to help me get it off the ground.

PhotoOps

I ran Claude Code from /mnt/ in WSL2. All my PC drives mounted at /mnt/c, /mnt/d, /mnt/g, plus a new 4 TB external HDD at /mnt/q as a physically separate target. I gave Claude Code clear goals and we brainstormed the setup together. The first step was creating order locally. We built a pipeline that walks every drive and moves images out to /mnt/q. Since the run took hours, I made sure it was resumable so it could pick up after any error or crash.

  1. Index — walked all in-scope folders, captured EXIF date/camera, extracted folder-derived tags. Final result: 115,290 files, 649 GB.
  2. Hash — SHA-256 every file.
  3. Dedupe by content — 17,065 duplicate groups, 21,160 redundant copies = 102 GB of pure waste. A selection algorithm picked one survivor per group with rules like: prefer organized paths, penalize Dupes/ and UUID filenames, prefer the more recently curated drive.
  4. Quarantine, don't delete — copied all 21,160 duplicates to /mnt/q/PhotoQuarantine/ before touching originals.
  5. Consolidate — 92,945 files (547 GB) into /mnt/q/PhotoArchive/YYYY/YYYY-MM-DD/filename.ext.
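Steps 2–4 can be sketched roughly like this. The scoring rules and folder conventions below are illustrative stand-ins, not the exact heuristics the pipeline used:

```python
import hashlib
from pathlib import Path

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file's content in chunks (step 2)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def survivor_score(path: Path) -> int:
    """Higher is better. Rules here are illustrative approximations."""
    score = 0
    parts = {p.lower() for p in path.parts}
    if "dupes" in parts:
        score -= 10                 # penalize known dupe folders
    if len(path.stem) >= 32 and "-" in path.stem:
        score -= 5                  # penalize UUID-like filenames
    if any(p[:4].isdigit() for p in path.parts):
        score += 3                  # prefer date-organized paths
    return score

def dedupe(files: list[Path]) -> dict[str, Path]:
    """Group by content hash, pick one survivor per group (steps 3-4)."""
    groups: dict[str, list[Path]] = {}
    for f in files:
        groups.setdefault(sha256(f), []).append(f)
    return {h: max(g, key=survivor_score) for h, g in groups.items()}
```

Because the hash map is keyed by content, re-running after a crash only needs to skip files whose hashes are already recorded, which is what made the long run resumable.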

Side outputs: tag-export.json (820 unique folder-derived tags) and album-mappings.json (673 albums reconstructed from folder structure). This metadata would later flow into Immich as my human-labeled tags and albums.
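A rough illustration of how folder-derived tags and album names can be reconstructed from archive paths; the skip list and rules here are assumptions, not the actual extraction logic:

```python
from pathlib import Path

# Folders assumed to carry no tag value (hypothetical list).
SKIP = {"photoarchive", "dcim", "new folder"}

def folder_tags(path: str) -> list[str]:
    """Treat each meaningful folder name along the path as a tag."""
    return [p for p in Path(path).parts[:-1]
            if p.lower() not in SKIP and not p[:1].isdigit()]

def album_for(path: str) -> str:
    """Use the immediate parent folder as the album name."""
    return Path(path).parent.name
```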

The infra, ~€13/mo

Hetzner Cloud + a Hetzner Storage Box, all managed as IaC:

  Resource           Spec                        €/mo
  CAX21 server       4 vCPU ARM, 8 GB RAM        6.99
  50 GB SSD volume   Postgres + ML model cache   2.18
  BX11 Storage Box   1 TB, SSHFS-mounted         3.81
  Total                                         ~13.00

Photos live on the Storage Box, not the volume. SSHFS-mount it at /mnt/photos and point Immich's upload directory there. Slight network latency, massive cost win, and photos survive any server or volume failure because they sit on completely separate infrastructure. The 50 GB volume only holds Postgres and the ML model cache.
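For a persistent mount, an /etc/fstab entry along these lines works; the username, host, and key path are placeholders for your own Storage Box credentials:

```
# /etc/fstab — SSHFS mount of the Storage Box (placeholder credentials)
uXXXXX@uXXXXX.your-storagebox.de:/ /mnt/photos fuse.sshfs _netdev,reconnect,allow_other,IdentityFile=/root/.ssh/storagebox 0 0
```

The reconnect option matters here: it lets the mount recover from transient network drops instead of leaving a stale mountpoint under Immich.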

Uploading and importing

Photo upload: immich-cli upload --recursive /mnt/q/PhotoArchive/. ~547 GB over a residential line took multiple sessions.

Album and tag import is where it got fun: Immich's API has no bulk endpoints for what I wanted, so I allowed Claude to do direct SQL into the Postgres container (transactional, dry-run-able and with a DB backup):

  • 606 albums rebuilt from folder structure. 44 got renamed from numeric folder IDs (e.g. 132 → "Tel Aviv 2019") via lookup against a hand-curated film-catalog.tsv of my film rolls.
  • For each catalog-matched album, I also created flat tags like Film: 35mm, Camera: Olympus OM-1, Stock: Portra 400.
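The transactional, dry-run-able import pattern can be sketched like this, using sqlite3 as a stand-in for Immich's Postgres; the table and column names are hypothetical, and Immich's real schema differs:

```python
import sqlite3

def import_albums(conn: sqlite3.Connection,
                  albums: dict[str, list[str]],
                  dry_run: bool = True) -> int:
    """Bulk-insert albums and their asset links in one transaction.

    With dry_run=True the work is rolled back, so the row count
    reports what *would* change without touching the database.
    """
    cur = conn.cursor()
    inserted = 0
    try:
        for name, asset_ids in albums.items():
            cur.execute("INSERT INTO albums (name) VALUES (?)", (name,))
            album_id = cur.lastrowid
            cur.executemany(
                "INSERT INTO album_assets (album_id, asset_id) VALUES (?, ?)",
                [(album_id, a) for a in asset_ids])
            inserted += 1 + len(asset_ids)
        if dry_run:
            conn.rollback()   # report what would happen, change nothing
        else:
            conn.commit()
    except Exception:
        conn.rollback()
        raise
    return inserted
```

Combined with a pg_dump beforehand, this keeps every run either fully applied or fully undone.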

Access via Tailscale

The server is reachable only on my tailnet — no public ports open except SSH. The Immich mobile app points at the same hostname as the browser; Tailscale runs as a background VPN on my phone.

Notes

Having everything finally organized has been great — I'm seeing photos I hadn't looked at in years. Immich's embedding search is genuinely useful: queries like black and white, desert plane just work. And the whole setup follows the 3-2-1 backup strategy, which keeps data-loss risk low.

  1. Dedupe before you upload. Manual is no fun; there are plenty of deterministic tools.
  2. Quarantine duplicates until you've verified the archive. Disk is cheap; an irreversible mistake isn't.
  3. Storage Box is the cost unlock. €4/mo for 1 TB vs. €40+/mo for the same in block storage. Latency is fine for a photo archive app.
  4. Tailscale for personal stuff. Exposing your entire photo gallery to the open internet isn't worth the risk.

End state: a great private user experience. 93k photos, 547 GB, searchable, ML-tagged, organized into albums with film-roll metadata, on a €13/mo stack I can rebuild with tofu apply and a few environment variables.