The Homelab Way: Running WordPress in Docker Behind a Caddy Reverse Proxy

I moved this entire blog off my local dev server and onto Hetzner cloud hosting in about fifteen minutes today. Not fifteen minutes of clicking around hoping for the best. Fifteen minutes of actual wall-clock time from “new server provisioned” to “site is live and loading fast.” No database export drama, no file permission archaeology, no half-broken wp-config.php situations at midnight.

That’s not a magic trick. That’s just what happens when you stop running WordPress like it’s something you nail to a server and start running it like a portable, self-contained stack.

Here’s how the whole thing works.


Why Most People Run WordPress Wrong (And Why It Comes Back to Bite Them)

The traditional WordPress setup story goes something like this: you get a shared hosting account, or you throw it on a bare-metal VPS, the installer runs, it works fine, and then six months later you’ve got seventeen plugins, a custom PHP version pinned because one plugin threw a fit when you upgraded, and a server configuration that exists only in your memory and a few Stack Overflow tabs you never closed.

Shared hosting is the worst offender. You’re on a box with forty other sites. Your PHP version is whatever the host decided to support this year. Your “migration plan” is downloading a backup zip through cPanel and praying the import works on the other end. I’ve been in IT for 28 years. I’ve watched this exact scenario destroy weekends for people who should have known better. Hell, I’ve even done it that way before. I’ve also been that dude debugging it at 11 PM on a Sunday.

Bare-metal VPS installs are better, but only slightly. You get more control, which means you also get more ways to make a mess. Apache or NGINX configs that grow organically over years. MySQL installed directly on the host, version-locked because touching it means touching everything. PHP extensions installed globally and half-forgotten. The server becomes what people in infrastructure call a snowflake: unique, irreplaceable, and terrifying to touch.

And then you need to migrate it. Or the host raises prices. Or you want to move from your homelab server to actual cloud hosting. Suddenly “move a WordPress site” becomes a project with subtasks.

Here’s the thing: WordPress itself is fine. It’s not glamorous, but it works, it has a massive plugin ecosystem, and for a personal blog it gets the job done without requiring you to become a frontend developer. The problem has never been WordPress. The problem is how people deploy it. Specifically, the problem is deploying it in a way that welds the application to the server underneath it.

Containerization solves that. Not theoretically. Actually.


Docker Gives WordPress a Home It Can Actually Pack Up and Move

I’m not a developer. I’m a systems engineer who has been building things with code for about five years using whatever tools made the problem smaller. PowerShell is my daily driver at Advocate Health for Exchange and Active Directory work. Python and PHP are what I reach for in the homelab. I say that because Docker’s appeal to me is not academic. It’s practical.

The core idea is this: the entire WordPress stack (WordPress itself, the database, and the cache layer) lives in a single docker-compose.yml file. That file is, functionally, the server. It declares what runs, how it’s configured, where data lives, how containers restart, and how they talk to each other. You don’t configure this stuff interactively and then try to remember what you did. You write it down once in a compose file, and the compose file is the truth.

Here’s my actual production compose file, anonymized:

services:
  wordpress:
    image: wordpress:latest
    container_name: wordpress
    restart: unless-stopped
    ports:
      - "8484:80"
    environment:
      WORDPRESS_DB_HOST: wordpress-db
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: wp_password
      WORDPRESS_DB_NAME: wordpress_prod
    volumes:
      - /mnt/wordpress/html:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    depends_on:
      - db
      - redis

  db:
    image: mariadb:11
    container_name: wordpress-db
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: wordpress_prod
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: wp_password
      MYSQL_ROOT_PASSWORD: wp_rootpassword_2026
    volumes:
      - /mnt/wordpress/db:/var/lib/mysql

  redis:
    image: redis:7-alpine
    container_name: wordpress-redis
    restart: unless-stopped
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru

Let me walk through the deliberate choices here, because none of this is random.

MariaDB 11 instead of MySQL. MariaDB is a drop-in replacement for MySQL that the WordPress ecosystem supports fully. It’s faster on read-heavy workloads, which is exactly what a blog is. There’s no practical reason to run MySQL here.

Redis on Alpine. The redis:7-alpine image is tiny. Legitimately tiny. Alpine Linux base images strip out everything you don’t need and leave you with a container that uses almost no memory at rest. The --maxmemory 128mb --maxmemory-policy allkeys-lru command line sets a hard ceiling and tells Redis what to do when it hits that ceiling: evict the least recently used keys first. That’s the right policy for object caching. Without it, Redis just keeps growing until something complains.
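One thing the compose file alone doesn’t do: stock WordPress won’t talk to Redis at all. You need an object-cache plugin (Redis Object Cache is the usual choice) plus a couple of defines telling it where Redis lives. Here’s a hedged sketch using the official WordPress image’s WORDPRESS_CONFIG_EXTRA variable; the define names are the Redis Object Cache plugin’s, and the hostname assumes the container name from the compose file above:

```yaml
  wordpress:
    environment:
      # Appended to wp-config.php by the official image's entrypoint.
      # "wordpress-redis" resolves over the compose bridge network.
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_REDIS_HOST', 'wordpress-redis');
        define('WP_REDIS_PORT', 6379);
```

With that in place, activating the plugin and enabling the object cache from the WordPress admin is the only remaining step.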

Volume mounts to /mnt/wordpress/html and /mnt/wordpress/db. This is the most important architectural decision in the whole file. WordPress files and the database both live on the host filesystem, not inside the container layer. Containers are ephemeral by design. If you store your data inside the container layer, removing that container takes the data with it. Mounting to a real path on the host means the container can be nuked, rebuilt, or upgraded and your data stays exactly where it was. On Hetzner, those paths live on a mounted volume. On Scooby, my dev server at home, they were on a local disk path. The compose file didn’t care either way.
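A side benefit: because everything lives under one directory tree, a full-site backup is a single tar command. Here’s a minimal sketch; the demo default path keeps it runnable anywhere, but on the real host DATA_ROOT would be /mnt/wordpress, with the stack stopped first so the database files are quiescent:

```shell
#!/bin/sh
set -eu
# On the real host this would be /mnt/wordpress; the demo default keeps
# the sketch runnable anywhere.
DATA_ROOT="${DATA_ROOT:-./wordpress-demo}"
mkdir -p "$DATA_ROOT/html" "$DATA_ROOT/db"   # stand-ins for the volume mounts
BACKUP="wordpress-backup-$(date +%Y%m%d).tar.gz"
# -C keeps the archive paths relative, so it restores cleanly anywhere.
tar -czf "$BACKUP" -C "$(dirname "$DATA_ROOT")" "$(basename "$DATA_ROOT")"
echo "created $BACKUP"
```

Restoring on a new host is the same operation in reverse: untar into the mount path and bring the stack up.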


The uploads.ini override. WordPress’s default PHP upload limits are laughably small for anything beyond text posts. This bind-mount drops a custom PHP ini file into the container’s config directory, raising the upload size limit without touching anything else. One file, one mount, problem solved.
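For reference, an uploads.ini along these lines does the job. The specific limits here are illustrative, not what’s necessarily in my file; size them to what you actually upload:

```ini
; uploads.ini - mounted into /usr/local/etc/php/conf.d/
file_uploads = On
upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 256M
max_execution_time = 300
```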

restart: unless-stopped on everything. If the host reboots or a container crashes, it comes back automatically. You don’t have to babysit it.
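One caveat worth flagging before moving on: depends_on as written only orders container startup; it doesn’t wait for MariaDB to actually accept connections. A refinement I’d consider is a healthcheck gate. This is a sketch using the healthcheck script the official MariaDB image ships; verify the flags against your image version:

```yaml
  db:
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      retries: 5

  wordpress:
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
```

With that, WordPress only starts once MariaDB reports healthy instead of merely started.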

[Diagram] WordPress Docker stack on Hetzner (Ubuntu 24.04): Caddy listens on :443 with automatic Let’s Encrypt TLS and proxies to WordPress on port 8484, which talks to MariaDB for storage and Redis (128 MB, allkeys-lru) for object caching over the compose bridge network. Persistent data lives in host volume mounts under /mnt/wordpress, not inside containers.


Caddy: The Reverse Proxy That Doesn’t Make You Want to Throw Your Computer

A reverse proxy sits in front of your application and handles the public-facing part of the connection. In this case, Caddy listens on port 443 for HTTPS traffic destined for knuckledustchronicles.com, handles the TLS certificate automatically, and forwards the request back to WordPress on port 8484. WordPress never sees the public internet directly. Caddy is the doorman.

I ran NGINX Proxy Manager in the homelab for a while. It works, and if y’all want a GUI to manage proxy hosts, it’s fine. But it’s one more container to run, one more web interface to log into, one more thing that needs updating and occasionally misbehaves. It also has a habit of making simple things feel more complicated than they need to be, especially when you’re doing anything beyond basic forwarding.

Caddy replaced it. The config went from a GUI-managed set of database-backed rules to a plain text file I can read without squinting. Automatic TLS via Let’s Encrypt is built in. You don’t configure it. You just point Caddy at a domain and it handles the certificate request, renewal, and everything else.

The Caddyfile block for WordPress looks like this:

knuckledustchronicles.com {
    reverse_proxy localhost:8484
}

That’s it. Three lines, one directive. Caddy figures out HTTPS, gets the cert, renews it before it expires, and proxies traffic to port 8484 where WordPress is listening. What used to take 20 minutes of NGINX config fumbling, reading documentation, and testing rewrites is now a three-line block that works the first time.

For internal homelab services I’m running Authelia in front for SSO. For public-facing sites like this one, Caddy handles it directly. Clean separation, no friction.
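For what that looks like in practice, here’s a hedged sketch of an internal service block. The hostnames, ports, and the Authelia forward-auth endpoint are illustrative, not my actual config; check the path your Authelia version expects:

```caddyfile
# Hypothetical internal service behind Authelia SSO; names are placeholders.
service.lab.example.com {
    # Hand authentication to Authelia before proxying anything.
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
    reverse_proxy localhost:8585
}
```

Public sites skip the forward_auth block entirely, which is why this blog’s entry stays three lines.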


The Migration: Fifteen Minutes From Scooby to Hetzner

Scooby is my dev server. It lives on my home network in Gray and has been the landing zone for half-built projects and experiments for years. Running the blog there made sense when I was iterating on the setup, but a home server behind a residential connection is not where you want to host something you actually want people to read reliably.

Hetzner was the obvious move. Good pricing, solid European data centers, dead simple VPS provisioning. The question was how painful the migration would be.
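The short answer: not painful at all, because the whole site is a compose file plus one directory tree. Sketched as a function run from the old host ("hetzner-vps" is a placeholder SSH alias, and the paths assume the layout above):

```shell
#!/bin/sh
# Hedged sketch of the move, run from the old host. Nothing executes
# until the function is called; hostnames and paths are placeholders.
migrate_wordpress() {
    dst="${1:-hetzner-vps}"
    docker compose down                      # stop the stack so DB files are quiescent
    rsync -az docker-compose.yml uploads.ini "$dst":wordpress/
    sudo rsync -az /mnt/wordpress/ "$dst":/mnt/wordpress/
    ssh "$dst" 'cd wordpress && docker compose up -d'
}
```

Point DNS at the new box, drop the Caddyfile block in place, and Caddy fetches a fresh certificate on the first request.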
