The Caddyfile Is the Best Config Format I’ve Ever Written. Fight Me.

Most people who switch from NGINX to Caddy frame it as a practical decision. Automatic HTTPS, simpler setup, easier to maintain. That’s all true, and none of that is what I want to talk about.

What I want to talk about is something that nobody seems to say out loud: the Caddyfile is genuinely well-designed. Not just functional. Not just easier than the alternative. Actually designed well, in a way that suggests someone thought hard about what a human being has to do when they sit down and write one of these things.

That’s rare. Rarer than it should be in a field where we write configuration files for a living.

I’ve got about twenty services running behind Caddy right now on my homelab, everything from HookHouse-Pro and HomeBase to Authentik and Trilium Notes. I migrated off NGINX Proxy Manager last year, and I will not be going back.

Here’s the take people want to argue with: the Caddyfile isn’t just good for a config format. It’s good. Full stop. Better than most code I write. Better than most tool documentation I read. It accomplishes what it sets out to do with less friction than almost anything else I touch in this stack.

I’ll explain why, but first let me acknowledge the obvious objection.


“It’s Just Simpler Because It Does Less”

This is the most common pushback, and it’s wrong. It conflates scope with quality.

Yes, Caddy makes certain decisions for you. It handles certificate provisioning automatically. It assumes HTTPS. It has sensible defaults baked in. Critics frame this as Caddy hiding complexity rather than eliminating it.

But that argument proves too much. By that logic, Python is only easier than C because it hides memory management. PowerShell is only more readable than raw batch scripting because it abstracts the Windows API. At some point, the abstraction is the product. The question is whether it was done well.

And here’s where Caddy earns it.

When I write an NGINX config, I am thinking about NGINX. Block structure, location directives, proxy_pass syntax, whether I need to set headers manually, whether my upstream is http or https, what happens if I get the server_name order wrong. The configuration is a technical artifact, and I treat it like one.

When I write a Caddyfile, I am thinking about what I’m trying to do. Route this domain to that service. Add basic auth here. Strip a path prefix there. The syntax follows the mental model so closely that there’s almost no translation layer. What I want to happen is almost exactly what I write.
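Each of those intentions maps almost one-to-one onto a directive. Here is a sketch of all three in one site block (the hostname, upstream names, and password hash are placeholders; the directive is spelled basic_auth as of Caddy v2.8, basicauth in older releases):

dashboard.internal.example.com {
    # Require a login for the whole site.
    # Generate the hash with: caddy hash-password
    basic_auth {
        admin JDJhJDE0JEVudGVyWW91ckhhc2hIZXJl
    }

    # Strip the /metrics prefix before proxying to a second service.
    handle_path /metrics/* {
        reverse_proxy metrics-app:9090
    }

    # Everything else goes to the main app.
    reverse_proxy dashboard-app:8080
}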

That is design. That is someone sitting down and asking “what does a human being actually mean when they configure a reverse proxy?” and building the format around the answer.

A simple proxy block in Caddy looks like this:

homebase.internal.example.com {
    reverse_proxy homebase:3000
}

That’s it. Three lines, one of which is a closing brace. HTTPS handled automatically. No upstream block. No location block. No proxy_set_header boilerplate unless you actually need it.

Compare that to a minimal functional NGINX reverse proxy config, and you’re looking at three times as many lines for the same result, most of which exist to satisfy NGINX’s structural requirements rather than to express anything meaningful about what you want.
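For reference, here is roughly what the equivalent looks like in NGINX. The certificate paths and header choices below are typical rather than canonical, and note that this still assumes the certificate already exists on disk; provisioning and renewal are a separate problem:

server {
    listen 443 ssl;
    server_name homebase.internal.example.com;

    # You provision and renew these yourself.
    ssl_certificate     /etc/ssl/certs/homebase.crt;
    ssl_certificate_key /etc/ssl/private/homebase.key;

    location / {
        proxy_pass http://homebase:3000;

        # Headers Caddy's reverse_proxy sets for you automatically.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}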

I will absolutely build a complex system to avoid doing a boring task once. But I also know the difference between complexity that creates capability and complexity that just taxes your working memory. NGINX has a lot of the second kind.


What Good Design Costs You

There is a real price to Caddy’s elegance. If you need to do something unusual, the Caddyfile’s simplicity can work against you. The mental model that makes simple things easy also makes the documentation harder to navigate when you’re outside the common cases. You’re searching for where a concept lives in Caddy’s vocabulary, which may not match the vocabulary you already have from years of NGINX experience.

I’ve hit this. Configuring Authentik as my SSO layer for public-facing sites required digging into Caddy’s forward_auth directive and understanding how it interacts with the upstream. It wasn’t hard once I understood it, but the path there was less obvious than it would have been in NGINX, where I already knew the terrain.
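For what it's worth, the shape of the solution ends up being small. A sketch of the pattern I landed on, assuming a typical Docker setup where the Authentik container is reachable as authentik on its default port 9000 (the outpost path is the one Authentik's proxy provider documentation uses; hostnames and the upstream are placeholders):

app.example.com {
    # Every request is checked against Authentik first.
    # A 2xx response lets it through; anything else redirects to login.
    forward_auth authentik:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email
    }

    reverse_proxy app:3000
}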

That’s a real tradeoff. I’m not pretending it isn’t.

But here’s how I think about it: I spend far more time in the common cases than the edge cases. Every single service I add to the homelab requires a new reverse proxy entry. That happens constantly. The edge cases, by definition, don’t. Optimizing for the thing I do repeatedly is the right call, and Caddy wins that math easily.

The other cost is debugging. When something goes wrong with an NGINX config, the error messages are often awful, but the community has been debugging NGINX for twenty years. Stack Overflow has answers. Caddy is newer, the community is smaller, and when you hit a weird issue, you may be more on your own.

I’ve been there too. Worth knowing before you commit.

What I’ve found, though, is that I hit weird issues less often. Not because Caddy is magic, but because the config format makes it harder to accidentally write something ambiguous. Fewer surfaces for subtle mistakes. The configs I write for Caddy are shorter, which means there’s less to be wrong.

The other thing worth saying: Caddy’s automatic HTTPS gets mentioned constantly as a feature, but people underestimate what it actually changes about your workflow. Not having to think about certificate renewal means I added twelve services to my homelab last year and thought about certificates zero times. Authentik is handling SSO for knuckledustchronicles.com and familytechlab.com, and the TLS side of that is completely invisible to me. I’m comfortable changing my mind when better evidence shows up, and years of managing Let’s Encrypt renewals manually is exactly the kind of evidence that changes minds.
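To make "invisible" concrete: this is the entire extent of the TLS configuration I carry. A sketch, with placeholder email and hostnames (email is the global option Caddy uses for the ACME account; tls internal issues from Caddy's built-in local CA, which is what you want for LAN-only hosts that public CAs can't validate):

{
    # ACME account email, used for expiry and problem notifications.
    email admin@example.com
}

trilium.internal.example.com {
    # LAN-only host: certificate from Caddy's local CA, no public ACME.
    tls internal
    reverse_proxy trilium:8080
}

Public-facing sites need even less: no tls line at all, because provisioning from a public CA is the default.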

The Caddyfile is also just readable in a way that almost no config format is. I can open a Caddyfile I wrote six months ago and understand what it does immediately. I cannot say that about my old NGINX configs, some of which I wrote myself and still had to read twice to follow.

Readability in configuration is usually treated as a nice-to-have. I’d argue it’s critical. Config files are documentation. They describe your infrastructure. If you have to decode what a config file means before you can trust it, that’s a problem, and it’s a problem that compounds as your stack grows.

Twenty services. Clean configs. No certificate anxiety. Not a single midnight incident because I fat-fingered a location block.

If that sounds like I’m describing something obvious, you’re right. It should be obvious. The fact that it isn’t, the fact that we’ve spent decades accepting config formats that punish you for minor mistakes and reward arcane knowledge, says something about how low the bar was before someone at Caddy decided to build something better.

The best config format is the one you can read at 11pm after a long day without making a mistake. For me, that’s the Caddyfile, and it isn’t close.
