Claude Code: The AI Wrangler That Actually Earns Its Keep

My Go-To Providers  ·  Part 1

It Started With a Broken Docker Compose File at 11pm

Three services wouldn’t talk to each other. The network names looked right. The environment variables were set. The logs said nothing useful, just that familiar wall of red text that means something is wrong and good luck figuring out what. I’d been staring at it for forty minutes.

I pasted the whole mess into Claude Code, typed “these three services can’t reach each other, here’s the compose file,” and had a working config in about four minutes. Not because the AI is magic. Because I’d been looking at the same file so long I’d stopped seeing it, and Claude spotted that I’d defined the network in one service block and then spelled it differently in another. A typo. Forty minutes lost to a typo.
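For the curious, the bug was roughly this shape (service and network names here are invented for illustration, not my actual config). When a typo defines a second network, Compose creates both without complaint, and containers on different networks silently can't resolve each other by name:

```yaml
services:
  api:
    image: example-api:latest
    networks:
      - backend          # attached to the intended network
  worker:
    image: example-worker:latest
    networks:
      - back-end         # typo: a different, isolated network
networks:
  backend:
  back-end:              # the typo ends up defining a second network,
                         # so Compose raises no error at all
```

No red text points at the typo; the services just can't see each other. Which is exactly the kind of thing a fresh pair of eyes, human or otherwise, catches in seconds.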

That’s not a miracle story. That’s a Tuesday.

I’ve been running Claude Code as my primary AI coding tool for a while now, and this first post in the “My Go-To Providers” series is about why it earned that spot, what I actually use it for, and where I still have to keep both hands on the reins.

Why Claude Code Over Everything Else

Let me be straight: I’m not a professionally trained developer. I’m a self-taught homelab operator who builds and ships real things, apps with paying users, infrastructure that has to stay up, integrations that break in interesting ways. That context matters when you’re evaluating AI tools, because the thing that matters most to someone in my position isn’t raw benchmark performance. It’s how fast I can get unstuck.

I tried the others. GitHub Copilot is fine as an autocomplete engine inside your editor, but it’s passive: it waits for you to be almost right before it helps you get there. ChatGPT in code mode has moments of brilliance and moments where it confidently generates something that doesn’t work and then confidently explains why it definitely should. Cursor is genuinely interesting, especially if you’re living inside VS Code, and I don’t want to dismiss it.

But Claude Code hit differently for a few specific reasons:

It reasons about context, not just syntax. When I’m working on something, I’m not usually asking for a single function. I’m asking about a whole system. “Here’s my auth flow, here’s the error I’m getting, here’s the relevant part of my Supabase config, why is the session expiring early?” Claude Code handles that kind of multi-file, multi-layer question without losing the thread halfway through.

It pushes back when I’m wrong. This sounds like a small thing. It isn’t. I’ve had AI tools that will happily implement a bad approach and then help me debug it for an hour because I asked confidently. Claude Code will often say something like “you can do it that way, but here’s what’s going to break” before it writes a single line. For a self-taught builder, that’s worth a lot.

The agentic flow actually works. Claude Code isn’t just a chat window. It can read files, run commands, check its own output. When it’s running in that mode, it behaves more like a junior developer who can actually do things rather than just describe them.

What I Actually Use It For

Let me give you the real list, not the polished version.

Scaffolding new projects. When I’m starting something new, I don’t want to stare at a blank directory. I’ll describe what I’m building, a webhook handler, a small SaaS billing integration, a self-hosted tool with a simple frontend, and ask Claude Code to set up the structure. Not to write the whole thing, just to give me the skeleton so I’m making decisions rather than starting from zero. This alone saves me an hour on every new project.

Debugging gnarly errors. This is where it earns its keep the most. Especially the errors that are four layers deep, where the surface error is just a symptom and the actual problem is something weird about how two systems interact. I paste logs, I paste configs, I describe what I expected versus what happened, and I let it work through the chain of reasoning. It doesn’t always get it right on the first pass, but it usually gets me close enough that I can close the gap myself.

Drafting infrastructure configs. Nginx configs, Docker Compose files, systemd service units, Caddy configs, GitHub Actions workflows. I have a general sense of how these work, but I don’t have every option memorized. Claude Code is genuinely excellent at producing a solid first draft of an infrastructure file based on what I tell it I need. I still review everything line by line, more on that in a minute, but starting from a reasonable draft instead of a blank file is a material improvement.
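To make “a solid first draft” concrete, here’s the kind of thing I mean, a plausible sketch of a systemd unit for a small self-hosted app. The service name, user, and paths are all made up for illustration; this is the starting point the AI hands you, not something to deploy as-is:

```ini
# Hypothetical first-draft unit file. Every line still gets reviewed
# before it goes anywhere near a real server.
[Unit]
Description=Example webhook handler (illustrative)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=appuser
WorkingDirectory=/opt/example-app
ExecStart=/opt/example-app/bin/server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

The value isn’t that the draft is perfect. It’s that reviewing fifteen lines is faster than writing them cold with the man pages open in another tab.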

Explaining unfamiliar territory. When I have to touch something I’ve never used before, a new API, an unfamiliar framework, some Linux subsystem I’ve always managed to avoid, I’ll use Claude Code to get up to speed faster than reading docs cold. Not as a replacement for documentation, but as a first pass that tells me what questions to ask.

Code review on my own work. I’ll paste something I’ve written and ask “what’s wrong with this” or “what would break at scale.” It’s caught things I’ve missed. Not every time, but often enough that it’s now part of my process before I ship anything I care about.

Where It Still Needs a Firm Hand

This is the honest part.

It can be confidently wrong about specifics. Library versions, API endpoints, feature availability in a given tier of a service: Claude Code sometimes states these with the same confidence it states things that are definitely correct. I’ve been burned enough times that I now verify anything specific before I run with it. The reasoning is usually sound. The particulars need a second look.

Long sessions drift. If I’m in a complex back-and-forth for an hour, building on earlier context and making incremental changes, things start to get fuzzy toward the end of the conversation. It might contradict something it said earlier, or forget a constraint I mentioned forty messages ago. I’ve learned to keep a working summary somewhere and re-anchor the conversation when it starts to feel like we’re losing the plot.

It can’t see your full system. This sounds obvious but it trips people up. Claude Code doesn’t know what’s actually running on your server, what version of things you have installed, what your database schema looks like, unless you tell it. When something isn’t working, the gap between “what Claude thinks your system looks like” and “what your system actually is” is often where the problem lives. Feed it real information and it performs better. Let it assume, and you’ll end up debugging Claude’s mental model instead of your actual problem.
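A habit that helps here: before asking anything, paste a snapshot of the actual environment into the session. A minimal version looks something like this (adjust the commands to whatever stack you actually run):

```shell
# Collect the real state of the box so the model reasons about your
# system instead of a guessed one. All commands are read-only.
uname -a                                    # kernel and architecture
cat /etc/os-release 2>/dev/null | head -n 2 # distro and version, if present
docker --version 2>/dev/null || echo "docker: not installed"
docker compose version 2>/dev/null || true  # compose plugin, if any
```

Thirty seconds of copy-paste up front saves the half hour you’d otherwise spend debugging the model’s assumptions.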

It will implement what you ask for, even when you’re asking for the wrong thing. The pushback I mentioned earlier is real, but it’s not perfect. If you’re certain and direct, it will often defer to you. Which means you still have to know enough to know when you’re wrong. It’s a tool, not a safety net.

How It Fits Into the Bigger Stack

Claude Code doesn’t run anything. It doesn’t host anything, process payments, or manage your users. What it does is help me build and maintain the things that do all of that.

The rest of this series is about those things, the actual providers that make up the stack I run: where my servers live, how my traffic gets handled, how money moves, how users get authenticated. Claude Code sits upstream of all of it. It’s the tool I used to write the code that talks to Hetzner’s APIs, that drafts the Cloudflare Worker logic, that helped me understand Stripe’s webhook behavior when I was getting it wrong.

In Part 2, I’ll get into why all my server infrastructure lives in Helsinki and what Hetzner actually looks like when you’re running real workloads on a budget that isn’t backed by a venture check.
