Every request is a little miracle when you think about it.
One second a person is staring at a browser tab. The next, they’ve authenticated, unlocked a feature, handed over seventeen dollars, and your database has a new row in it, all before they’ve finished reading the confirmation message. That chain of events doesn’t happen by accident. It happens because a handful of services, stitched together carefully, agree to do their jobs in the right order. This post is about that chain. One request, start to finish, through everything I’ve covered in this series, and what it actually feels like to have built it yourself.
Step One: The Request Leaves the House and Hits Cloudflare
A user types in a URL, let’s say it’s a SaaS project I run, something modest, a tool that does one thing well and charges a small recurring fee for the privilege. They hit Enter.
The first thing that happens has nothing to do with my code. Their DNS resolver reaches out, gets the answer back from Cloudflare’s nameservers (because as I covered in Part 3, every domain I own lives and breathes through Cloudflare), and the request routes to Cloudflare’s edge before it ever sniffs the direction of Helsinki.
This is where I feel the first layer of calm.
Cloudflare absorbs the initial contact. DDoS? Handled before it reaches me. SSL termination? Done at the edge. Bad bots hammering a login form? There’s a rule for that. The request that actually makes it through to my infrastructure is already a cleaned-up, validated, HTTPS-verified packet, not the raw chaos of the open internet.
For most page requests, Cloudflare either serves a cached response outright or forwards the request through to my origin. For anything that needs routing logic (A/B flags, auth redirects, lightweight middleware), a Cloudflare Worker intercepts it first. I’ve got Workers handling a few things: stripping tracking parameters before they hit the cache key, redirecting unauthenticated users before the request wastes a round trip to Hetzner, and injecting headers that my app uses to make smarter decisions downstream.
The Worker runs in milliseconds, at the edge, globally. My Hetzner box never sees the requests it doesn’t need to see.
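A minimal sketch of what one of those Workers looks like; the cookie name, path, and header here are illustrative placeholders rather than my real config:

```ts
// Minimal Worker sketch: strip tracking params, bounce unauthenticated users,
// and tag the request for the origin. Assumes @cloudflare/workers-types.
// Cookie name, /app path, and header name are placeholders, not my real setup.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Strip tracking parameters so they never become part of the cache key.
    for (const key of [...url.searchParams.keys()]) {
      if (key.startsWith("utm_") || key === "fbclid") {
        url.searchParams.delete(key);
      }
    }

    // Redirect unauthenticated users before the request wastes a trip to the origin.
    const cookies = request.headers.get("Cookie") ?? "";
    if (url.pathname.startsWith("/app") && !cookies.includes("session=")) {
      return Response.redirect(`${url.origin}/login`, 302);
    }

    // Rebuild the request with the cleaned URL and an extra header for the app downstream.
    const forwarded = new Request(url.toString(), request);
    forwarded.headers.set("X-Edge-Cleaned", "1");
    return fetch(forwarded);
  },
};
```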
Step Two: The Request Arrives at the Hetzner Box
The requests that do need the origin (authenticated app views, API calls, webhook processing) land on a VPS in Helsinki. As I wrote in Part 2, I run a small fleet of Hetzner servers, each sized appropriately for what it hosts. Nothing overprovisioned. Nothing that costs me money while it idles.
The app itself is a Node process (sometimes Go, depending on the project) running behind Caddy, which handles the internal SSL and proxying. Caddy is quiet and reliable in a way that makes me irrationally fond of it. It just works and never asks for anything.
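The Caddyfile for this kind of setup is almost embarrassingly short; a sketch, with the hostname and port as placeholders:

```
# Sketch of a minimal Caddyfile: Caddy terminates TLS for the origin hostname
# and proxies everything to the local app process. Hostname and port are placeholders.
app.example.com {
    reverse_proxy localhost:3000
}
```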
When an authenticated request comes in, the first thing the app does is check the session. This is where Supabase steps in. As I laid out in Part 5, I offload auth entirely; I don’t store passwords, I don’t manage tokens, I don’t write session logic. The app validates the JWT that Supabase issued, confirms the user is who they claim to be, and pulls their user ID. That ID is the thread that connects everything else: their row in the database, their Stripe customer record, their subscription state.
If the JWT is expired or malformed, the app sends a 401 and that’s the end of that request’s story. The Worker upstream would normally catch unauthenticated users before they get here, but defense in depth means the app checks too. Never trust a single gate.
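A simplified sketch of that check, assuming an Express-style app and the supabase-js client; the middleware name and response shapes are illustrative, not lifted from my codebase:

```ts
// Sketch of the session check: validate the Supabase-issued JWT and pull the
// user ID that ties together the database row, the Stripe customer, and the
// subscription state. Assumes Express and @supabase/supabase-js v2.
import { createClient } from "@supabase/supabase-js";
import type { Request, Response, NextFunction } from "express";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

export async function requireUser(req: Request, res: Response, next: NextFunction) {
  const token = (req.headers.authorization ?? "").replace("Bearer ", "");
  if (!token) return res.status(401).json({ error: "missing token" });

  // Ask Supabase to verify the JWT; expired or malformed tokens come back as errors.
  const { data, error } = await supabase.auth.getUser(token);
  if (error || !data.user) return res.status(401).json({ error: "invalid session" });

  // The Supabase user ID is the thread that connects everything downstream.
  res.locals.userId = data.user.id;
  next();
}
```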
Step Three: The Money Part
Let’s say this particular request is a user upgrading their plan, going from a free tier to a paid one. They’ve clicked the button, selected a plan, and hit Confirm.
The app calls Stripe. As I described in Part 4, I’ve got Stripe wired up for both one-time purchases and recurring subscriptions, and this flow is the one I’ve rebuilt the most times as I got smarter about it. The app creates or retrieves a Stripe Customer (keyed off the Supabase user ID, stored in my database), initiates a subscription with the right price ID, and handles the response.
If everything goes well, Stripe sends back a success and a webhook fires asynchronously to confirm the subscription is active. My app updates the user’s record in the database: subscription tier, billing period, renewal date. The user sees a success state.
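In heavily simplified form, the shape of that flow looks something like this; the db helpers and the price ID are placeholders, and payment-method collection is omitted entirely:

```ts
// Simplified sketch of the upgrade flow: reuse or create a Stripe Customer
// keyed off the Supabase user ID, then start the subscription. The webhook
// confirms activation asynchronously. Placeholder helpers, no error handling.
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Placeholder persistence helpers; in reality these hit the app's database.
declare const db: {
  getStripeCustomerId(userId: string): Promise<string | null>;
  saveStripeCustomerId(userId: string, customerId: string): Promise<void>;
};

async function upgradeUser(userId: string, email: string, priceId: string) {
  // Look up a previously stored Stripe customer ID for this Supabase user.
  let customerId = await db.getStripeCustomerId(userId);

  if (!customerId) {
    const customer = await stripe.customers.create({
      email,
      metadata: { supabase_user_id: userId },
    });
    customerId = customer.id;
    await db.saveStripeCustomerId(userId, customerId);
  }

  // Start the subscription with the right price ID; the webhook does the rest.
  return stripe.subscriptions.create({
    customer: customerId,
    items: [{ price: priceId }],
    payment_behavior: "default_incomplete",
    expand: ["latest_invoice.payment_intent"],
  });
}
```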
If the card declines, and cards decline more often than you’d expect, the app handles it gracefully: the webhook route catches the failure event, and the user gets a friendly prompt to update their payment method rather than a cryptic error or, worse, silent access that shouldn’t be there.
The webhook handling is where I spent the most time getting things right. It’s the least visible part of the stack and the most important. A webhook that fails silently is a revenue leak. I’ve got retry logic, idempotency checks, and logging that tells me exactly what Stripe told me and when. Claude Code (Part 1) wrote the first draft of most of that logic and I’ve refined it through real production incidents.
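A stripped-down sketch of that handler, with placeholder helpers standing in for the real persistence and sync logic: verify the signature, dedupe on the event ID so Stripe’s retries stay idempotent, and log exactly what Stripe said.

```ts
// Webhook handler sketch: signature verification, idempotency on event ID,
// and structured logging of what Stripe sent. Route path and helpers are illustrative.
import express from "express";
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const app = express();

// Placeholder helpers; in reality these hit the database and sync logic.
declare const db: {
  hasProcessedEvent(id: string): Promise<boolean>;
  markEventProcessed(id: string): Promise<void>;
};
declare function syncSubscription(sub: Stripe.Subscription): Promise<void>;
declare function flagPaymentProblem(invoice: Stripe.Invoice): Promise<void>;

// Stripe needs the raw request body to verify the signature.
app.post("/webhooks/stripe", express.raw({ type: "application/json" }), async (req, res) => {
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers["stripe-signature"] as string,
      process.env.STRIPE_WEBHOOK_SECRET!
    );
  } catch {
    return res.status(400).send("bad signature");
  }

  // Idempotency: if this event ID was already processed, acknowledge and stop.
  if (await db.hasProcessedEvent(event.id)) return res.sendStatus(200);

  // Log exactly what Stripe told us and when.
  console.log(JSON.stringify({ eventId: event.id, type: event.type, at: new Date().toISOString() }));

  switch (event.type) {
    case "customer.subscription.updated":
    case "customer.subscription.deleted":
      await syncSubscription(event.data.object as Stripe.Subscription);
      break;
    case "invoice.payment_failed":
      await flagPaymentProblem(event.data.object as Stripe.Invoice);
      break;
  }

  await db.markEventProcessed(event.id);
  res.sendStatus(200);
});
```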
Step Four: The Response Heads Home
The app sends back a response. Caddy forwards it. Cloudflare applies any response-side rules (cache headers, security headers, compression) and ships it back to the user’s browser. The whole round trip, under normal conditions, is measured in hundreds of milliseconds.
The latency of hosting in Helsinki is real but rarely matters in practice. For a SaaS tool where users are interacting deliberately, filling out a form, clicking a button, the difference between 80ms and 180ms is imperceptible. Where it would matter is at scale, for real-time applications, or for a user base concentrated far from northern Europe. That’s a constraint worth knowing. But for what I build and who uses it, Helsinki is fine.
What I’ve gained in exchange for that latency is price, reliability, and servers I actually trust. No surprise bills. No credits that expire. No services deprecated under my feet. Hetzner is boring in the best way.
What I’d Do Differently Starting From Scratch
If I were building this stack from zero today, knowing what I know now, here’s what I’d change:
- I’d set up observability earlier. I flew blind for too long; basic logging isn’t enough. Structured logs, error tracking, and uptime monitoring should be day-one infrastructure, not something you bolt on after your first 3am incident. (There’s a minimal sketch of what I mean by structured logs after this list.)
- I’d invest in local development parity sooner. Getting the local environment to mirror production (same environment variables, same auth flow, mocked Stripe webhooks) saves enormous amounts of debugging time. I cobbled this together slowly. It should be the first thing.
- I’d lean on Claude Code for documentation as heavily as code. I used it primarily as a code generator early on. Now I use it to explain things back to me, to draft READMEs, to help me understand what I actually built before I forget. That use case snuck up on me and became just as valuable.
- I’d trust the boring choices faster. I spent time evaluating alternatives to almost everything in this stack. Most of the alternatives were fine. But this stack, Cloudflare in front, Hetzner underneath, Stripe for money, Supabase for auth, Claude Code keeping me from writing my worst code, works. It’s composable, it’s affordable, and every piece of it has documentation I can read and a support surface I can reach.
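On the structured-logs point from the first bullet, here’s the kind of minimal setup I mean; pino is one reasonable pick rather than a prescription, and the fields are illustrative:

```ts
// Minimal structured-logging sketch using pino. Every log line is a JSON
// object you can search, filter, and alert on, instead of free-form strings.
import pino from "pino";

const logger = pino({ level: process.env.LOG_LEVEL ?? "info" });

// Illustrative fields: route, IDs, timing, and an error code you can query later.
logger.info({ route: "/webhooks/stripe", eventId: "evt_123", ms: 42 }, "webhook processed");
logger.error({ route: "/api/upgrade", userId: "user_abc", err: "card_declined" }, "upgrade failed");
```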
Running a production-grade stack solo is less lonely than it sounds, mostly because these providers have done the hard parts I had no business doing myself. I bring the ideas and the integration logic. They bring the hardened, battle-tested infrastructure. It’s a reasonable division of labor.
That’s the whole ranch, six posts, five providers, one self-taught operator trying to run tight systems without losing his mind or his margins. Coming up next, I’ll be stepping back even further with a new series on the operational habits that hold this all together: how I think about monitoring, incident response, deploy hygiene, and what a one-person on-call rotation actually looks like when you’re also the one who built the thing and the one who sells it. The code is only part of the job.