The Docker container came up clean. All green. Health checks passing. I typed the URL into the browser and got exactly what I’d coded, which was a form that asked for things nobody would ever need to enter, flowing into a database schema I’d designed before I actually understood what I was building. The software worked. It was just wrong.
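For what it's worth, "all green" in that setup meant something like this. This is a hypothetical Compose fragment, not the actual project's config, but it's representative of what a passing health check usually tests:

```yaml
# Hypothetical docker-compose.yml fragment. A check like this confirms
# the process answers HTTP on a /health endpoint; it says nothing about
# whether the form or the schema behind it make any sense.
services:
  app:
    image: my-app:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

The check passes as long as that endpoint returns a success status. "Healthy," in Docker's vocabulary, means "responding," which is a much lower bar than "useful."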
That’s the thing about software that doesn’t get said enough: it can be completely functional and completely useless at the same time. And in my experience, that’s where most projects actually live: not crashed, not broken, just quietly solving the wrong problem with a lot of confidence.
I’ve been writing code in some capacity for years. Not well, mind you. I’m not going to pretend I sit down and architect elegant systems with clean separation of concerns and unit tests that would make a senior dev weep with joy. I hack things together. I use Claude to explain why my logic is inside-out. I debug by reading error messages out loud to myself like that helps. But I’ve built things that real people can use, and I’ve abandoned more things than I’ve shipped, and the pattern I keep seeing is this: the software graveyard isn’t full of projects that blew up. It’s full of projects that quietly stopped mattering.
The industry talks about software failures like they’re dramatic events. A startup runs out of money. A product launch gets buried by the algorithm. A company gets disrupted. Those things happen. But the quieter failure, the one I’ve lived through on personal projects and watched happen at the enterprise level in nearly three decades of IT work, is the software that survives but never really delivers. It just keeps running, accumulating features nobody asked for, until the people maintaining it can’t remember what it was trying to do in the first place.
There’s a version of this in every organization. Some application that’s been running since the early 2000s, that four different people have touched, that does something important enough that nobody wants to risk changing it, but whose original purpose nobody could explain without reading a document nobody can find. It works. It just doesn’t serve anybody particularly well. The org has arranged itself around its limitations for so long that the limitations look like features.
I forget simple things constantly: names, where I put my coffee, whether I already restarted a service or just thought about restarting it. But I remember things like the exact configuration that broke a distribution list migration in 2019, or the specific version of a PHP library that introduced a session handling bug that cost me a weekend. My brain files real problems in permanent storage. The stuff that doesn’t matter floats off. Which means when I look at software that’s still running but collecting dust, I know it in my gut, because nothing about it is sticky. Nothing went wrong. Nothing went memorably right either.
The unpopular opinion I’ve landed on after building enough half-finished things is that we measure software success wrong. We measure uptime. We measure deployments. We measure features shipped, tickets closed, sprint velocity. What we don’t measure, not honestly, is whether the thing actually changed what someone could do. Whether it removed friction that mattered. Whether, if you pulled the plug tomorrow, anybody would feel it.
Most software, if you pulled the plug, people would adjust in a week.
That’s not a knock on developers. It’s a knock on how software gets conceived. It starts with a feature request, or a problem statement that’s actually a solution statement in disguise, or someone who saw a competitor do something and decided “we need that too.” The code gets written, the thing gets shipped, the box gets checked. Nobody asks, six months later, whether the problem it was built for actually got smaller.
I’ve been guilty of this with my own projects. I’ve built tools that solved the problem of me wanting to build a tool. The actual problem, the one I told myself I was solving, turned out to be either smaller than I thought or better handled by something that already existed. The software worked. I just wasn’t honest about what I was actually building it for.
The metric most software needs isn’t uptime or velocity. It’s whether the person it was built for can explain what’s different about their day because of it. If they can’t, all the green health checks in the world don’t add up to much.