The RIAA didn’t panic when drum machines replaced session drummers. They didn’t file lawsuits when Pro Tools made it possible to record a professional album in a spare bedroom. They adapted, extracted their cut, and moved on. So when the same organizations that spent thirty years monetizing Napster’s corpse suddenly find religion about protecting artists, something else is going on.
The panic is real. The reasoning is mostly theater.
1.
Here’s what the industry is actually scared of: not that AI will make bad music. Bad music has never slowed the business down. They’re scared that AI removes the tollbooth. For a century, the major label structure existed because making, distributing, and marketing music required capital. You needed a recording budget, a pressing plant, a distribution network, and a radio promotion deal. The label owned all of that. The artist needed the label. That relationship was the whole game.
AI doesn’t just lower the production cost. It removes the cost entirely in some cases. A guy in Gray, Georgia, sitting in a home office at ten o’clock at night, can now produce a Southern rock track with real emotional texture and actual sonic weight. No studio. No session players. No A&R person deciding whether his sound is commercially viable. That terrifies the labels not because the music is good or bad, but because the label was never in the music business. They were in the gatekeeping business.
2.
The copyright argument is where things get genuinely complicated, and I’ll give credit where it’s due: the legal questions are not stupid ones.
When a model trains on a specific artist’s catalog, absorbs the way they phrase a hook and structure a verse, absorbs the tonal signature of their production, and then produces something that sounds like a stylistic descendant of that catalog without any license agreement, that’s a real problem. Not a theoretical one. Asking whether that constitutes infringement isn’t hysteria, it’s a legitimate question that courts are going to be wrestling with for a decade.
But watch how the industry uses the legitimate argument as cover for the self-serving one. The lawsuits aren’t really about protecting the artist whose sound got absorbed. They’re about protecting the catalog asset. There’s a difference. The artist whose 1987 album trained a model probably isn’t seeing meaningful royalties from that catalog anyway. The label that owns the master recording is. The copyright argument and the artist protection argument get conflated deliberately, because one sounds noble and the other sounds like what it is.
3.
My curiosity tends to spiral, and I’ve spent a lot of late nights going down rabbit holes about where the actual creative labor lives in AI-generated music. Here’s what I’ve landed on, for now: the model is not the songwriter. It’s the instrument.
This sounds like a convenient argument from someone who makes AI music, and I’m aware of that. But think about it from a craft perspective. The prompt is a compositional decision. The style reference is an arrangement decision. The choices about what to keep, what to rebuild, what to reject, those are editorial decisions that require taste. Taste requires knowing what good sounds like. That knowledge doesn’t come free. I’ve put serious time into understanding Southern rock structure, the way classic country uses space, how funk rhythm sections breathe. That knowledge shapes every prompt I write.
The model handles execution. I still have to know what I’m asking for. Strip that out of the equation and you get the slop that makes everyone think AI music is a novelty. It isn’t, when someone who knows music is driving it.
4.
Where the industry’s panic is genuinely, completely wrong is in the assumption that AI music will devalue listening.
More music does not mean music matters less. People read more books after the paperback was invented. People watched more movies after home video made watching cheap. Abundance doesn’t kill the appetite for quality, it raises the floor and makes the ceiling more obvious. The truly great stuff gets more valuable, not less, when the average gets raised.
What dies is the mid-level professional, and that’s a real human cost that deserves more honest acknowledgment than it gets. The session musician who made a solid living doing one-off recording work. The jingle composer. The person who made music that was fine, functional, and licensed for a fee. AI doesn’t replace the artist. It replaces the serviceable. That’s not nothing. Those were real jobs that paid real rent.
5.
The artists who are genuinely irreplaceable already know they’re irreplaceable, and they’re not the ones panicking. The performers who build a relationship with an audience, who have a story and a presence and a reason people want to see them in a room, they’re fine. They were always selling something that had nothing to do with the recorded product anyway. The recorded product was always just an advertisement for the live experience.
What’s collapsing is the recorded product as a revenue stream, and that collapse started with Napster and never really stopped. AI music didn’t create that problem. It arrived at the funeral.
6.
The honest state of things right now: AI music is real, it’s getting better faster than most people want to admit, and it is not going away because the RIAA files another lawsuit. The legal framework will get sorted eventually, probably badly, in ways that protect corporate interests more than artist interests, because that’s how legal frameworks get sorted. Some kind of licensing structure will emerge for training data. Some artists will participate voluntarily, some won’t, and the market will figure out which argument the audience actually cares about.
Meanwhile, the music will keep getting made. By hobbyists, by professionals, by people like me who have things to say musically and never had the resources to say them before. The industry spent decades deciding who got to make records. That era is done. The question now isn’t whether AI music belongs, it’s what you do with it once you accept that it does.
The panic was never really about the music. It was about who controls the door. The door is open now, and that’s the part they can’t figure out how to monetize.
I make AI music strictly for myself to listen to. If someone else wants to listen, fine, but I’m not slapping all this stuff onto streaming platforms, that’s not me. I don’t want to make money off of this. I want relatable songs in the genre of my choosing about subjects I like. That’s it.