AI Changed How We Build. Not What Makes Software Work.
Every decade or so, something comes along that changes how software gets built. And every time, the same conversation follows.
"The old rules don't apply anymore."
We've heard this before. Client-server was going to make database expertise irrelevant. The web was going to kill desktop development. Cloud was going to mean nobody needed to understand infrastructure. Each time, the conversation followed the same arc: excitement about what's new, premature obituaries for what came before, and a quieter correction when the fundamentals reasserted themselves.
AI is the latest. And the conversation is louder this time.
What's actually different
Let's be honest about the shift, because the rest of this post has no credibility without it.
AI changed real things about building software:
- Who can participate. Domain experts, founders, and operators who never would have touched code two years ago are now building working software. This is genuinely transformative.
- The speed of the first draft. Idea to working prototype in hours, not months. The economics of getting to "it works on my laptop" are fundamentally different.
- The volume of code being produced. More code, faster, than at any point in the history of this industry.
- The abstraction level. People are describing intent in natural language rather than specifying implementation in syntax.
These are structural changes. Anyone who dismisses them isn't paying attention.
But here's what's worth noticing. Every one of those changes is about how software gets built. None of them are about what makes software work once it's running.
That distinction matters more than most people realize.
The six things that haven't changed
Software has properties that don't care how it was written. They're not artifacts of a particular era or toolchain. They're properties of complex systems. And AI doesn't just leave them intact. It makes most of them matter more.
Architecture matters. AI makes it matter more. A rocket ship 1% off on its bearing barely notices at launch. A million miles later, it misses by thousands of miles. AI lets you build that million miles of code in a weekend. Bad architecture at AI speed means you're further off course, faster, with more code to untangle when you realize it. Separation of concerns, clear boundaries, intentional data flow. These were always important. At AI velocity, they're existential.
Security is a discipline, not a feature. AI generates code that works. It doesn't generate code that's safe. And it does it fast, so vulnerabilities accumulate at a pace no manual review can match. Authentication, authorization, input validation, secrets management. Someone has to think about these deliberately, because the AI won't. Peter McKay, CEO of Snyk, recently put it directly: AI security debt "compounds at machine speed." He's right.
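To make that concrete, here's a minimal sketch of the kind of gap that slips through review at machine speed. The schema and function names are hypothetical, but the pattern is the classic one: generated code that interpolates user input straight into a query "works" in the demo, and also works for an attacker. The safe version parameterizes the input.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # "Works": returns the row for a normal username.
    # Also works for an attacker: username = "x' OR '1'='1"
    # turns the WHERE clause into a tautology and returns every row.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as a literal
    # value, so the injection string is just a non-matching username.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both functions pass the happy-path test a generator would write for itself. Only a human asking "what if the input is hostile?" catches the difference.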
Testing is how you know it works. Especially when you didn't write it. When a human writes code, they carry a mental model of what it does and why. AI-generated code arrives without that model. You can't hold the whole system in your head when the system was built in hours. Automated testing isn't optional anymore. It's the only way to have confidence in code you didn't write line by line.
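One practical shape for this is the characterization test: you pin down what generated code actually does, edge cases included, instead of trusting a mental model you never had. The `slugify` function below is a stand-in for any function an assistant handed you.

```python
import re
import unittest

def slugify(title):
    # Suppose this arrived from a code assistant. It looks plausible.
    # The tests below are how you find out what it really does.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_edge_cases(self):
        # The cases a human author would have held in their head:
        self.assertEqual(slugify(""), "")
        self.assertEqual(slugify("---"), "")
        # Surprise: non-ASCII letters are silently dropped.
        self.assertEqual(slugify("Déjà vu"), "d-j-vu")
```

That last assertion is the point. The code "works," but whether dropping accented characters is acceptable is a decision, and the test is where the decision becomes visible.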
Maintainability is a choice you make early. AI generates thousands of lines without considering whether anyone will need to change them later. It doesn't refactor toward clarity. It doesn't name things for the next developer. It optimizes for "works now." The result: codebases that are easy to create and nearly impossible to maintain. The house goes up fast, but the foundation wasn't designed to hold another story.
Operations don't end at deploy. AI can build your app. It won't set up monitoring. It won't design error handling for the failure modes that only surface under real traffic. It won't page you at 2am when the database connection pool is exhausted. Software is a living system. Shipping is the beginning, not the end. AI tools treat it as the finish line.
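A small example of what that operational layer looks like, sketched under assumptions (the service name and retry policy here are illustrative): timeouts, bounded retries, and a log line monitoring can actually alert on. This is exactly the code a scaffold rarely writes for you.

```python
import logging
import time

log = logging.getLogger("payments")  # hypothetical service name

def call_with_retry(fn, retries=3, backoff=0.1):
    # Retry a flaky downstream call with exponential backoff,
    # logging each failure so it shows up in monitoring instead
    # of surfacing for the first time under real traffic.
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise
            time.sleep(backoff * 2 ** (attempt - 1))
```

Ten lines, and none of them are features. That's the category of work that doesn't end at deploy.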
Requirements still need to be understood. AI builds what you describe. It can't know what you should have described. It won't push back on a bad idea. It won't ask "what happens when two users do this at the same time?" Product thinking (understanding users, defining scope, making tradeoffs) is still the hardest and most important part. It starts with knowing what to build and why. AI made the building faster. It didn't make the thinking faster.
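That "two users at the same time" question is worth making concrete. A minimal sketch (the `Account` class is a hypothetical illustration): a check-then-act sequence that is perfectly correct for one user becomes a race for two, and nothing in the generated code flags it.

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_racy(self, amount):
        # Correct for one caller. With two concurrent callers, both
        # can pass the check before either subtracts, overdrawing
        # the account.
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

    def withdraw_safe(self, amount):
        # Holding a lock makes the check and the update atomic,
        # so at most one of two simultaneous withdrawals succeeds.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False
```

The fix is one lock. Knowing you need it requires someone to ask the question the AI won't.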
The gap that keeps repeating
Every paradigm shift in computing has produced the same gap: the distance between "I can build something" and "I understand why it works."
GUIs made it easy to build databases with Microsoft Access. A generation of fragile, unmaintainable business applications followed. Content management systems made it easy to publish websites. A generation of insecure WordPress installations followed.
The pattern is the same every time. Lower the barrier, expand who participates (this is genuinely good), and watch the gap between capability and understanding widen (this is predictable). The gap isn't a failure of the new tools. It's a natural consequence of making powerful things accessible before the knowledge catches up.
AI is producing the same gap. At larger scale. At higher speed. With considerably more at stake. If you've worked with AI coding tools, you've probably already felt where this leads.
What experience actually gives you
It's not resistance to new tools. Not nostalgia for how things used to be done.
It's pattern recognition.
The ability to look at a codebase and see the failure modes before they manifest. The judgment to know when to move fast and when to slow down. The vocabulary to name problems before they become crises. The instinct to ask "what happens at scale?" before scale arrives.
This is what years of building software actually produces. Not the ability to write code faster (AI handles that now). The ability to see around corners. To know which shortcuts compound into debt and which ones are genuinely free. To recognize the difference between "it works" and "it's going to keep working."
That's what's missing from the "anyone can build" narrative. Not because the narrative is wrong. Anyone can build. But building and building well have always been different skills. AI widened that gap. It didn't close it.
The nail gun
We're not skeptics. We work with AI agents every day. Aggressively. We build faster now than we ever have.
But we use them the way an experienced carpenter uses a nail gun. Faster, yes. But with the same understanding of load-bearing walls that a hammer required.
The tools changed. The physics didn't.
Follow the thinking.
We write when we learn something worth sharing. No schedule, no spam.