The Closest to the Problem Should Be Closest to the Build

A financial advisor we worked with had spent fifteen years watching clients make the same mistake. They'd get a lump sum (an inheritance, a home sale, a bonus) and freeze. The money would sit in a checking account for months because every financial planning tool assumed the person already knew what they wanted to do. None of them started where the actual person was: overwhelmed, uncertain, and needing to see options before they could think clearly.
She didn't describe this problem in a requirements document. She built the first version of the solution herself. An AI-generated prototype that walked a user through three scenarios based on their actual situation, not a generic risk questionnaire. It wasn't production-ready. But the core insight (start with the person's paralysis, not their portfolio) was something no outside developer would have designed. Not because developers aren't smart. Because they haven't sat across from a thousand clients watching the same hesitation.
That's the argument in one example. The closest to the problem should be closest to the build. And for the first time in the history of software, they can be.
The telephone game
Traditional software development is a game of telephone played with expertise.
The domain expert knows the problem. They describe it to a business analyst. The analyst translates it into requirements. The requirements go to a project manager, who prioritizes them. The developer reads the prioritized requirements and writes code. A QA engineer tests the code against the requirements. The product ships.
At every step, something is lost. Not through negligence. Through the fundamental limitation of translation. The financial advisor's insight about paralysis becomes "user should see multiple scenarios." Which becomes a dropdown with three preconfigured options. Which technically satisfies the requirement but misses the entire point.
The developer built exactly what was specified. The specification was a pale shadow of the expertise that generated it.
This isn't a people problem. It's a structural one. The architecture of traditional software development places layers of translation between the person who understands the problem and the artifact that's supposed to solve it. Each layer is staffed by competent, well-intentioned professionals. Each layer still degrades the signal.
The knowledge that can't be written down
Here's what makes this worse than it sounds. The most valuable knowledge domain experts carry isn't the kind that transfers well to documents.
Michael Polanyi called it tacit knowledge: the things you know but can't fully articulate. A sommelier can taste a wine and tell you the region, the vintage, the soil type. Ask them to write down how, and the explanation will be incomplete. Not because they're withholding. Because the knowledge lives in the doing, not in the describing.
Domain experts are full of tacit knowledge. The financial advisor doesn't just know that clients freeze with lump sums. She knows the shape of the hesitation. How it shows up differently in someone who inherited money (guilt) versus someone who sold a business (identity loss). She knows which questions unlock forward motion and which ones make the paralysis worse. She knows this because she's lived it a thousand times.
None of that fits in a requirements document. It barely fits in a conversation. It shows up when she's looking at working software and says "no, that's wrong" in a way she couldn't have predicted until she saw it.
Polanyi's summary holds: we can know more than we can tell.
This is why proximity matters. Not because domain experts are better people. Because the knowledge they carry is the kind that only transfers through presence. Through being in the room when the thing takes shape. Through pointing at a screen and saying "that's not how it works" and watching the change happen in real time.
The gap AI closes
For most of software history, there was a legitimate reason to keep domain experts away from the build. They couldn't participate. The build required writing code, and code required years of specialized training. The layers of translation weren't ideal, but they were necessary. You couldn't put the financial advisor at the keyboard and expect production software.
AI changed the equation. Not by making domain experts into developers. By making the build accessible to people who understand the problem.
When a logistics director can describe a routing constraint in plain language and watch it become working logic in minutes, the translation layers become optional. When a compliance officer can point at a workflow and say "that step needs to happen before the approval, not after" and see it change immediately, the distance between knowing and building collapses.
The domain expert doesn't need to understand the code. They need to be close enough to the build to apply their knowledge directly. To catch the things that no specification could capture. To contribute the tacit knowledge that only shows up when they're looking at the work in progress.
This isn't about eliminating developers or product professionals. Their expertise is genuinely valuable. It's about removing the structural separation that kept the most important knowledge (the deep understanding of the problem) from reaching the place where it matters most: the moment the solution takes shape.
What changes
When the person closest to the problem is closest to the build, two things happen that the old model couldn't produce.
First, the product reflects knowledge that was previously inaccessible. The routing tool works the way routes actually work, not the way a developer imagined they might. The financial planning flow starts where real people actually are, not where a persona document said they'd be. The compliance workflow handles the edge cases that only someone who's lived through an audit would know to anticipate. The product is better not because the technology is better, but because the right knowledge is in the room.
Second, the feedback loop compresses from weeks to minutes. Instead of building for two months and discovering the approach was wrong in a review meeting, you discover it in the first hour. Not because you asked the right questions in advance. Because the expert was there when the answer became visible.
The hardest part of building software was never writing code. It was knowing what to build and why. AI made code cheaper. It didn't change what makes a product good.
The closest to the problem should be closest to the build. That's always been true. What's new is that it's finally possible.