It Was Never a Technology Question
I said "stop talking about magic."
That was my response to a business partner who asked whether AI could analyze a website visually — look at the design, understand the brand, suggest changes. She wanted to connect an AI tool to the site and experiment with its visual expression.
I explained that the tools she was talking about deal with database access, not visual analysis. That what she was describing was science fiction.
I was wrong. Not entirely — but wrong enough to matter. And the interesting thing isn't that I was wrong about the technology. The interesting thing is what my mistake was hiding.
What I Got Wrong
MCP, the Model Context Protocol, which I had implemented in the project, can do more than database access. It's a protocol for how AI models connect to external systems. Database access is one implementation. But web pages can be scraped, images analyzed, CSS read and modified. Not through magic — through building the right integration.
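To make that concrete: a server that exposes a site's markup and styling as tools is not much code. The sketch below assumes the MCP Python SDK's FastMCP interface; the server name, the tool names, and the absence of error handling are all illustrative, not anything from our project.

```python
# Minimal sketch of an MCP server that exposes a site's markup and styling
# to a model. Assumes the MCP Python SDK's FastMCP interface; the server
# name and tool names are illustrative, and error handling is omitted.
from urllib.request import urlopen

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("site-inspector")


@mcp.tool()
def fetch_page(url: str) -> str:
    """Return the raw HTML of a page so the model can see its structure."""
    with urlopen(url) as response:
        return response.read().decode("utf-8")


@mcp.tool()
def fetch_stylesheet(url: str) -> str:
    """Return the raw CSS at a URL so the model can read it and propose edits."""
    with urlopen(url) as response:
        return response.read().decode("utf-8")


if __name__ == "__main__":
    # Runs over stdio by default; an MCP-capable client connects these tools
    # to whatever model the user is talking to.
    mcp.run()
```

The point is not this particular code; it's that the connection she was imagining is ordinary plumbing, not magic.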
My definition was too narrow. I had implemented MCP for a specific purpose and generalized my implementation to the entire technology. It's like saying a hammer can only drive nails because that's the only thing I've used it for.
Her instinct was closer to reality than my explanation. The direction was right. And I dismissed it — not with curiosity, not with "let me check," but with "stop talking about magic." That was defensive. And it was wrong.
But I Was Also Right. Partly.
What she wanted to do — experiment with the visual expression — requires more than the right tools. It requires someone who knows what good looks like.
Technically, we can connect AI to the site's CSS, feed in screenshots, let the model suggest changes. That works.
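In practice that loop can be as small as the sketch below: a rough illustration, assuming a vision-capable model behind Anthropic's Python SDK, where the model id, the file paths, and the prompt are placeholders, and capturing the screenshot or applying the suggestions is left out entirely.

```python
# Rough sketch: give a vision-capable model a screenshot plus the current CSS
# and ask for suggested changes. The model id, file paths, and prompt are
# placeholders; taking the screenshot and applying the edits are left out.
import base64
from pathlib import Path

import anthropic

screenshot_b64 = base64.standard_b64encode(
    Path("homepage.png").read_bytes()
).decode("ascii")
current_css = Path("styles/main.css").read_text(encoding="utf-8")

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": screenshot_b64,
                },
            },
            {
                "type": "text",
                "text": (
                    "Here is the site's current stylesheet:\n\n"
                    + current_css
                    + "\n\nSuggest specific CSS changes that would improve the visual design."
                ),
            },
        ],
    }],
)
print(response.content[0].text)  # the model's suggested changes
```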
But the question remains: who decides whether the change is actually better?
She's not a designer. I'm not a designer. We have no one on the project with training or deep knowledge in visual design, brand identity, or user experience. We have opinions. We have preferences.
That's not the same as knowing.
We were having a conversation about tools — about what the technology can and can't do — when the real question was something else entirely.
The Gap Nobody Sees
I've written about this before. In "You Can't Validate What You Don't Understand," I described how AI amplifies what you already know but doesn't replace what you're missing. That the tool is a catalyst that needs fuel.
That article had an unstated assumption: that someone in the room has the expertise. That there's a senior person, an expert, someone who can validate.
What happens when nobody in the room can do that?
That's our situation. She wants to change the visuals. I can build the tools. But neither of us can look at the result and say with confidence: this is better. This communicates the right thing.
We can have opinions. We can guess. But that's not validation — it's intuition without foundation. And AI makes that intuition more dangerous, not less, because it produces results that look convincing.
The Pattern
Two people arguing about tools when the real gap is a competency neither of them has.
It looks like a technical question: "Can AI do this?" The answer might be yes. But the underlying question — "Do we have the knowledge to use the result?" — never gets asked. It's invisible because nobody in the room can see it.
We each had our own blind spot. Hers was believing the tools would solve the problem. Mine was believing what she wanted wasn't possible. She overestimated the tool's ability to compensate for missing knowledge. I underestimated the tool's technical capability. Together, we almost had the full picture. But instead of putting it together, we argued about our respective halves.
The hardest gaps to see are the ones where nobody in the room has the expertise to even identify that there's a gap. "You can't validate what you don't understand" assumed that you know you don't understand. The truly dangerous situation is when you don't even know there's something to understand.
AI doesn't solve invisible gaps — it keeps them invisible longer.
The Correction
Instead of "stop talking about magic," I should have said: "What you're describing is possible. But I don't think the tool is the problem. I think the problem is that neither of us knows what we're aiming for."
That would have opened a conversation about what we actually need — maybe a designer, maybe a structured process for evaluating visual decisions. Instead, I shut the conversation down with a technical answer that wasn't even correct.
We need to talk about design. Not about tools, not about AI, not about protocols. About who has the competency to decide what the site should look like and feel like. And if the answer is "neither of us" — which I believe it is — then we need to find that competency. Not replace it with better technology.
The tool was never the problem. The competency was.
And I needed to be wrong to see it.