It doesn't start with the prompt — it starts with you
16 of 17 · March 2026

Everyone talks about the prompt. How you phrase it. Which template you use. What system messages you write. Prompt engineering has become an entire discipline — as if the secret to good results lay in how you talk to the machine.

It doesn't.

I have watched teams use LLM tools for eight months now. Not demos, not workshops — in production, every day, on real client projects. And the pattern that emerges has nothing to do with the prompt.

What actually makes the difference

Two developers. Same tools. Same codebase. Same task.

One produces a solution that holds together, handles edge cases, follows the project's architecture. The other produces something that looks right but falls apart at the first review.

The difference is not the prompt. The difference is what the person understood before the prompt was written.

The first developer understood the task. Not just what the requirement said — but why the customer needs it, what can go wrong, which dependencies exist, which context matters. That understanding shaped everything that was fed into the tool. The prompt was a consequence of understanding, not a substitute for it.

The second developer read the requirement, copied it in, and expected a result. The prompt was perfectly formulated. The understanding was missing.

Why the LLM perspective matters

When you call it AI, you think: the tool should understand. It should raise questions. It should compensate for what I miss.

When you call it what it is — a large language model, an LLM — you understand the mechanics. The model processes text. It predicts the next token based on what you gave it. The quality of what you put in determines the quality of what comes out.

This is not a metaphor. This is literally how it works.

The context window — the space where the model operates — is not a technical detail. It is the entire playing field. And you decide what goes in there. Your understanding of the problem. Your description of the customer's needs. Your follow-up questions. Your domain knowledge. All of that is input — and that input is you.

The listener beats the coder

Here is what surprised me most after eight months.

It is not the best coders who get the best results. It is those who understand the task best. And those are two entirely different things.

One person on the team is not a developer in the traditional sense. But he understands infrastructure, he understands what the customer needs, he can take a vague idea and make it concrete. With LLM tools, he takes on tasks he would never have said yes to before — and delivers. Not because the tool is magical, but because he knows what needs to be done. The tool helps him do it.

Another person with a stronger technical background gets worse results. Not because the tool fails — but because the understanding of the task is shallow. The requirement is read but not interpreted. The context is missing. And then it doesn't matter how well you prompt.

It is pure soft skills. Listening. Asking questions. Building a picture of what actually needs to be solved before you open the terminal.

That is why we say LLM

We stopped saying AI at Mindtastic. Not to be pedantic — but because the word shapes the behaviour.

Say AI and you expect the tool to think. You lean back. You expect it to fill in the gaps.

Say LLM and you understand that it is a language model. It processes what you give it. If you give it a vague description, you get a vague result. If you give it your deep understanding of the problem, your specific questions, your domain context — then you get something extraordinary.

Context window and transcription. Those are the two most important words. Not because they are complicated — but because they describe exactly what is happening. You transcribe your understanding into text. The text enters the context window. The model works with that.

You are the prompt.

What this means in practice

I saw it recently in a conversation with a consultant. She is migrating some twenty companies between two ERP systems. She is not a developer. She has no technical background in the traditional sense.

But she knows migration. She knows which questions must be asked. She understands the processes, the risks, the dependencies.

So I said: record the conversation with the client. Feed the transcription into an LLM. Ask Socratic questions. You will have your migration plan — without writing a single document from scratch.

She understood immediately. Because she understood the task. The tool was secondary.
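The workflow she used can be sketched in a few lines. This is an illustrative Python sketch, not a real API: the function name and the example transcript are invented, and the only assumption is the role/content message shape used by most chat-style LLM SDKs. The sketch makes the article's point concrete: everything the model will "know" is what the practitioner chose to put into the context window.

```python
# Hypothetical helper: assemble the context-window payload for a
# chat-style LLM call from a meeting transcript and the consultant's
# own domain questions. No network call is made here; the point is
# that the transcript and the questions ARE the prompt.

def build_migration_brief(transcript: str, socratic_questions: list[str]) -> list[dict]:
    """Build the message list that would be sent to an LLM.

    The model sees nothing but this payload: the recorded conversation
    plus the questions only someone who understands migration would
    think to ask.
    """
    question_block = "\n".join(f"- {q}" for q in socratic_questions)
    return [
        {
            "role": "system",
            "content": (
                "You are assisting with an ERP migration. Work only from "
                "the transcript provided; flag anything that is unclear."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Client conversation transcript:\n{transcript}\n\n"
                "Answer these questions, then draft a migration plan:\n"
                f"{question_block}"
            ),
        },
    ]

# Invented example input, standing in for a real recorded conversation.
messages = build_migration_brief(
    "We run invoicing in the old system; payroll is outsourced...",
    [
        "Which data must be migrated, and which can be archived?",
        "What are the cut-over dependencies between the two systems?",
    ],
)
```

Swap the model, swap the SDK: the structure stays the same. The quality of the plan that comes back depends entirely on the quality of the transcript and the questions, which is exactly the article's claim.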

It is the same regardless of whether you are a developer, consultant, project manager, or business analyst. If you understand what needs to be solved, an LLM can help you deliver it in ways that were impossible two years ago. If you don't understand it, no prompt in the world will help.

Not garbage in, garbage out — but you in, you out

There is an old saying: garbage in, garbage out. It is true, but it is too passive. It sounds as if the problem were merely bad material, something that happened to the input.

The truth is more specific: what comes out is a reflection of you. Your understanding. Your curiosity. Your ability to listen and ask questions before you type.

The prompt is never the starting point. Understanding is.

It doesn't start with the prompt. It starts with you.


Mindtastic on the terminology that shapes how you work — LLM, not AI: why it matters.

See also: You can't validate what you don't understand (series 6) and My job is the questions (series 13).