Context Beats Prompting. Every Time

A mediocre prompt with great context outperforms a brilliant prompt with none

A few weeks ago, I wrote about how investing in context, specifically through Claude Code, creates AI systems that compound over time. The response was brilliant, but a lot of people came back with the same question: "I don't use Claude Code. Does this apply to me?"

Short answer: yes. The principle I was describing isn't a Claude Code feature, per se. It's ‘context strategy’ in action. And it can work across every AI tool you use.

The gap between someone who uses AI casually and someone who gets consistently excellent output isn't about which model they use (with some exceptions) or how clever their prompts are. It's about context. The person getting 10x value has built a structured, maintained set of documents that give the AI deep understanding of who they are, what they're working on, and what good output looks like in their world.

Without that layer, every conversation starts from zero. You waste time re-explaining your role, your preferences, your constraints. The AI gives you generic output. You spend more time editing than you saved. And then you conclude that AI is overhyped.

Most people's version of "giving AI context" stops at custom instructions. A paragraph about your role, maybe your tone preferences. That's fine as a starting point, but it's a single instruction trying to cover everything about you, your work, and your current situation. It's the equivalent of handing a new hire a one-page bio and expecting them to be effective. What comes next?

I've built a framework around this. I call it the AI Context Stack (catchy, I know…), and it works across Claude, ChatGPT, Gemini, Claude Code, Cowork, and anything else. Here's how it works.

Context is King.

The Three Layers That Make AI Actually Useful

The AI Context Stack has three layers, each serving a different purpose. Let me explain.

Create once. Maintain it weekly.

Layer 1: Identity Files (who you are)

These are plain text documents (better referred to as ‘markdown’ when you talk to AI) that capture the basics about you and your work. Think of them as briefing a new colleague on day one: here's who I am, here's what I do, here's how we work, here's what matters.

At minimum, create one identity file per major domain. For me, that's a personal file (communication style, preferences, how I like to receive information), an organisation file (my role at my current client, team structure, stakeholders, internal terminology), and a business file (my Atomic Theory company positioning, services, newsletter tone).

The key is to be specific. "I value clear communication" adds nothing. "I want status updates that lead with the decision needed, not the background" changes the output dramatically. Include your actual communication preferences, your stakeholder map, and your quality standards. And of course, ask AI to help you write it.
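To make that concrete, here's a minimal sketch of what an identity file might look like. The headings and every detail are illustrative, not an extract from my actual files; adapt them to your own role.

```markdown
# Identity: Work

## Role
Principal consultant at [client]; lead the data platform workstream.

## Communication preferences
- Status updates lead with the decision needed, not the background.
- Keep emails under 150 words; bullets over paragraphs.

## Key stakeholders
- CFO: cost-focused, sceptical of AI claims.
- Head of Engineering: project sponsor, prefers technical detail.

## Quality bar
- UK English, plain language, no exclamation marks.
```

A dozen lines like this already beats a paragraph of custom instructions, because each section gives the AI something concrete to calibrate against.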

These files get saved locally (on your laptop, for example) as your master copies, then get distributed where you need them: uploaded to Claude Projects, ChatGPT Projects, or dropped into a Claude Code project folder. The local file is always the source of truth.

A sneak peek of my own personal identity file, this is a small extract, not the complete file

Similarly, a look at an extract from the business file I maintain

Layer 2: State of Play (where you are right now)

This is the layer most people are missing entirely, and it's the highest-leverage one.

In fairness, and for full transparency, I’ve only recently started doing this religiously.

Your identity files tell AI who you are in general. Your state of play tells it what's actually happening this week: current priorities, what's blocked, pending decisions, stakeholder dynamics, key meetings coming up. Without it, your AI has a general understanding of you but no idea what you're dealing with today.

Create one state of play document per domain, matching your identity files. Update them weekly. The most effective approach: open a conversation with your primary AI at the end of each week and ask it to draft an updated state of play based on your recent conversations. Review it, correct anything wrong, add what's missing, save the file.
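As a reference point, a state of play document can be as short as this. The contents here are made up purely for illustration:

```markdown
# State of Play: Work — week of 10 June

## Priorities
1. Finish the migration plan for stakeholder review on Thursday.

## Blocked
- Vendor contract awaiting legal sign-off.

## Pending decisions
- Build vs buy for the reporting layer (leaning buy; CFO unconvinced).

## Upcoming
- Steering committee, Wednesday 14:00.
```

One page is plenty. The point is recency, not completeness.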

This weekly conversation also feeds your AI's memory system, giving it ambient awareness even in casual chats where you haven't pasted the full document. But for high-stakes work (drafting a stakeholder email, preparing for a difficult meeting, writing a proposal), paste the full state of play. Memory is ambient awareness. Pasted context is a complete briefing. Use both.

Layer 3: Working Materials (the raw inputs)

These are the reference documents, examples, and templates that feed specific tasks. Build this collection progressively.

The highest-value materials I've found:

Meeting transcripts from important discussions. Rotate out anything older than a month once key decisions are captured elsewhere.

Decision logs that capture not just what you decided but why, so the AI doesn't suggest the approach you already ruled out.

Writing samples of your actual voice, because AI calibrates to examples far better than instructions. (For example, I save every issue of this newsletter locally in markdown so my archive can be easily queried by an LLM.)

Failure files: when AI produces something you hate, save the prompt and the bad output with a note on what went wrong. Negative examples are powerful.
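A failure file needs nothing fancy. One hypothetical entry might look like this (the scenario is invented for illustration):

```markdown
# Failure: stakeholder email, 3 June

## Prompt
"Draft an email to the CFO about the migration delay."

## What went wrong
The draft buried the ask in paragraph four and used a breezy tone.

## Lesson
Paste the state of play first; specify "decision needed in line one".
```

Next time you draft a similar email, include the entry and the AI avoids the same trap.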

The 15-Minute Weekly Habit

Context maintenance is like tending a garden. Neglect it and everything goes stale. A bit like my back garden.

But it doesn't require hours. Fifteen minutes weekly keeps everything current.

Minutes 1-5: Generate updated state of play docs. Ask your AI to draft them based on recent conversations. Review and correct.

Minutes 6-8: Save and distribute. Overwrite previous versions locally, upload to your projects in ChatGPT/Claude.

Minutes 9-12: Rotate working materials. Remove stale transcripts, add new ones. Drop in any strong emails you sent, decision log entries, or new failure files.

Minutes 13-15: Skim your identity files. Has anything changed? New stakeholder? Shift in strategy? Most weeks this is a quick "still accurate" check and nothing more.
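If you keep your working materials in plain folders, the transcript rotation in minutes 9-12 is easy to script. This is just a sketch under assumptions (the folder names are mine, invented for the example): it archives any markdown file that hasn't been touched in over a month, rather than deleting it, so nothing is lost if a decision wasn't captured elsewhere yet.

```python
from pathlib import Path
import shutil
import time

def rotate_transcripts(materials_dir: str, archive_dir: str,
                       max_age_days: int = 30) -> list:
    """Move transcripts older than max_age_days into an archive folder.

    Returns the sorted names of the files that were moved.
    """
    src = Path(materials_dir)
    dst = Path(archive_dir)
    dst.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86400  # seconds in a day
    moved = []
    for f in src.glob("*.md"):
        # Use last-modified time as a proxy for staleness.
        if f.stat().st_mtime < cutoff:
            shutil.move(str(f), dst / f.name)
            moved.append(f.name)
    return sorted(moved)
```

Run it once a week against your transcripts folder and minutes 9-12 shrink to a glance at what it moved.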

The Mistake That Wastes Everything

The most common failure I see: creating context but never updating it. Stale context is worse than no context because the AI will confidently reference outdated priorities. The weekly habit isn't optional. It's what makes the whole system compound.

Other mistakes worth avoiding: dumping everything into every tool (ChatGPT Projects degrade when overloaded, be selective), keeping the master copy inside the AI tool instead of locally on your laptop (this causes drift between tools), and relying solely on memory without pasting full context for important work.


If You Want Help Building This

I've been refining this system across my own work. The pattern is clear to me: if you build and maintain a context layer, you’ll get dramatically better output from the same AI tools everyone else is using.

I'm now offering one-to-one sessions for individuals who want to build their personal AI context system, and group workshops for companies through Atomic Theory Consulting. Whether you're a solo operator wanting to get more from your AI tools or a team trying to use AI more effectively, the starting point is the same: give AI the context it needs to actually help you.

If you're interested, reply to this email.

Your AI Context Stack: Start Here

  1. Create one identity file. Pick your primary work domain. Spend 20 minutes documenting your role, communication preferences, key stakeholders, and current priorities. Ask your AI to help, then save it as a markdown file (remember, that’s just a plain text file).

  2. Write your first state of play. What are you working on this week? What's blocked? What decisions are pending? One page, plain language.

  3. Upload both to your primary AI tool. Claude Project, ChatGPT Project, or Claude Code folder.

  4. Test it. Ask for something you'd normally request, like an email draft or meeting prep. Compare the output to what you'd get without context.

The difference in output, especially when your ask is more complex, won't be subtle. It will be substantially better.

A mediocre prompt with excellent context will outperform a brilliant prompt with no context, every time. Context isn't a nice-to-have. It's the single biggest determinant of whether AI is a genuine productivity multiplier or an expensive autocomplete.

Build the system. Maintain it. Everything else gets easier.

And if your AI still produces generic nonsense after all this, at least you'll know it's not a context problem. It might just be having a bad day. We all do.

So remember, context is king, and the weekly habit isn’t optional.

See you next week!

Faisal

P.S. Know someone else who’d benefit from this? Share this issue with them.

Received this from a friend? Subscribe below.

The Atomic Builder is written by Faisal Shariff and powered by Atomic Theory Consulting Ltd — helping organisations put AI transformation into practice.