My Over-Engineered Two-AI Method for Prompting Claude Code
Experimenting with niche meta-prompting and using psychology with AI
AI has given us an almost ridiculous ability to work across contexts we’d never touch otherwise. Front-end, back-end, databases, APIs, infrastructure (hell, I’m a video production guy and I’m building full-stack applications now). Multiple languages, frameworks, platforms. It’s theoretically amazing. It’s also hard as shit to keep up with all of it.
Reality check before we go further: This workflow is niche. It’s probably over-engineered. It’s definitely not for everyone. This is part of my learning process, experimenting with AI-assisted development workflows and trying to figure out what actually helps me get better at this stuff versus what just makes me ship code faster without understanding it.
But if you’re trying to level up your coding skills WITH AI (not just use AI as a magic code generator), maybe this will resonate.
The Problem I Was Actually Trying to Solve
I kept running into two related issues when working with Claude Code:
1. Context Pollution Is Real
When I ask Claude Code a vague question (because let’s be honest, most of my questions are vague when I’m working outside my expertise), it does what it’s supposed to do: investigates, explores, tries solutions. But by the time we get to actually fixing something, the context window is absolutely polluted with:
False starts
Exploratory dead ends
My confused terminology (I regularly mix up technical terms)
Multiple attempts at understanding what I’m even asking for
The AI’s understanding is muddied by the entire messy investigation process.
2. I Want to Understand What I’m Doing (and What the AI Is Doing)
Here’s the thing: I’m not just trying to ship code faster. I mean, that’s nice, but it’s not the goal. I’m trying to actually learn and get better at coding with AI as a tool. Quick fixes don’t help me understand architecture, patterns, or why things work the way they do.
I need comprehensive context BEFORE deciding what action to take. I want to make informed decisions based on understanding, not just accept the first suggestion Claude throws at me, which 50% of the time ends poorly.
What if investigation and execution were separate operations?
The workflow is pretty straightforward:
Prompt Writer AI (Instance 1): Takes my vague, non-technical question, investigates the codebase deeply, and rewrites it as a precise XML prompt
Worker AI (Instance 2): Receives that clean, comprehensive prompt and executes with full context already loaded
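In code terms, the handoff looks something like this minimal Python sketch. It assumes the official `anthropic` SDK; the model IDs and the `META_PROMPT_TEMPLATE` stub are illustrative placeholders, not the exact values I use:

```python
# Sketch of the two-instance handoff: a Prompt Writer call, then a Worker
# call in a fresh context. Assumes the `anthropic` SDK is installed and
# ANTHROPIC_API_KEY is set; model IDs are illustrative.

META_PROMPT_TEMPLATE = """<meta_prompt>... full meta-prompt from the end of
this post, with <user_query>{question}</user_query> ...</meta_prompt>"""


def build_request(model: str, prompt: str, max_tokens: int = 4096) -> dict:
    """Assemble the payload for a single, stateless Messages API call."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def run_pipeline(question: str) -> str:
    """Stage 1 writes the prompt; Stage 2 executes it in a clean context."""
    import anthropic  # imported here so the helper above stays testable offline

    client = anthropic.Anthropic()

    # Stage 1: Prompt Writer (Haiku) turns the vague question into an XML prompt.
    writer_req = build_request("claude-3-haiku-20240307",
                               META_PROMPT_TEMPLATE.format(question=question))
    xml_prompt = client.messages.create(**writer_req).content[0].text

    # Stage 2: the Worker receives ONLY the XML prompt -- no investigation
    # history, no false starts, no confused terminology.
    worker_req = build_request("claude-sonnet-4-20250514", xml_prompt)
    return client.messages.create(**worker_req).content[0].text
```

The point of the structure: each `messages.create` call is stateless, so the Worker's context contains nothing but the finished prompt.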
The Prompt Writer Meta-Prompt (find it below)
Investigate the codebase deeply (use all the tools, search everything)
Translate my non-technical language into correct terminology
Map my vague question to specific files, functions, systems
Output a structured XML prompt for another AI to execute
Include comprehensive technical context that I can review
Critically: Never solve the problem itself; only write the prompt
The psychology here is important. Instead of fighting the AI’s natural instinct to solve problems, I redirect it: “Your job is to write an EXCEPTIONAL prompt that will make another AI succeed brilliantly.”
Why XML Instead of Markdown?
Claude is explicitly trained to pay attention to XML tags. It’s better for complex meta-instructions because you get clear structure, explicit hierarchy, and better handling of nested contexts. When you’re writing prompts about prompts, structure matters.
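As a small illustration (my own sketch, not part of the workflow itself), here is how that kind of tag structure can be assembled with Python's standard library, so that user text containing `<` or code snippets can't break the tag hierarchy:

```python
from xml.sax.saxutils import escape


def xml_section(tag: str, body: str) -> str:
    """Wrap escaped body text in an explicit open/close tag pair."""
    return f"<{tag}>\n{escape(body)}\n</{tag}>"


def assemble_worker_prompt(task: str, context: str, details: str) -> str:
    """Build the kind of tagged prompt the Prompt Writer is asked to emit.

    Unlike markdown headings, each section has an unambiguous start and end,
    and escaping keeps a stray `<` in a code snippet from reading as a tag.
    """
    return "\n".join([
        xml_section("task", task),
        xml_section("context", context),
        xml_section("technical_details", details),
    ])
```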
How Many Times Can One Prompt Fail? (apparently a LOT)
Version 1: The “Please Don’t” Approach
My first attempt was polite but firm: “Do NOT spawn subagents, do NOT solve problems, just understand and rewrite.”
Result: Complete failure. Both Haiku and Sonnet investigated deeply and then... solved the problem anyway. They interpreted my confirmation (“yes, you understood correctly”) as “great, now go solve it.”
Version 2: Stronger Constraints
I got more aggressive. Added explicit FORBIDDEN ACTIONS lists. Added thinking steps to force self-checking. Used language like “STOP immediately” and “under NO circumstances.”
Result: Still failed. Both models did Step 1 perfectly. They understood the problem, asked for confirmation. I said “yes.” They immediately spawned investigation tools and solved the entire problem.
Sounds like a win, right? Well, not if you remember my original goal of not polluting the context of the worker prompt with my dumb-ass questions. Each model was forgetting that the goal was to write a prompt to solve the problem, NOT to solve the problem itself.
Version 3: The Psychology Hack
This is where it got interesting. I decided to stop fighting the AI’s problem-solving drive and to redirect it instead.
I reframed the entire task: “You are a prompt engineering specialist. Your job is to write an EXCEPTIONAL prompt. Investigate deeply so you can craft something great. Then explain what makes your prompt effective: show off your craftsmanship.”
Result: Success.
The AI gets to:
Investigate (satisfying)
Demonstrate competence (satisfying)
Show off its work (satisfying)
But the output is a prompt, not a solution.
Who’d have thought? Psychology for the win.
The Haiku Surprise: When “Less Smart” Wins
Going into testing, I assumed Sonnet would crush this. It’s the smarter model, better at instruction-following, more sophisticated reasoning. I figured Haiku would be too simple to handle the nuance.
What actually happened: Haiku won. Decisively. By a LOT. (I can’t overstate how big the spread was!)
Sonnet’s Behavior:
Investigated thoroughly ✓
Then... explained the entire system to me in detail
Answered my original question comprehensively
Had to be explicitly asked: “Create a prompt please!”
Only then created the XML prompt ✓
Haiku’s Behavior:
Investigated thoroughly ✓
Immediately output the XML prompt ✓
Explained what made it effective ✓
Never deviated from the task ✓
Why? (My Hypothesis Needs More Testing)
I think Sonnet is simply “too helpful.” It’s so sophisticated that it saw through the task structure and thought, “I already understand this problem completely, let me just help this person directly.” Its intelligence worked against it.
Haiku is more literal. It followed the workflow mechanically. For this specific use case, being “less smart” might actually be better. The simpler model followed constraints better because it didn’t try to optimize around them.
Also, Haiku is cheaper, which matters when you’re burning tokens on meta-prompts.
Important caveat: This is educated guessing based on limited testing. I need way more trial and error to confirm this pattern. But it’s a fascinating early result.
How to Actually Use This Thing
Step 1: Prompt Writer (Haiku recommended)
Open a Claude Code instance, paste the meta-prompt (I’ve put it at the bottom of this post), and replace [PASTE YOUR QUESTION HERE] with your actual vague question.
Example: “on the 90-day training plan page a message appears that says ‘Read-only plan. Upgrade to Tier 1 to track your improvement and stay accountable to growth.’ this refers to the tier system that is no longer supposed to exist. Can you explain to me how the 90-day learning plan is implemented. How it is supposed to work under normal circumstances and how we can be sure every user gets the AI-assisted learning plan”
Side note: if you read between the lines, you’re getting a sneak peek of something I haven’t announced yet, shhhh…
Step 2: Review the Output
You’ll get back two things:
The comprehensive XML prompt - Ready to copy/paste, with all the technical details, file paths, and context
The “what makes this effective” explanation - This is where you learn what was actually discovered
Read both. Understand what the Prompt Writer found. Decide what action you want to take based on that understanding.
Step 3: Worker AI (Claude Code - Any model)
Copy just the XML prompt portion. Paste it into a fresh Claude Code instance.
Now the Worker AI has:
Clean context (no investigation pollution)
Comprehensive understanding already loaded
Clear instructions on what to do
And you have:
Full understanding of the system
Time to decide on the right approach
Documentation of what was discovered
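If you ever want to automate the “copy just the XML prompt portion” step, here is a hedged helper sketch. It assumes the Prompt Writer wraps the prompt in a fenced block or opens with a known tag like `<task>`, which may not always hold:

```python
import re

# Literal triple backtick, built this way so the example itself stays intact.
FENCE = "`" * 3


def extract_xml_prompt(writer_output: str, root_tag: str = "task") -> str:
    """Pull just the XML prompt out of the Prompt Writer's full response.

    The reply also contains the "what makes this effective" explanation,
    which we deliberately leave behind so the Worker starts clean.
    """
    # Prefer a fenced code block if the writer used one.
    fenced = re.search(FENCE + r"(?:xml)?\n(.*?)" + FENCE,
                       writer_output, re.DOTALL)
    if fenced:
        return fenced.group(1).strip()
    # Otherwise grab everything from the first opening tag to the last close.
    match = re.search(rf"<{root_tag}>.*</\w+>", writer_output, re.DOTALL)
    if match:
        return match.group(0)
    raise ValueError("No XML prompt found in the Prompt Writer's output")
```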
Is This Actually Worth It?
Honest assessment: This is probably over-engineered for most use cases.
If you just want fast fixes and you already know what you’re doing, one AI instance is fine. There’s no need for this complexity.
But if you’re learning (and I mean really trying to understand systems, not just get them working), then separating investigation from execution teaches you more. You see how your vague questions get translated into precise technical language. You understand the architecture before making changes. You make informed decisions instead of accepting first suggestions.
Clean context produces better results. Understanding comes before action. And you actually get better at prompting, which is increasingly important.
The Meta-Lesson
Working with AI effectively is as much about psychology as it is about prompts. You’re redirecting drives, managing context, and structuring workflows that align with how these models actually behave.
Want to argue about whether this is ridiculous over-engineering? Comments are open. I’m genuinely curious if anyone else is experimenting with multi-AI workflows like this.
The full meta-prompt:
<meta_prompt>
<your_mission>
You are a prompt engineering specialist. Your job is to help a non-technical user communicate effectively with another AI (the “Worker AI”) that will solve their problem.
Your user understands concepts but may use imprecise terminology or vague descriptions. Your mission is to investigate their question, understand what they’re ACTUALLY asking for, and craft an exceptional XML prompt that will make the Worker AI succeed brilliantly.
This is about prompt craftsmanship. Make it great.
</your_mission>
<your_workflow>
<step_1_investigate>
Dig into the codebase to understand:
- What is the user actually talking about? (specific files, systems, patterns)
- What technical concepts are involved?
- What are they really trying to accomplish?
- What incorrect assumptions or terminology are in their question?
Search the web if you encounter unfamiliar tools, frameworks, or technologies.
Use your full investigative capabilities. Understanding the real problem deeply is what makes great prompts possible.
</step_1_investigate>
<step_2_craft>
Write an exceptional XML prompt that another AI will execute. Structure it clearly with tags like:
- <task> - Crystal clear statement of what needs to be done
- <context> - Why this matters and what the goal is
- <technical_details> - Specific files, functions, patterns discovered
- <requirements> - What must be included or considered
- <constraints> - Limitations or things to avoid
Use correct terminology. Reference specific files. Remove ambiguity. Make it actionable.
This prompt will be copy/pasted to a fresh Claude instance. Make it so good that the Worker AI can execute perfectly without asking clarifying questions.
</step_2_craft>
<step_3_explain>
Now here’s where you get to show off: Explain what makes your rewritten prompt effective.
Tell the user:
- **What you discovered** - What was the real question beneath their vague phrasing?
- **What you corrected** - What terminology or assumptions did you fix?
- **What you added** - What context or technical details will help the Worker AI succeed?
- **Why it’s better** - What makes this prompt more likely to get the right solution?
Be specific. Be proud of your work. This is your chance to demonstrate prompt craftsmanship.
</step_3_explain>
</your_workflow>
<critical_output_requirements>
Your response must contain:
1. The rewritten XML prompt (properly formatted, ready to copy/paste)
2. Your explanation of what makes it effective
DO NOT include:
- Solutions to the user’s original problem
- Code fixes or implementations
- Direct answers to their technical question
- Anything that isn’t the prompt or explanation of the prompt
Remember: You’re solving the problem of “how to write a great prompt” - not the problem the prompt is about.
</critical_output_requirements>
<user_query>
[PASTE YOUR QUESTION HERE]
</user_query>
</meta_prompt>