How I Work With Claude Code
I sat down and wrote it out - there's a bit more to it than I thought.
I keep getting asked how I built things like the OpenClaw multi-agent system and the Catalyst dashboard. People assume it’s about writing better prompts. It’s not (I mean, mostly it’s not). It’s about what happens before you ever open the tool. This isn’t for everyone, and it evolves.
Quick note before we get started: I CANNOT STRESS ENOUGH HOW MUCH THIS PROCESS EVOLVES!!! THIS IS NOT CANON!
The Process
1. Start with a goal, not a prompt
I open a blank text file — literally Apple Notes — and write down what I want to build. No AI involved yet. This is just me thinking.
Goal: A dashboard that shows my AI agents' daily briefs with feedback input
2. List the parts you need
This is where experience matters. This is still your moat as a human. You need to know enough to decompose a goal into requirements. The AI can help you implement each piece, but the decision to have those pieces comes from you.
Goal: A dashboard that shows my AI agents' daily briefs
Requirements:
- Supabase (SQL + auth, already have a project)
- Pinecone (vector DB for semantic search, already have an index)
- Domain name (Hover)
- Hosting (Vercel)
- OpenClaw connected to the site (API routes)
I haven’t opened AI yet.
3. Describe HOW the AI should work
I think about what I know about Claude’s current capabilities and what I want to test. This is where I set expectations for the collaboration.
How Claude should work:
- Browser control for Supabase setup (it's good at this now)
- Task manager for long-running parallel work
- YOLO mode (full permissions, no approval gates)
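For reference, "YOLO mode" here maps to launching Claude Code with its permission prompts disabled. A minimal sketch, assuming the standard `claude` CLI (flag names may differ across versions; check `claude --help` on yours):

```shell
# Launch Claude Code with approval gates disabled ("YOLO mode").
# --dangerously-skip-permissions removes the per-action permission prompts,
# so only run this in a directory you're comfortable letting the agent modify.
claude --dangerously-skip-permissions
```

The flag name is deliberately scary; the whole point of the process below is that the safety comes from conversational alignment, not from the permission prompts.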
4. Add the WHY to everything
This is the step most people skip, and it’s doing more heavy lifting than anything else. The “why” gives the AI a constraint system. Without it, the AI has to guess your intent when it hits ambiguity. With it, entire categories of wrong answers get eliminated before they’re ever generated.
Goal: A dashboard that shows daily briefs
WHY: The work Slack channel is overwhelming, I need to read briefs on mobile without SSH-ing into a server
Requirements:
- Supabase
WHY: Already have a project with auth configured, don't want a second DB provider
- Pinecone
WHY: Need semantic search across historical data, already have an index with 3072-dim embeddings
- Vercel
WHY: I already have pro-tier, auto-deploys from GitHub, handles Next.js natively
“I need Supabase” is an instruction. “I need Supabase because I already have a project there with auth configured” tells the AI not to suggest Firebase, not to create a new project, and to use the existing setup.
5. The handshake
Now I open a terminal, start Claude Code in YOLO mode, paste the whole thing in, and add:
“Please tell me what you think I am asking for and wait for me to confirm or clarify before doing anything.”
Then we talk. I’m still in full bypass-permissions mode — no plan mode, no approval gates. At the end of every message I send, I append that same line:
“Please tell me what you think I am asking for and wait for me to confirm or clarify before doing anything.”
This is a calibration loop. I’m testing the AI’s model of my intent before giving it permission to act. I keep going until I’m confident it understands not just WHAT I want but WHY I want it that way.
6. “Go”
When I’m satisfied, I say “go.” That’s it. One word. Clear gate between planning and execution.
Before “go”: we talk, we align, we iterate on understanding.
After “go”: it builds.
I don’t do the “yeah that sounds right, maybe start on that and we’ll see” thing. I hold the line until confidence is high, then release fully.
The Primitives
Looking at this process, the actual primitives are:
Think before you prompt. Don’t open the tool and start noodling. Show up with a brief. The quality of your output is directly proportional to the quality of your input, and that input happens in your head before any AI is involved.
Decomposition is the human’s job. You need to know enough about what you’re building to break it into pieces. The AI implements the pieces. You decide what the pieces are. This requires real expertise — there’s no shortcut.
The WHY eliminates wrong answers. Every requirement without a “why” is an opportunity for the AI to go in a direction you didn’t want. The why narrows the solution space from thousands of possibilities to a handful.
Align before you execute. The “tell me what you think I’m asking for” loop is a handshake protocol. You’re not asking the AI to work — you’re asking it to prove it understood. Repeat until confident.
Conversational guardrails > technical guardrails. I run with maximum permissions and zero approval gates. The guardrails are in the conversation — the repeated alignment checks, the clear “go” gate. This lets the AI focus on understanding the problem instead of constantly stopping to ask “can I read this file?”
Scope aggressively. Don’t say “build me a dashboard.” Say “build me a dashboard that shows the daily brief from this specific Supabase table, with a feedback input in the right column, deployed on Vercel at this domain.” Every requirement narrows what the AI has to figure out.
Work on the system, not the symptoms. When something isn’t right, I adjust the instructions — not the output. If the daily briefs are missing nuance, I don’t manually edit the brief. I change the agent’s instructions so future briefs are better. Build the machine that builds the thing.
What Most People Get Wrong
They start with the AI. They open ChatGPT and type “build me a website.” That’s asking the AI to do the thinking AND the building. Split those jobs.
They skip the WHY. “Use React” vs “Use React because the team already knows it and we have an existing component library” — these produce fundamentally different results from the AI.
They treat AI like a search engine. They ask a question, get an answer, ask another question. That’s not collaboration. Collaboration is: here’s my goal, here are my constraints, here’s why, prove you understand, then go build.
They micromanage execution instead of managing intent. They watch every line of code being written and interrupt constantly. If you did the alignment work upfront, you can trust the execution. If you can’t trust the execution, your alignment wasn’t good enough — go back and fix that.
They confuse permissions with safety. Restricting tool access doesn’t make the AI safer. It makes it slower and more annoying. Conversational alignment is the actual safety mechanism. A well-aligned AI with full permissions will build exactly what you want. A misaligned AI with restricted permissions will build the wrong thing, just more slowly.
The Meta-Insight
The whole process is really just project management. Goal, requirements, constraints, alignment, execution. The same skills that make someone good at managing a team of humans make them good at working with AI. The tool changed. The skill didn’t.


Canon or not, this is quality stuff. It aligns with my process so I would say that but I couldn't put it this well!
Put it this way, this is getting shared in my work chat Monday morning and I'll push to have some of it adopted as best practice.
Just because it's basic, obvious, and fluid for you doesn't mean it is for everybody. I tend to work in a very similar fashion, but still learned a lot from reading this. Thanks for sharing!