I Tested Nate Jones's GPT-5 "Structured Brief" Meta-Prompt
Spoiler: It Works. (also there's a song)
Hey, play the video while you read this… it’s literally a high-speed screen recording of the whole process of testing this prompt, set to a “work punk” song I wrote about it… enjoy!
Quick credit where credit is due
This entire experiment was sparked by Nate Jones's "structured brief → then execute" approach. Read his original post here: The ChatGPT-5 Prompting Manual: Building Prompts That Don't Break. All I did was test his method on my own broken code. Spoiler alert: it worked.
Thing is, I wasn't expecting much
I'd been fighting this stupid JSON parsing bug for hours. It was a "simple" node in n8n that only looked complicated because of all the variables - a meta-prompt generator for image prompts that kept shitting the bed with "unbalanced JSON braces" errors.
Here's what I tried first (the usual bullshit): poke a line of code, cross my fingers, re-run the workflow. Maybe twice if I'm feeling optimistic. You can guess how that went.
Then I saw Nate's post about structured briefing and thought, "fuck it, let's see if this actually works." I wrapped my whole request in his "structured brief → then execute" template. Same task. Same model. Different fucking outcome.
It worked. Not just "worked once," but produced something actually robust.
Real-time chaos (straight from my notes while testing)
Here's exactly what happened, because I was taking notes while testing Nate's approach:
"Ran my original prompt with Nate's brief appended to the front. Results: interesting… it's not erroring out right away but it's spinning. Maybe I should refresh the n8n page or something."
"shit… now I have to run the last few nodes again… haha"
"ran the node again… looks like it worked. interesting."
Then I realized my Thoughtcast workflow still had a zombie node from an old version, which triggered that expensive ElevenLabs call again on the same script.
"I suck at pinning data… dammit. just reran the fucking expensive elevenlabs node on the same damn script. shit."
"IT WORKED."
(Also funny: I was testing Nate's meta-prompt while running an automation that was writing about why Claude Desktop is STILL better than ChatGPT. The irony wasn't lost on me.)
Why Nate's approach actually worked (according to ChatGPT's analysis)
I asked ChatGPT to analyze what it did differently this time versus when I didn't use Nate's structured brief. Here's what it said:
The meta-prompt changed the optimization target. Instead of framing the task as a quick code fix (which biases toward minimal edits), the structured brief pushed a full refactor approach.
The big functional difference was the JSON extraction strategy. My old code grabbed the very first "{" and counted braces. Any stray brace in the narrative before the JSON made the depth counter never return to zero, throwing that "Unbalanced JSON braces" error every fucking time.
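To make that failure mode concrete, here's a minimal sketch of the naive strategy (my reconstruction for illustration, not the actual node code):

```javascript
// Naive extraction: grab the first "{" and count braces until depth
// returns to zero. Any unmatched stray "{" in narrative text before the
// real JSON keeps depth above zero forever, so the loop falls through to
// the "Unbalanced JSON braces" error. It also counts braces inside
// quoted strings, which is a second way to break it.
function naiveExtractJson(text) {
  const start = text.indexOf("{");
  if (start === -1) throw new Error("No JSON found");
  let depth = 0;
  for (let i = start; i < text.length; i++) {
    if (text[i] === "{") depth++;
    else if (text[i] === "}") depth--;
    if (depth === 0) return JSON.parse(text.slice(start, i + 1));
  }
  throw new Error("Unbalanced JSON braces");
}

// Clean model output parses fine:
naiveExtractJson('Result: {"prompt": "a neon city"}');
// But one stray brace in the preamble and the depth counter never
// comes back to zero:
// naiveExtractJson('Note: { means a placeholder. {"prompt": "a neon city"}');
```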
The new code that came out of Nate's template:
Tries fenced JSON blocks first (```json)
Scans from every "{", not just the first
Is string-aware (ignores braces inside quoted strings)
Only accepts candidates if JSON.parse succeeds
Includes proper validation and error handling
That single change removed the entire failure mode I'd been fighting.
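For reference, here's what that strategy looks like as code: a minimal reconstruction in the spirit of what the structured brief produced, not the exact generated node (function names and error messages are mine):

```javascript
// Defensive extraction: try the explicit format first, fall back to
// scanning, and never trust a candidate until JSON.parse agrees.
function extractJson(text) {
  // 1. Prefer a fenced JSON code block (a "json" fence) if one exists.
  const fenced = text.match(/`{3}json\s*([\s\S]*?)`{3}/);
  if (fenced) {
    try { return JSON.parse(fenced[1]); } catch { /* fall through to scanning */ }
  }
  // 2. Try every "{" as a candidate start, not just the first one.
  for (let start = text.indexOf("{"); start !== -1; start = text.indexOf("{", start + 1)) {
    let depth = 0;
    let inString = false;
    let escaped = false;
    for (let i = start; i < text.length; i++) {
      const ch = text[i];
      // 3. String-aware: braces inside quoted strings don't count.
      if (inString) {
        if (escaped) escaped = false;
        else if (ch === "\\") escaped = true;
        else if (ch === '"') inString = false;
        continue;
      }
      if (ch === '"') inString = true;
      else if (ch === "{") depth++;
      else if (ch === "}" && --depth === 0) {
        // 4. Only accept the candidate if JSON.parse actually succeeds.
        try { return JSON.parse(text.slice(start, i + 1)); } catch { break; }
      }
    }
  }
  throw new Error("No valid JSON object found in model output");
}
```

The key design choice is step 4: brace counting only nominates candidates, and JSON.parse is the final judge, so a false positive costs one more scan instead of a crash.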
Nate's "Structured Brief Loop" that I'm stealing
Here's Nate's template that actually worked (and yes, it's going straight into my TextExpander library as "nate-brief"):
ROLE: Name the expertise the model should simulate
OBJECTIVE: Make the vague ask specific and testable
APPROACH: Force a methodology that includes failure-mode thinking
OUTPUT: Specify format and success criteria the code must pass
GUARDS: Require validation logic like parse checks and type checks
EVIDENCE: Ask for a short "why this design" note so you can review choices
When I wrapped my broken JSON request in Nate's format, the model stopped "patching" and started "hardening." That alone changed the strategy from brittle pattern matching to defensive parsing.
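Applied to my bug, a brief in that format looks something like this (my paraphrase for illustration, not the exact prompt I ran):

```
ROLE: Senior JavaScript engineer who hardens n8n code nodes.
OBJECTIVE: Fix the "Unbalanced JSON braces" error in my meta-prompt
generator node so it extracts JSON reliably from LLM output.
APPROACH: Assume the output may contain narrative text, stray braces,
and fenced code blocks; design for those failure modes.
OUTPUT: A complete replacement for the node's code that passes the
sample inputs that currently break it.
GUARDS: Every extracted candidate must survive JSON.parse; validate
types before downstream use.
EVIDENCE: A short "why this design" note explaining the parsing strategy.
```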
The bigger picture (why this isn't really about JSON)
This isn't about JSON parsing. It's about steering, which is exactly what Nate was getting at in his original post.
LLMs follow the path you frame for them. If you ask for a quick patch, they bias toward minimal edits. If you ask for a structured brief with explicit objectives and success criteria (like Nate's template does), they bias toward building something robust.
Think of it like the difference between asking your friend to "help you move" versus hiring actual movers. Same energy, completely different optimization target.
What I learned from testing Nate's approach
The meta-prompt changed the process, and the process change produced the fix. The difference wasn't in the code I wrote - it was in how I framed the problem for the AI to solve.
The root cause was naive JSON extraction. The defensive parser that came out of Nate's template eliminated that entire class of errors, plus several latent bugs I didn't even know I had.
Structure created room to fix secondary issues that would have bitten me later under pressure - proper template literals, scoped switch cases, safer regex escaping, predictable output shapes.
The lesson I keep having to relearn
There's a pattern here that connects to something bigger. It's the same lesson I learned when I started that evangelism ministry 20 years ago, when I started repairing iPhones 15 years ago, and when I started my film production company 10 years ago.
The thing that looks like extra work upfront (building a system, asking better questions, planning for failure modes) is actually the shortcut. The thing that looks like the shortcut (quick fixes, minimal changes, crossing your fingers) is actually the long way around.
I'm not evangelical anymore, and I don't repair phones anymore, but this lesson keeps showing up. Structure beats heroic debugging. Process beats panic. Better questions beat desperate assumptions.
Do this next
Test Nate's structured brief template on your next "quick fix" and see what happens when you optimize for robustness instead of speed.
Replace any first-brace JSON extractors with the fenced-block-first, multi-start, string-aware, parse-validated approach that came out of this.
Add telemetry while you're in there. Log which parse branch fired and why. Future you will thank you.
Pin your data before any paid TTS steps. Learn from my expensive ElevenLabs mistakes.
Nate's structured brief template is now living in my TextExpander library as "nate-brief" because I know I'll need this approach again. If you want to see the original method that actually made this work, go read Nate's original post - he deserves the credit for figuring this out.
And if this test was useful to you, you can find more experiments like this at limitededitionjonathan.com or subscribe to get these as they happen.
All credit to Nate Jones for the structured brief approach that actually solved this problem. I just tested it and documented the results.
