SPECTRA: A Prompting Framework Born From Personalization
This is wild.
One of our users reached out this week with a question about their learning plan. They’d completed a module on prompt engineering and wanted to know more about the “SPECTRA framework” it referenced.
I told them I’d send over some additional resources. Then I searched for SPECTRA to grab some links.
Nothing came up. RISEN exists. CLEAR exists. CARE exists. SPECTRA? Nowhere on the internet.
Because our system created it. Specifically for them.
This Is the Point
Here’s what AI CRED actually does: it assesses your AI fluency, identifies your specific gaps, and generates a personalized learning path to close them. Not “here’s a generic curriculum, good luck” - actually personalized. The system looks at where you’re strong, where you’re weak, and builds content that targets your specific bottlenecks.
This user’s assessment revealed they were getting decent results but burning extra iterations to get there. Their domain expertise was compensating for prompts that could’ve been sharper on the first try. So when the system generated their learning module, it didn’t just say “write better prompts.” It built them a diagnostic framework with a memorable acronym, scoring criteria, and exercises designed around their actual patterns.
The framework it created is genuinely good. Good enough that we’re formalizing it as an AI CRED methodology. But the point isn’t “look at this cool thing that happened.” The point is: this is what adaptive learning looks like when you actually commit to it.
The SPECTRA Framework
Most people treat prompting like a slot machine. Type something in, hope for the best, iterate until it works. This trains bad habits. You end up relying on AI to figure out what you actually wanted instead of communicating clearly from the start.
SPECTRA is a pre-flight checklist. It forces you to answer seven questions before you write anything. By the time you’ve worked through each element, you’ve done the cognitive work that actually matters. The prompt almost writes itself.
S - Situation
What context can’t the model infer on its own?
AI models are smart, but they’re not psychic. They don’t know your industry, your company’s quirks, your audience’s sophistication level, or what happened in the meeting yesterday.
Ask yourself: What background would a smart stranger need to help me effectively?
Weak: “Write a project update email.”
Strong: “I’m a product manager at a 50-person SaaS startup. We just missed a sprint deadline because of an unexpected API deprecation. My audience is the executive team who approved extra budget for this project last month.”
P - Persona
Who should the AI “be,” and who is the output for?
This isn’t about making AI pretend to be Shakespeare. It’s about establishing the expertise lens and the target audience. A security explanation for developers looks different than one for executives.
Ask yourself: What expertise should inform this response? Who will actually use this output?
Weak: “Explain this code.”
Strong: “You’re a senior developer explaining this code to a junior developer who understands Python basics but hasn’t worked with async patterns before.”
E - Expectations
What does success look like before you ask?
If you don’t define the target, you can’t complain when AI misses it. This forces you to know what you want - which is half the battle.
Ask yourself: How will I know if this response is good enough to use?
Weak: “Make this better.”
Strong: “I need this to clearly explain our pricing change, address the top 3 customer objections we’ve heard, and end with a specific call to action. Success means a customer reads this and understands exactly what’s changing and why.”
C - Constraints
What should the AI avoid, and what limits apply?
Without constraints, AI will give you 2,000 words when you needed a tweet, or use technical jargon when you needed plain English.
Ask yourself: What’s the length/format requirement? What should definitely NOT be in the response?
Weak: “Write a bio.”
Strong: “Write a bio. Max 100 words. Don’t mention my education - focus on practical experience. Avoid buzzwords like ‘passionate’ or ‘thought leader.’ Must work for both LinkedIn and conference speaker intros.”
T - Tone
What voice, formality, or style should the output have?
The same information delivered formally vs. casually creates completely different impressions.
Ask yourself: How formal or casual should this be? What emotion should come through?
Weak: “Write a rejection email.”
Strong: “Write a rejection email. Tone should be warm but direct - we genuinely appreciated their application but the role isn’t right. Avoid corporate stiffness. Sound like a human who respects their time.”
R - References
What examples or background should inform the response?
References give AI concrete models to work from. Embedding them directly in your prompt beats assuming AI will extract meaning from attachments.
Ask yourself: Do I have examples of what “good” looks like? Is there background material the AI should consider?
Weak: “Write a product description like our other ones.”
Strong: “Write a product description. Here’s an example of our current style: [paste example]. Match this voice but make it work for our enterprise audience. Key product facts: [paste specs].”
A - Action
Is your core request unambiguous and single-purpose?
Vague or multi-part requests create vague or scattered responses. One clear action, stated directly.
Ask yourself: Could someone misinterpret what I’m asking for? Am I asking for one thing or three things pretending to be one?
Weak: “Help me with my presentation.”
Strong: “Write the opening 3 slides for a 15-minute investor pitch. Each slide needs: a headline, 3 bullet points max, and a suggested visual element.”
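For the programmers in the audience, the checklist above can be sketched as a tiny prompt-assembly function. This is purely illustrative - the names and element text are hypothetical, not part of any official SPECTRA tooling:

```python
# Minimal sketch: assemble a prompt from the seven SPECTRA elements,
# refusing to produce one until every element has an answer.
# All names and example text here are illustrative.

SPECTRA_ELEMENTS = [
    "situation", "persona", "expectations",
    "constraints", "tone", "references", "action",
]

def build_prompt(**elements: str) -> str:
    """Join the SPECTRA answers into one prompt, flagging any gaps."""
    missing = [e for e in SPECTRA_ELEMENTS if not elements.get(e)]
    if missing:
        raise ValueError(f"Unanswered SPECTRA elements: {missing}")
    return "\n\n".join(elements[e] for e in SPECTRA_ELEMENTS)

prompt = build_prompt(
    situation="I'm a product manager at a 50-person SaaS startup...",
    persona="You're a senior engineer writing for the executive team.",
    expectations="Success means the reader knows what changed and why.",
    constraints="Max 150 words. No jargon.",
    tone="Warm but direct.",
    references="Here's an example of our current style: ...",
    action="Write the project update email.",
)
```

The point of the `ValueError` is the point of the framework: you can't generate the prompt until you've done the thinking.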
Why Frameworks Beat Refiners
There’s a category of tools that “improve” your prompts - you paste in what you wrote, they make it “better.” These have their place, but they solve a different problem.
Prompt refiners can only optimize words. They can’t manufacture context, constraints, or success criteria you never provided. They’re polishing the surface of incomplete thinking.
SPECTRA addresses the thinking gap, not the writing gap.
A refiner can make “Write a blog post” into a more eloquent request. It can’t know that your audience is technical, your constraint is 800 words, your tone should match your existing content, and success means driving newsletter signups. Only you know that. SPECTRA makes you say it.
The difference between a prompt that works on the first try and one that takes four iterations usually isn’t better word choice. It’s that the first prompt contained the information the AI actually needed.
Using It As a Diagnostic
Beyond building prompts, SPECTRA works as a diagnostic tool. When a prompt fails or needs excessive iteration, score each element from 0 to 2 (0 = missing, 1 = partial, 2 = complete). Your lowest scores reveal your personal blind spots.
Most people consistently underspecify the same 2-3 elements. Once you know your patterns, you can build a personal pre-flight checklist targeting your specific gaps. That’s what the original learning module was teaching - not just “use SPECTRA” but “figure out which parts of SPECTRA you personally skip.”
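If it helps to see the diagnostic concretely, here is a small sketch of the scoring pass. The function and the example scores are hypothetical - the numbers are whatever you assign by hand after reviewing a failed prompt:

```python
# Sketch of the SPECTRA diagnostic: score each element 0 (missing),
# 1 (partial), or 2 (complete), then surface the weakest elements first.
# Function name and scores are illustrative, not official tooling.

def spectra_gaps(scores: dict[str, int], threshold: int = 2) -> list[str]:
    """Return the elements scored below `threshold`, weakest first."""
    for element, score in scores.items():
        if score not in (0, 1, 2):
            raise ValueError(f"{element}: score must be 0, 1, or 2")
    gaps = [e for e, s in scores.items() if s < threshold]
    return sorted(gaps, key=lambda e: scores[e])  # stable sort keeps ties in order

# Scoring a prompt that took four iterations to land:
gaps = spectra_gaps({
    "situation": 2, "persona": 1, "expectations": 0,
    "constraints": 0, "tone": 2, "references": 1, "action": 2,
})
# gaps -> ["expectations", "constraints", "persona", "references"]
```

Run this over a handful of your own failed prompts and the same element names tend to keep showing up at the front of the list.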
The Bigger Picture
AI CRED isn’t just an assessment. It’s a context accumulation engine. Every interaction, every assessment result, every gap identified - it feeds into increasingly personalized learning. The system doesn’t hand you a generic curriculum and wish you luck. It builds content around your actual weaknesses.
SPECTRA emerged from that process. One user needed a structured approach to prompt construction that addressed their specific pattern of underspecifying context and constraints. The system built it for them. We liked it enough to formalize it.
But here’s what made me actually stop and stare at my screen: it doesn’t stop there.
The user’s Module 2 - the next step in their learning path - references “your hard-won SPECTRA insights” and uses “SPECTRA as scaffolding” for the next skill. The system created a framework in Module 1, then built Module 2 assuming that framework is now part of the user’s mental toolkit. It’s not generating isolated lessons. It’s constructing a coherent curriculum that compounds on itself.
Module 2 teaches template engineering - turning your SPECTRA-informed prompts into reusable starting points. It uses analogies specific to the user’s background (“like building functions instead of writing the same code repeatedly”). It references their specific work contexts. It even identified that they have a teaching instinct and channeled that into a learning exercise: “well-designed templates are inherently teachable artifacts.”
This is the system thinking multiple moves ahead. Not “here’s a framework” but “here’s a framework, and here’s how we’ll build on it next week, using what we know about how you think.”
This is what we’re building toward. Not “take our course” but “let us build your course, for you, based on evidence of what you actually need.”
SPECTRA is now available to everyone. Use it if it helps.
Get your AI fluency score at aicred.ai


Building on Rick and Sam’s points: what’s amazing here - and something nobody else is doing yet - is how it connects with my belief that the majority of humans on this planet who are going to engage with AI are not (and never will be) programmers. Their thinking is not (and probably never will be) as structured as a programmer’s.
Your SPECTRA methodology is beautiful.
I’ve actually already created a gem and I’m testing it with my non-technical users. We’re only three tests in, but everybody really likes it. I’ve just watched three of my charity team do real work. One of them giggled because they didn’t think they were capable of this.
So Jonathan, waking up today, know that you’re helping a charity in Canada prompt better!
That is absolutely amazing, and my mind is alive with ideas for other applications beyond prompt crafting - strategic planning, critical thinking, and problem solving, to name a few. Well done indeed on this insight.