Context.
Stop prompting. Start engineering.
I wasted six months collecting prompts.
Saved hundreds of them. Organized them in Notion. Bookmarked every viral Twitter thread. Built a whole system around “the perfect prompt.”
And my AI outputs were still garbage.
Generic. Bland. Obviously machine-written. I’d spend 30 minutes rewriting everything ChatGPT gave me.
Then I learned something that changed everything:
The best AI users aren’t writing better prompts. They’re engineering better context.
One shift. Completely different results.
This is not about magic words or secret formulas.
This is all about understanding how AI actually thinks — and giving it what it needs to give you what you want.
By the end of this newsletter, you'll have:
✦ The science behind why your prompts keep failing
✦ My 5-part context engineering framework
✦ 7 copy-paste templates for immediate use
✦ Before/after examples showing the real difference
Let’s get into it.
Why Your Prompts Keep Failing
Here’s what most people do.
They open ChatGPT. They type something like:
“Write me a marketing email.”
They get garbage. Generic opener. Buzzword soup. Could be for any product, any company, any audience.
So they try again:
“Write me a really good marketing email that converts well.”
Still garbage. Adding adjectives doesn’t help.
Then they Google “best ChatGPT prompts” and find something like:
“You are an expert copywriter with 20 years of experience. Write me a high-converting marketing email using proven persuasion techniques and psychological triggers.”
Better. But still not good enough to actually use.
Why?
Because the AI has no idea what it’s actually writing about.
Think about it.
It doesn’t know your product. Your audience. Your voice. Your goals. What’s worked before. What’s failed. What makes your offer different.
It’s guessing.
And when AI guesses, it defaults to the statistical average of everything it’s ever seen.
The statistical average is mediocre by definition.
The Science: Why Context Beats Prompts
Let me explain what’s actually happening under the hood.
Large language models — ChatGPT, Claude, Gemini — are prediction machines. They predict the most likely next word based on everything that came before.
The key phrase: everything that came before.
Your prompt isn’t the only input. The entire conversation — everything in what’s called the “context window” — shapes the output.
Think of it like this:
Narrow context = narrow predictions = generic output
Rich context = informed predictions = specific output
When you give the AI a two-sentence prompt, it has almost nothing to work with. It fills the gaps with assumptions, and those assumptions default to the average.
When you give the AI rich, specific context about your situation, your audience, your voice, your goals — it has data points to work with. The predictions become specific to YOU.
This isn’t theory.
Research on LLM performance consistently shows that outputs improve dramatically when models receive:
Relevant background information
Examples of desired output
Clear success criteria
Specific constraints
The shift from “prompt engineering” to “context engineering” isn’t semantics.
It’s a fundamental change in how you work with AI.
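To make "everything that came before" concrete, here's a minimal sketch of how chat-style APIs assemble the context window. The message structure follows the common OpenAI-style chat format; the helper and example strings below are illustrative, not an actual API call. The point: the model conditions its prediction on the entire list, not just your latest prompt.

```python
def build_context_window(system: str, history: list[tuple[str, str]], prompt: str) -> list[dict]:
    """Assemble the full message list the model will condition on."""
    messages = [{"role": "system", "content": system}]
    for role, content in history:
        messages.append({"role": role, "content": content})
    messages.append({"role": "user", "content": prompt})
    return messages

# Narrow context: the model sees almost nothing.
narrow = build_context_window(
    "You are a helpful assistant.",
    [],
    "Write me a marketing email.",
)

# Rich context: background and prior exchanges all shape the prediction.
rich = build_context_window(
    "You are a growth marketer for Flowstate, a focus app for remote workers.",
    [("user", "Here is an email of ours that performed well: ..."),
     ("assistant", "Noted: short paragraphs, problem-first, single CTA.")],
    "Write a re-engagement email for users who stalled at calendar setup.",
)

print(len(narrow), len(rich))  # 2 vs. 4 messages in the window
```

Same model, same call signature. The only difference between the two requests is how much of the window you filled.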
The 5-Part Context Framework
Here’s the framework I use for every AI interaction that matters.
I call it the BRIEF method — because you’re not casting spells, you’re briefing a capable collaborator.
Part 1: Background
Give the AI the information it needs to understand your specific situation.
Don’t assume it knows anything.
Include:
What your product/service does
Who your audience is (be specific)
What problem you’re solving
Relevant constraints or requirements
What’s been tried before
Weak: “Write an email for my startup.”
Strong: “My company is Flowstate. We make a focus app for remote workers who struggle with distraction. Our main differentiator is that we use AI to learn when you’re most productive and protect those hours automatically. Our audience is mostly 25-40 year old knowledge workers. They’ve tried Pomodoro timers and website blockers but found them too rigid.”
The second version gives the AI something to work with. The first is a shot in the dark.
Part 2: Role
Define who the AI should “be” for this task.
Not just “you are an expert.” That’s too vague.
Specify:
What kind of expert
With what specific experience
Operating in what context
Weak: “You are a marketing expert.”
Strong: “You are a senior growth marketer who’s worked at early-stage B2B SaaS companies. You specialize in email sequences for trial users. You’ve seen what works at companies like Notion, Slack, and Linear in their early days.”
The specificity matters. Different roles activate different patterns in the model. A “startup growth marketer” writes differently than a “Fortune 500 CMO.”
The AI knows this. Use it.
Part 3: Instructions
Be explicit about what you want delivered.
Not just “write an email” but:
What type (cold outreach? onboarding? win-back?)
What length
What structure
What sections to include
What tone
Weak: “Write a marketing email.”
Strong: “Write an onboarding email for users who signed up but haven’t completed setup. 150-200 words. Structure: acknowledge they started, identify likely friction point, offer specific help, single clear CTA. Tone: helpful and casual, not salesy. No exclamation points.”
When you specify exactly what you want, the AI stops guessing and starts executing.
Part 4: Examples
This is the most underused part of context engineering.
Show the AI what good looks like. And what bad looks like.
Examples (what to do): “Here’s an email I wrote that got a 45% open rate and 12% click rate: [paste example]. Notice the specific number in the subject line, the single-sentence paragraphs, and how it leads with the user’s problem, not our features.”
Anti-examples (what to avoid): “Don’t write anything that sounds like this: ‘I hope this email finds you well’ or ‘Just checking in’ or ‘I wanted to reach out.’ I hate these openers. Also avoid: ‘Excited to share’ and ‘Game-changing solution.’”
The model learns from patterns.
Give it patterns to follow. Give it patterns to avoid.
Your anti-examples are just as important as your examples. They prevent the AI from falling into clichés.
Part 5: Format
Define what “done” looks like.
Most people skip this. They know what they want when they see it, but they don’t tell the AI.
Weak: “Make it good.”
Strong: “Success looks like: A busy founder can scan this in 10 seconds and understand exactly what to do next. The CTA is specific and low-friction (one click, not a form). No jargon. No fluff. Every sentence earns its place. If I can delete a sentence without losing meaning, delete it.”
When you define success criteria, you give the AI a target to optimize for.
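Because the five parts are always the same, you can assemble a BRIEF mechanically. The helper below is a hypothetical sketch (it's not from the newsletter); the section names and their order follow the framework above, and empty sections are simply skipped.

```python
def build_brief(background: str, role: str, instructions: str,
                examples: str, format_spec: str) -> str:
    """Combine the five BRIEF sections into one context brief."""
    sections = [
        ("Background", background),
        ("Role", role),
        ("Instructions", instructions),
        ("Examples", examples),
        ("Format", format_spec),
    ]
    # Keep only filled-in sections, labeled in BRIEF order.
    return "\n\n".join(f"{name}: {text}" for name, text in sections if text.strip())

brief = build_brief(
    background="Flowstate is a focus app for remote knowledge workers...",
    role="You're a growth marketer who has worked on user activation...",
    instructions="Write a re-engagement email, 120-150 words...",
    examples="Tone like Superhuman's onboarding emails. Avoid 'Just checking in.'",
    format_spec="Subject line + body, scannable in 5 seconds, one clear CTA.",
)
print(brief.startswith("Background:"))  # True
```

Paste the assembled brief as your first message (or system prompt) and the AI gets all five parts at once, every time.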
The Framework in Action
Let me show you the actual difference.
Without Context Engineering
Prompt: “Write a marketing email for my AI product.”
Output: “Subject: Transform Your Business with Cutting-Edge AI
Dear Valued Customer,
In today’s fast-paced digital landscape, staying ahead of the competition requires innovative solutions. That’s why we’re thrilled to introduce our revolutionary AI platform that will transform the way you work.
Our state-of-the-art technology leverages the latest advances in artificial intelligence to streamline your workflows and boost productivity. With intuitive features designed for modern professionals, you’ll wonder how you ever managed without it.
Don’t miss this opportunity to revolutionize your business.
Click here to get started!
Best regards, The Team”
This is garbage.
Generic. Could be for any product. Full of buzzwords. No specificity. Would get deleted instantly.
With Context Engineering
Full Context Brief:
“Background: My product is Flowstate, a focus app for remote knowledge workers. We help people protect their deep work time using AI that learns their productivity patterns. Users are typically 25-40, work from home, and have tried other focus tools but found them too rigid. They signed up for a free trial but haven’t completed onboarding (they stopped at the calendar integration step).
Role: You’re a growth marketer who’s worked on user activation at tools like Notion and Superhuman. You know that the calendar integration is our highest-friction step but also our highest-value — once users connect their calendar, retention doubles.
Instructions: Write a re-engagement email for trial users who dropped off at calendar integration. 120-150 words max. Structure: acknowledge they started, name the specific friction point, explain WHY it matters (the benefit), make the CTA feel easy.
Examples of tone I like: Superhuman’s onboarding emails — direct, confident, helpful, not desperate. Avoid: ‘Just checking in,’ ‘Hope you’re doing well,’ anything that sounds like a sales robot.
Format: Subject line + body. Subject should create curiosity, not describe the email. Body should be scannable in 5 seconds. One clear CTA. No images needed.”
Output:
“Subject: Your Wednesdays are wide open
Hey — noticed you started setting up Flowstate but stopped before connecting your calendar.
Totally get it. Giving an app calendar access feels like a lot.
But here’s why it matters: Flowstate uses your calendar to find your natural focus windows — and then protects them. No more meetings scheduled over your deep work time. No more context-switching during your peak hours.
Users who connect their calendar get 2.3 more hours of uninterrupted focus per week.
Takes 30 seconds: [Connect Calendar]
If you’d rather not, no stress — you can still use Flowstate manually. But the magic is in the automation.
— Team Flowstate”
Night and day.
Same AI. Same model. Completely different output.
The difference is context.
The 7 Templates
Here are templates you can adapt immediately.
Template 1: Content Writing
BACKGROUND:
- Topic: [what you're writing about]
- Audience: [who will read this, what they know, what they care about]
- Goal: [what should readers think/feel/do after reading]
- My voice: [describe your style or paste a sample]
ROLE:
[Type of writer] who writes for [specific audience/publications]
INSTRUCTIONS:
Write [specific format] that is [length]. Structure it as [outline if you have one].
EXAMPLES:
Writing I like: [paste sample or describe]
Writing I hate: [describe what to avoid]
FORMAT:
Success = [what makes this piece work]
Template 2: Email/Communication
BACKGROUND:
- Recipient: [who they are, your relationship, what they know]
- Situation: [what prompted this email]
- Goal: [what should happen after they read]
ROLE:
[Your position/relationship to recipient]
INSTRUCTIONS:
Write [type of email]. [Length]. Tone should be [describe].
EXAMPLES:
Do: [what you want]
Don't: [what you hate]
FORMAT:
Must include: [specific elements]
Success = [desired response/action]
Template 3: Strategy/Analysis
BACKGROUND:
- My situation: [describe current state]
- My goal: [describe desired outcome]
- Constraints: [budget, time, resources, other limits]
- What I've tried: [past approaches and results]
ROLE:
[Type of strategist] who works with [similar situations]
INSTRUCTIONS:
Analyze [specific question] and recommend [what you need].
EXAMPLES:
Good advice for me: [describe]
Bad advice for me: [describe what won't work]
FORMAT:
Structure as [how you want it organized].
Success = [what makes this useful]
Template 4: Research/Learning
BACKGROUND:
- I'm trying to understand: [specific topic/question]
- Current knowledge level: [what you already know]
- Why I need this: [application/context]
ROLE:
Expert in [domain] who can explain to [your level]
INSTRUCTIONS:
Explain [concept] in a way that [specific outcome].
Include [examples, analogies, exercises, etc.].
EXAMPLES:
Explanations that work for me: [describe your learning style]
Explanations that don't work: [what to avoid]
FORMAT:
[How you want it structured]
Success = [how you'll know you understand]
Template 5: Editing/Rewriting
BACKGROUND:
- Original piece: [paste]
- What's wrong with it: [specific issues]
- What's right with it: [what to preserve]
ROLE:
Editor who specializes in [type of content]
INSTRUCTIONS:
Rewrite to fix [specific issues] while keeping [what works].
Constraints: [length, tone, format requirements]
EXAMPLES:
Target voice: [describe or show example]
Avoid: [what you don't want]
FORMAT:
Show me [one version vs. multiple options]
Explain significant changes
Template 6: Brainstorming/Ideas
BACKGROUND:
- Challenge: [what you're trying to solve/create]
- Context: [relevant constraints, audience, goals]
- What's been done: [existing solutions, past attempts]
ROLE:
[Type of creative/strategist] with experience in [relevant domain]
INSTRUCTIONS:
Generate [number] ideas for [specific need].
Each idea should [criteria].
EXAMPLES:
Ideas I like: [describe what resonates]
Ideas I don't want: [what won't work]
FORMAT:
For each idea: [what to include — title, description, etc.]
Prioritize: [what matters most]
Template 7: Coding/Technical
BACKGROUND:
- Project: [what you're building]
- Tech stack: [languages, frameworks, tools]
- Current code: [paste relevant sections]
- Problem: [what's broken or what you need]
ROLE:
[Type of developer] with expertise in [specific technologies]
INSTRUCTIONS:
[Write / debug / refactor / explain] [specific code/feature].
Constraints: [performance, style, compatibility requirements]
EXAMPLES:
Code style I follow: [conventions, patterns]
Code I avoid: [anti-patterns]
FORMAT:
[Code only / code with comments / explanation then code]
Common Mistakes
Now that you have the framework, here’s what trips people up:
Mistake 1: Context dumping
More isn’t always better. Include relevant context, not every piece of information you have. If it doesn’t help the AI understand what you need, leave it out.
Mistake 2: Vague success criteria
“Make it good” isn’t a success criterion. “A busy executive can understand the key points in 30 seconds” is.
Mistake 3: No examples
Examples are the most powerful part of context engineering. Even one good example dramatically improves output. Even one anti-example prevents common failures.
Mistake 4: Skipping the role
The role isn’t fluff. Different roles activate different knowledge and styles. A “startup founder” writes differently than a “Fortune 500 executive.” The AI knows this.
Mistake 5: One-shot prompting
Context engineering isn’t just the first message. It’s the entire conversation. Build context over multiple exchanges. Refine. Iterate. The best results come from dialogue, not monologue.
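In chat-API terms, "dialogue, not monologue" just means the message list keeps growing: every refinement you send stays in the window, so each later reply is conditioned on all earlier corrections. A minimal, illustrative sketch (structure only, no real API call):

```python
# Start the conversation with the full BRIEF as the system context.
conversation = [{"role": "system", "content": "Full BRIEF context goes here."}]

def add_turn(conversation: list, user_msg: str, assistant_msg: str) -> list:
    """Record one exchange; the growing list IS the accumulated context."""
    conversation.append({"role": "user", "content": user_msg})
    conversation.append({"role": "assistant", "content": assistant_msg})
    return conversation

add_turn(conversation, "Draft the email.", "[first draft]")
add_turn(conversation, "Shorter. Cut the second paragraph.", "[revised draft]")

print(len(conversation))  # 5 messages: every refinement stays in context
```

Each follow-up costs you one sentence but inherits everything that came before, which is why iterating usually beats restarting.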
The Mindset Shift
Here’s the real change:
Old mindset: “How do I write a prompt that gets what I want?”
New mindset: “How do I give AI everything it needs to give me what I want?”
The first is about finding magic words.
The second is about clear communication.
You’re not casting spells. You’re briefing a capable collaborator who has no context about your situation.
The better your brief, the better the work.
That’s context engineering.
Start Here
Don’t try to implement everything at once.
This week, pick one task you regularly do with AI.
Before you prompt, write out:
Background: What does the AI need to know about my situation?
Role: Who should the AI “be” for this task?
Instructions: What exactly do I want delivered?
Examples: What does good look like? What should it avoid?
Format: What does success look like?
Then give it all to the AI at once.
Compare the output to what you were getting before.
You’ll never go back to basic prompting.
What’s Next
Tomorrow: How I trained AI to write exactly like me.
I’ll show you the exact process I use to capture my voice, my opinions, my writing patterns — and transfer them to any AI. So it doesn’t just write well. It writes like me.
If this framework was useful, share it with someone still typing “write me a marketing email” into ChatGPT.
They need this.
Humanly yours,
Nick
What’s the first task you’re going to apply this to? Reply and tell me — I read every response.

