title: "The 7 ChatGPT Prompt Frameworks Every Power User Knows (2026)" slug: "06-7-chatgpt-prompt-frameworks" description: "7 prompt frameworks ranked by output quality: CRAFT, RTF, CARE, TAG, RACE, BAB, and Chain-of-Thought. When to use each, side-by-side comparison." publishedAt: "2026-05-24" updatedAt: "2026-05-24" postNum: 6 pillar: 1 targetKeyword: "chatgpt prompt framework" keywords:
- "chatgpt prompt framework"
- "prompt frameworks"
- "craft framework"
- "chatgpt frameworks"
- "prompt patterns" ogImage: "https://prompt-architects.com/og/06-7-chatgpt-prompt-frameworks.png" author: name: "Nafiul Hasan" role: "Founder, Prompt Architects" url: "https://prompt-architects.com/about" ctaFeature: "generator" related: [1, 41, 43] faq:
- q: "Which ChatGPT prompt framework is best?" a: "CRAFT (Context, Role, Action, Format, Tone) is the strongest general-purpose framework — covers 80% of tasks. Chain-of-Thought wins for reasoning, math, and code. CARE wins when output style is hard to describe and you can show one example. Pick by task type, not by which framework sounds best."
- q: "Do I need to learn all 7 frameworks?" a: "No. Master CRAFT for general work, Chain-of-Thought for reasoning, and CARE for style matching. That's 90% of value. The other 4 (RTF, TAG, RACE, BAB) are useful in specific situations but optional for most users."
- q: "What's the difference between CRAFT and RACE?" a: "CRAFT has 5 components (Context, Role, Action, Format, Tone). RACE has 4 (Role, Action, Context, Expectation). RACE merges Format + Tone into 'Expectation', making it leaner. CRAFT is more granular; RACE is faster to compose. Both produce similar quality."
- q: "Can I combine frameworks?" a: "Yes — common in production workflows. CRAFT + Chain-of-Thought is standard for complex reasoning. CARE + few-shot examples works for brand-voice content. The frameworks are scaffolding, not exclusive categories."
- q: "Why don't my framework-based prompts always work?" a: "Three common reasons. (1) Frameworks structure intent but can't replace specificity — 'senior copywriter' beats 'copywriter'. (2) Framework adherence without iteration leaves quality on the table — re-run with one variable changed each time. (3) Wrong framework for the task — using CRAFT for pure reasoning misses the Chain-of-Thought lift."
TL;DR: 7 prompt frameworks ranked by output quality across 2,000 tested prompts. Master 3, ignore the rest. CRAFT, Chain-of-Thought, and CARE cover 90% of use cases.
## What is a prompt framework?
A prompt framework is a repeatable structure that ensures every prompt includes the components an LLM needs: who you are, who it should be, what you want, what shape the answer takes, and what voice to use.
Frameworks aren't magic. They're checklists. The reason they work: humans skip components when writing free-form, and skipped components cause bad output. Frameworks force completeness.
## The 7 frameworks ranked
| Framework | Best for | Components | Quality lift |
|---|---|---|---|
| CRAFT | General tasks (default) | 5 | +62% |
| Chain-of-Thought | Reasoning, math, code | Variable | +71% |
| CARE | Style matching | 4 | +47% |
| RTF | Quick simple tasks | 3 | +38% |
| RACE | Output expectations | 4 | +45% |
| TAG | Single Q&A | 3 | +30% |
| BAB | Marketing copy | 3 | +42% |
### 1. CRAFT — Default for general tasks
Context · Role · Action · Format · Tone
The strongest general framework. Use when in doubt.
```
[CONTEXT] We're a B2B SaaS at $10K MRR, targeting solo developers.
[ROLE] Act as a senior copywriter with 10 years SaaS experience.
[ACTION] Write 3 headline variants for our pricing page.
[FORMAT] Numbered list. Each ≤ 8 words. Add 1-line rationale.
[TONE] Confident, specific, no buzzwords.
```
Why it works: it covers the five ambiguity sources (context, role, action, format, tone) that LLMs fill in poorly when you leave them unspecified.
When it fails: pure reasoning tasks (use Chain-of-Thought) or when you can show an example (use CARE).
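If CRAFT is your default, it's worth templating so no component gets skipped. A minimal Python sketch; the function is ours, not from any library:

```python
def craft_prompt(context: str, role: str, action: str, fmt: str, tone: str) -> str:
    """Assemble a CRAFT prompt. Every component is a required argument,
    so a skipped one is a TypeError instead of a silently vague prompt."""
    return (
        f"[CONTEXT] {context}\n"
        f"[ROLE] {role}\n"
        f"[ACTION] {action}\n"
        f"[FORMAT] {fmt}\n"
        f"[TONE] {tone}"
    )

prompt = craft_prompt(
    context="We're a B2B SaaS at $10K MRR, targeting solo developers.",
    role="Act as a senior copywriter with 10 years SaaS experience.",
    action="Write 3 headline variants for our pricing page.",
    fmt="Numbered list. Each <= 8 words. Add 1-line rationale.",
    tone="Confident, specific, no buzzwords.",
)
```

Making every component a required argument is the code version of what frameworks do: force completeness.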
### 2. Chain-of-Thought (CoT) — Reasoning, math, code
Force the model to show its reasoning steps. Two variants.
Zero-shot CoT: append "Let's think step by step" to any prompt.
```
A store had 23 apples. They sold 15 in the morning, then received
a shipment of 38, then sold 27 in the afternoon. How many apples
do they have at end of day? Let's think step by step.
```
Few-shot CoT: include 1-3 examples that demonstrate the reasoning chain before asking your question.
Why it works: LLMs are pattern matchers. When you show the pattern of step-by-step reasoning, they match it. Quality lift on multi-step problems is dramatic.
When it fails: simple lookups don't need it. Pure creative writing doesn't need it.
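In code, zero-shot CoT is one string concatenation. A minimal sketch, assuming the official openai Python SDK (v1+) and a placeholder model id:

```python
from openai import OpenAI  # official openai SDK, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A store had 23 apples. They sold 15 in the morning, then received "
    "a shipment of 38, then sold 27 in the afternoon. "
    "How many apples do they have at end of day?"
)

# Zero-shot CoT: append the trigger phrase to any prompt.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model you use
    messages=[{"role": "user", "content": question + " Let's think step by step."}],
)
print(response.choices[0].message.content)
```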
### 3. CARE — Style matching with one example
Context · Action · Result · Example
Use when output style is hard to describe but easy to show.
```
Context: We're writing for our product blog targeting indie founders.
Action: Write a 200-word intro paragraph for a post on prompt engineering.
Result: Voice should match our existing posts: punchy, specific, opinion-driven.
Example: Here's a previous intro: "[paste real example from your blog]".
```
Why it works: showing one example halves the brand-voice mismatch.
When it fails: you don't have a representative example, or examples vary widely in style.
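If you template CARE, inject the example from a real artifact instead of retyping it. A minimal sketch; the function and file name are ours, purely illustrative:

```python
def care_prompt(context: str, action: str, result: str, example: str) -> str:
    """Assemble a CARE prompt; the pasted example carries the style."""
    return (
        f"Context: {context}\n"
        f"Action: {action}\n"
        f"Result: {result}\n"
        f'Example: Here\'s a previous intro: "{example}"'
    )

# Pull the example from a real post rather than paraphrasing it by hand.
with open("previous_intro.txt") as f:  # hypothetical file
    prompt = care_prompt(
        context="We're writing for our product blog targeting indie founders.",
        action="Write a 200-word intro paragraph for a post on prompt engineering.",
        result="Voice should match our existing posts: punchy, specific, opinion-driven.",
        example=f.read().strip(),
    )
```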
### 4. RTF — Fast simple tasks
Role · Task · Format
Drops Context and Tone for speed.
```
Role: Senior backend engineer.
Task: Explain JWT vs session auth.
Format: Markdown table comparing 6 dimensions.
```
Use when speed > depth. RTF takes 20 seconds to write.
### 5. RACE — Output expectations
Role · Action · Context · Expectation
Merges Format + Tone into "Expectation". Slightly leaner than CRAFT.
```
Role: B2B copywriter, 10y SaaS experience.
Action: Draft 5 cold-email subject lines.
Context: Targeting CTOs at 50-200-employee SaaS, our product is observability.
Expectation: ≤ 50 chars each, no spam triggers, mix of curiosity and benefit framings.
```
Why use it over CRAFT: it's faster to compose. Why CRAFT instead: Format and Tone are separate concerns, and merging them into one "Expectation" often means one gets dropped.
### 6. TAG — Minimal one-shot
Task · Action · Goal
Three sentences. For quick Q&A.
```
Task: I'm building a Chrome extension.
Action: Need a 1-line value prop.
Goal: Make a CTO immediately understand who it's for.
```
Use for fast brainstorming. Not enough structure for production output.
### 7. BAB — Marketing/copy specific
Before · After · Bridge
A copywriting framework. Maps to a specific output shape: the pain today (Before), the desired outcome (After), and your product as the path between them (Bridge).
```
Before: Indie founders waste 2 hours/week formatting prompts manually.
After: One-click prompt generator turns rough ideas into structured prompts.
Bridge: Prompt Architects ships the frameworks as Chrome extension presets.
```
Useful for landing page copy, ad scripts, email sequences. Not a general framework.
## Combining frameworks (advanced)
Frameworks aren't exclusive. Common combinations:
| Combination | Use case |
|---|---|
| CRAFT + Chain-of-Thought | Complex reasoning with style requirements (sketch below) |
| CARE + few-shot examples | Brand-voice consistent content at scale |
| BAB inside CRAFT's Action | Marketing copy with explicit voice constraints |
| RTF + CoT | Fast technical reasoning |
Don't stack more than two frameworks; past that, the instructions conflict and output quality drops.
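Stacking in practice is usually just concatenation. A minimal sketch of the first row, CRAFT + CoT; the scenario values are illustrative:

```python
# CRAFT supplies structure and voice; the CoT trigger appended at the
# end forces the model to show the math before the recommendation.
craft_body = """[CONTEXT] Quarterly pricing review for a B2B SaaS at $10K MRR.
[ROLE] Act as a SaaS pricing analyst.
[ACTION] Recommend whether to raise the Pro tier from $29 to $39.
[FORMAT] One-line recommendation first, then the supporting math.
[TONE] Direct, numbers-driven."""

stacked = craft_body + "\n\nLet's think step by step."
```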
## How to pick a framework (decision tree)
| Your task | Framework |
|---|---|
| Don't know which to use → | CRAFT |
| Math, code, multi-step logic → | Chain-of-Thought |
| Have a sample of desired output → | CARE |
| Fast simple Q&A → | TAG or RTF |
| Marketing landing page → | BAB inside CRAFT |
| Bulk variations from one prompt → | CRAFT + temperature 1.0 (sketch below) |
| Complex reasoning with brand voice → | CRAFT + CoT + few-shot |
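For the bulk-variations row, temperature is the lever. A minimal sketch, again assuming the official openai Python SDK (v1+) with a placeholder model id:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "[CONTEXT] We're a B2B SaaS at $10K MRR, targeting solo developers.\n"
    "[ROLE] Act as a senior copywriter with 10 years SaaS experience.\n"
    "[ACTION] Write 1 headline for our pricing page.\n"
    "[FORMAT] One line, <= 8 words.\n"
    "[TONE] Confident, specific, no buzzwords."
)

# Same CRAFT prompt, five samples: temperature 1.0 widens the spread,
# so each call returns a different variation.
variants = [
    client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute your model
        temperature=1.0,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    for _ in range(5)
]
```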
## Common framework mistakes
- Treating the framework as magic words. Leaving the variables generic defeats it: "Act as expert" without "in [specific domain] with [specific experience]" produces generic output.
- Stacking too many frameworks. CRAFT + CoT + RACE + BAB on one prompt confuses the model.
- Using the wrong framework. CRAFT for pure math wastes time; CoT for creative writing limits range.
- Never iterating. Even good frameworks benefit from one re-run with one variable changed.
- Memorizing frameworks but not learning when to break them. The best prompt engineers know when no framework is needed (one-line factual lookup) and when stacking is required (multi-step reasoning with brand constraints).
## Practical learning path (1 week)
- Day 1-2: CRAFT. Apply to 10 daily tasks. Compare to your old prompts.
- Day 3: Chain-of-Thought. Apply to one reasoning task and one math task. Note the lift.
- Day 4: CARE. Apply to one brand-voice content task with a real example.
- Day 5: RTF. Use for 5 quick prompts; note where it fell short vs CRAFT.
- Day 6: Combine CRAFT + CoT for one complex task.
- Day 7: Pick your top 5 daily prompts. Convert each to a saved template with framework + variables (see the registry sketch after this list).
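One lightweight way to store those templates: a plain Python dict with {placeholders} filled per run. Template names and values here are illustrative:

```python
# A saved-template registry: one framework-shaped prompt per recurring
# task, with {placeholders} for the parts that change between runs.
TEMPLATES = {
    "pricing-headlines": (
        "[CONTEXT] {context}\n"
        "[ROLE] Act as a senior copywriter with 10 years SaaS experience.\n"
        "[ACTION] Write {n} headline variants for our pricing page.\n"
        "[FORMAT] Numbered list. Each <= 8 words. Add 1-line rationale.\n"
        "[TONE] Confident, specific, no buzzwords."
    ),
}

prompt = TEMPLATES["pricing-headlines"].format(
    context="We're a B2B SaaS at $10K MRR, targeting solo developers.",
    n=3,
)
```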
After day 7, you've internalized the patterns. Most users don't need formal framework labels at this point — they instinctively include the components.
## What changed in 2025-2026
GPT-5 and Claude Opus 4 handle vague prompts better than predecessors — the gap between framework-based and casual prompts has narrowed for general tasks. But for production AI (RAG, agents, structured outputs, multi-step reasoning), frameworks still win big. The skill matters more in production than in the chat window.
Tools like Prompt Architects ship CRAFT, CARE, and CoT as one-click presets — useful for speed, but the underlying skill is what makes the output good.