Stop Staring Blankly at AI: A Guide to Common Prompt Frameworks
Frameworks are rigid, but when used right, AI’s answers come alive.
0 Preface
Using AI to write code, look things up, or do translations is nothing new at this point. But I’ve noticed a very common pattern: the way most people talk to large language models isn’t much different from typing keywords into a search engine. The vaguer the question, the more useless the answer.
“Prompt Engineering” has been talked to death, but very few people actually use structured frameworks in their day-to-day prompting. Most people are still stuck at “write me a XXX” or “explain XXX,” then shake their heads at the bland, unhelpful response.
I’ve compiled over a dozen Prompt frameworks I’ve come across. Some are genuinely useful, some are narrow in scope. I’ll walk through each one — what it is, how to use it, and when to reach for it.
1 Why You Need Frameworks
There’s a massive difference between saying “write me an email” and “You are a senior business manager. Write an apology email to a partner regarding a project delay. Keep the tone formal but not subservient, and stay within 200 words.”
The essence of a framework is to take the fuzzy idea in your head and break it down into structured information a model can actually work with. The more complete your context and the clearer your constraints, the closer the output will be to what you actually want.
Don’t overthink it — a framework is just a fill-in-the-blank template that forces you to think a few steps ahead before you ask.
2 Framework Breakdown
2.1 RTF — Role / Task / Format
The simplest framework. Three fields, great for quick everyday use. Role sets the persona, Task states the job, Format defines the output. That’s it.
Example:
Role: You are a senior frontend engineer
Task: Review the following React component for performance issues and provide optimization suggestions
Format: Output as a numbered list, each item containing the problem description and proposed fix
RTF’s strength is its low barrier to entry — it covers most everyday conversations. The downside is that it lacks context and constraints, so complex tasks can easily go off the rails.
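Because RTF is only three labeled fields, it is trivial to turn into a reusable fill-in-the-blank helper. A minimal sketch in Python (the `rtf_prompt` function and its arguments are a hypothetical illustration, not part of any framework spec):

```python
def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Assemble an RTF (Role / Task / Format) prompt.

    The framework is just three labeled fields, so the
    builder is nothing more than string formatting.
    """
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Format: {fmt}"
    )

prompt = rtf_prompt(
    role="You are a senior frontend engineer",
    task="Review the following React component for performance issues",
    fmt="Output as a numbered list of problem description and proposed fix",
)
print(prompt)
```

Wrapping the fields in a function also makes the framework's limitation visible: there is no parameter for context or constraints, which is exactly why complex tasks go off the rails.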
2.2 ROSES — Role / Objective / Scenario / Expected Solution / Steps
A more complete version of RTF, with added scenario description and expected steps. Start with the Role you want AI to play, state the Objective, describe the current Scenario, tell it what Expected Solution you’re looking for, and finally break down the Steps for it to follow.
Example:
Role: You are a DevOps engineer
Objective: Help me troubleshoot why a Kubernetes Pod in production keeps restarting
Scenario: A Node.js service gets OOMKilled every 5 minutes after deployment, memory limit is set to 512Mi
Expected Solution: Provide a troubleshooting approach and possible solutions
Steps: 1. Analyze possible root causes 2. Provide diagnostic commands 3. Offer fixes for each possible cause
ROSES works well for technical problems — especially debugging and solution design. The structure gives AI’s response much better depth and organization.
2.3 CO-STAR — Context / Objective / Style / Tone / Audience / Response
A framework from Singapore’s GovTech team, which went viral after being used to win a Prompt Engineering competition in late 2023. Six fields, each handling one thing: Context sets the background, Objective clarifies the goal, Style defines the writing style (academic, conversational, technical docs, etc.), Tone sets the register (formal vs. casual), Audience specifies who the output is for, and Response defines the output format.
Example:
Context: Our company just closed a funding round and needs to post an announcement on social media
Objective: Write a LinkedIn post
Style: Professional but not stiff
Tone: Confident and grateful
Audience: Potential investors, partners, and job seekers
Response: A 150–200 word post in English with 3 relevant hashtags
CO-STAR’s biggest advantage is its separate handling of “tone” and “audience” — incredibly useful for content creation tasks. Not much use when writing code, but excellent for copywriting, translation, and email drafting.
2.4 CRISPE — Capacity & Role / Insight / Statement / Personality / Experiment
This framework leans toward putting AI into a more “personified” state. Capacity & Role defines the scope of expertise and persona, Insight feeds it background knowledge you already have, Statement says what you want it to do, Personality tells it how to respond, and Experiment asks it to try multiple approaches.
Example:
Capacity & Role: You are a technical writer with 10 years of experience
Insight: The target audience is junior developers who may not be familiar with cloud-native concepts
Statement: Write a beginner's tutorial on Docker containerization
Personality: Patient, humorous, skilled at using analogies to explain abstract concepts
Experiment: Provide two different opening versions for me to choose from
CRISPE works well when you need AI to produce content with personality — blog posts, podcast scripts, product copy. But if you just want it to run a SQL query, don’t bother.
2.5 SCOPE — Situation / Complication / Objective / Plan / Evaluation
Inspired by McKinsey’s SCQA (Situation-Complication-Question-Answer) narrative structure, with an execution plan and evaluation criteria added. The flow: describe the Situation, identify the Complication, define the Objective, ask for a Plan, and set Evaluation criteria to judge whether the solution works.
Example:
Situation: Our team uses a Monorepo to manage 10+ microservices
Complication: CI/CD build times have exceeded 30 minutes, making every PR a waiting game
Objective: Reduce build time to under 10 minutes
Plan: Analyze current pipeline bottlenecks and propose optimization strategies
Evaluation: Solutions must account for implementation cost, maintenance complexity, and impact on existing workflows
SCOPE is particularly well-suited for consulting and decision-making questions, since it comes with a built-in “problem → solution → evaluation” loop.
2.6 SAGE — Situation / Action / Goal / Expectation
A simplified version of SCOPE, dropping Complication and Evaluation — better for quick scenarios. Four fields: Situation for background, Action for what to do, Goal for the target, Expectation for the desired output.
Example:
Situation: I have an Express.js backend project with no logging system in place
Action: Help me integrate the Winston logging library
Goal: Implement tiered log output with support for both file and console output
Expectation: Provide complete configuration code and usage examples, compatible with TypeScript
SAGE hits a sweet spot — not so simple that response quality suffers, not so complex that writing the prompt becomes a chore.
2.7 BROKE — Background / Role / Objective / Key Results / Evolve
A framework with OKR thinking baked in. The Key Results field is the standout — it forces you to define “what does done look like” before you even ask. Flow: Background sets context, Role assigns a persona, Objective defines the goal, Key Results quantifies success metrics, and Evolve leaves room for iteration.
Example:
Background: Our SaaS product's registration conversion rate is only 2%, while the industry average is 5%
Role: You are a growth hacking expert
Objective: Improve the conversion rate on the registration page
Key Results: The solution must raise conversion to above 4% without increasing the ad budget
Evolve: If the first round of solutions shows limited results, suggest an A/B testing iteration direction
2.8 RISEN — Role / Instructions / Steps / End Goal / Narrowing
A framework that emphasizes step decomposition and scope restriction. Role sets the persona, Instructions give detailed directives, Steps break down the process, End Goal clarifies the final outcome, and Narrowing draws the red lines — what’s in scope and what isn’t.
Example:
Role: You are a database optimization expert
Instructions: Analyze the following slow SQL query and provide an optimization plan
Steps: 1. Explain why this SQL is slow 2. Provide the optimized SQL 3. Recommend indexes to add
End Goal: Reduce query time from 5 seconds to under 500 milliseconds
Narrowing: Cannot modify the table schema — only add indexes and optimize the SQL
The Narrowing field is the highlight. AI solutions are often ideal but impractical. Adding constraints makes the answers far more actionable.
2.9 APE — Action / Purpose / Expectation
The ultra-minimalist trio — even more direct than RTF. Action states what to do, Purpose explains why, Expectation defines the desired result. No role needed.
Example:
Action: Migrate the following Python 2 code to Python 3
Purpose: The project needs to upgrade its runtime environment; Python 2 is end-of-life
Expectation: Keep functionality unchanged, annotate every modification, and explain the reason for each change
APE is ideal for tasks with clear objectives that don’t need much role-playing — code conversion, format adjustments, text rewriting, and similar tasks.
2.10 RACE — Role / Action / Context / Example
A framework with examples built in — the structured version of few-shot learning. Role sets the persona, Action states the task, Context provides background, and Example attaches a reference — that last part is what sets it apart from everything else.
Example:
Role: You are an API documentation specialist
Action: Write documentation for the following REST API endpoints
Context: This is a CRUD interface for a user management module, consumed by the frontend team
Example: Follow the format of this existing documentation:
## GET /api/users
**Description**: Retrieve a list of users
**Parameters**: page (int), size (int)
**Returns**: { data: User[], total: number }
Giving AI one example is worth more than ten lines of description. RACE’s core philosophy: show, don’t tell.
2.11 ICIO — Instruction / Context / Input / Output
An engineering-oriented framework whose structure resembles a function call. Instruction gives the directive, Context adds background, Input feeds the data, Output defines the return format. Anyone who’s written an API will find this structure immediately familiar.
Example:
Instruction: Extract all user email addresses from the following JSON data
Context: This is user data returned from a third-party API; the structure may be inconsistent
Input: [{"name": "Alice", "contact": {"email": "[email protected]"}}, {"name": "Bob", "email": "[email protected]"}]
Output: Return a plain array of email addresses, e.g. ["[email protected]", "[email protected]"]
ICIO is especially well-suited for data processing tasks. With clearly defined inputs and outputs, AI rarely goes off track.
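As a sanity check on the function-call analogy, the example task above is small enough to implement directly. The sketch below shows what a correct answer to that Instruction would compute; the `extract_emails` helper and its handling of the two record shapes are assumptions based on the sample Input:

```python
import json

def extract_emails(users_json: str) -> list[str]:
    """Pull email addresses out of user records whose structure
    may be inconsistent: the address can sit at the top level
    ("email") or be nested under "contact"."""
    emails = []
    for user in json.loads(users_json):
        email = user.get("email") or user.get("contact", {}).get("email")
        if email:
            emails.append(email)
    return emails

data = (
    '[{"name": "Alice", "contact": {"email": "[email protected]"}},'
    ' {"name": "Bob", "email": "[email protected]"}]'
)
print(extract_emails(data))  # ['[email protected]', '[email protected]']
```

Note how the Output field in the prompt plays the same role as the function's return annotation: once both ends are pinned down, there is little room for the model to wander.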
2.12 CREATE — Character / Request / Examples / Adjustments / Type / Extras
One of the most field-heavy frameworks, suited for scenarios requiring fine-grained control. Character sets the persona, Request states the need, Examples provides references, Adjustments flags areas needing special attention, Type defines the output type, and Extras adds any supplementary information.
Honestly, this framework has too many fields for everyday use — it can feel heavy. But if you’re building a Prompt template that gets called repeatedly (like an automated content production pipeline), filling in all these fields will significantly improve output consistency.
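For a pipeline that calls the same Prompt repeatedly, the field count stops being a burden once it is frozen into a template. A minimal sketch using Python's standard `string.Template` (only the six CREATE field names come from the framework; the field values are hypothetical placeholders):

```python
from string import Template

# One CREATE template, defined once and filled in per pipeline run.
CREATE_TEMPLATE = Template(
    "Character: $character\n"
    "Request: $request\n"
    "Examples: $examples\n"
    "Adjustments: $adjustments\n"
    "Type: $type\n"
    "Extras: $extras"
)

prompt = CREATE_TEMPLATE.substitute(
    character="You are a product copywriter",
    request="Write a launch blurb for the feature described in Extras",
    examples="Match the voice of our previous launch posts",
    adjustments="Avoid superlatives; no emoji",
    type="A single paragraph under 80 words",
    extras="Feature: offline mode for the mobile app",
)
print(prompt)
```

Because `substitute` raises `KeyError` on any missing field, the template doubles as a completeness check: a pipeline run cannot silently skip one of the six fields.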
2.13 CARE — Context / Action / Result / Example
Very similar to RACE, but swaps Role for Result — making it more outcome-oriented.
Example:
Context: We're building an e-commerce app targeting international markets
Action: Translate the following Chinese product description into English
Result: The translation should be natural and compelling, suitable for an Amazon product listing
Example:
Chinese: 轻便折叠伞,一键开合,晴雨两用
English: Ultra-light foldable umbrella with one-touch open/close, perfect for sun and rain
3 How to Choose
After all that, you might be thinking: which one should I actually use?
Honestly, don’t stress over it. The differences between these frameworks aren’t that significant — the core logic is the same: provide enough context, state your goal clearly, define the format.
If you want a quick recommendation:
- For quick everyday conversations, RTF or APE is enough — don’t overcomplicate simple things
- For debugging technical issues, use ROSES or RISEN — the clearer the steps, the better the answer
- For content creation like copywriting or translation, CO-STAR or CRISPE gives the best control over tone
- For planning and decision-making, SCOPE or BROKE comes with a built-in evaluation loop
- For data processing or code generation with clear inputs/outputs, ICIO is the most precise fit
- If you have a reference example on hand, RACE or CARE lets AI learn directly from it
- When in doubt, SAGE is the all-purpose fallback that works in almost any situation
4 Practical Takeaways
Frameworks are tools, not dogma. After using them for a few months, here are a few real-world lessons:
- You don’t need to strictly follow a framework every time. Often you just want to ask a quick question — just ask. Frameworks are a checklist for when AI’s response falls short, helping you reflect: “Did I leave something out?”
- Role actually matters. Even just adding “You are a senior XX engineer” noticeably improves the depth and professionalism of the response. This isn’t magic — Role helps the model locate which subset of knowledge to draw from.
- Examples beat descriptions. If you spend three lines describing the output format, just give an example instead. Few-shot consistently outperforms zero-shot on most models.
- Constraints are critical. “No more than 500 words,” “use only Python’s standard library,” “don’t modify the existing database schema” — these constraints filter out a huge number of impractical suggestions.
- Iteration is more realistic than getting it right the first time. An imperfect first Prompt is normal. Look at where AI’s response went wrong, add context or adjust constraints, and after two or three rounds you’ll usually get what you want.
5 Final Thoughts
Prompt frameworks aren’t some deep, arcane knowledge — they’re just “the art of asking good questions” with a fancier name. But they genuinely help you escape the frustration of “AI seems kind of dumb” — most of the time, it’s not that AI can’t do it, it’s that we’re not asking well.
Pick one or two frameworks that feel natural, use them consciously in your daily work, and you’ll find your collaboration with AI jumps up a level.
A good Prompt isn’t written for the AI — it’s written for yourself. It forces you to figure out what you actually want before you ask. Like a competent product manager writing a spec, you should know exactly what you’re aiming for with every prompt you send.