
Prompt engineering frameworks — when to use CIDI, ROSES, and others

A practical look at the named prompt-engineering frameworks, when each one helps, and when they're overkill.

by PromptCount Team

Prompt engineering has acquired an alphabet soup of frameworks: CIDI, ROSES, CRISPE, RTF, RISEN, and a dozen more. Each one promises to be the magic formula. Most are repackaged versions of the same handful of ideas.

This is a working person's guide to which frameworks actually save you time and which ones just give you another mnemonic to forget.

What's in a framework

Every prompt framework is trying to do the same thing: ensure your prompt covers the essential dimensions of a well-formed instruction. Those dimensions are:

  • What to do (task)
  • For whom (audience)
  • In what shape (format)
  • With what constraints (limits, tone, style)
  • With what context (background)
  • Following what example (one-shot or few-shot)

Different frameworks emphasize different subsets. None of them is wrong. They're checklists with marketing.
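The shared idea behind every acronym can be sketched as slot-filling: a prompt is whichever of those dimensions you bothered to fill in, assembled in a sensible order. Here is a minimal Python sketch; the slot names and the `build_prompt` helper are illustrative, not part of any named framework:

```python
# A minimal sketch of the shared idea: a prompt is a subset of named slots.
# The slot names and build_prompt helper are illustrative, not from any framework.
SLOTS = ("context", "task", "audience", "format", "constraints", "example")

def build_prompt(**filled: str) -> str:
    """Assemble a prompt from whichever dimensions are filled, in a fixed order."""
    unknown = set(filled) - set(SLOTS)
    if unknown:
        raise ValueError(f"unknown slot(s): {sorted(unknown)}")
    return "\n".join(filled[name] for name in SLOTS if name in filled)

print(build_prompt(
    context="We are launching an AI prompt counter that runs in the browser.",
    task="Draft a tweet announcing the launch.",
    constraints="Under 280 characters, no hashtags, end with a concrete benefit.",
))
```

Every framework below is this function with different slot names and a different default subset filled in.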

The frameworks worth knowing

CIDI — Context · Instruction · Details · Input

The most common starter framework, taught in nearly every prompt-engineering tutorial.

Context: I'm writing a launch announcement for a new product.
Instruction: Draft a tweet.
Details: Under 280 characters, no hashtags, ends with a concrete benefit.
Input: The product is an AI prompt counter that estimates tokens in your browser.

When it helps: When you're starting from scratch and not sure what to include. The four slots force coverage of the basics.

When it's overkill: When the task is obvious. "Translate this email to Spanish" doesn't need a CIDI breakdown.

ROSES — Role · Objective · Scenario · Expected Solution · Steps

Heavier than CIDI. Adds a role assignment and asks you to spell out the steps.

Role: Senior backend engineer.
Objective: Diagnose why a cache layer is returning stale data.
Scenario: React Query v5, 5-minute staleTime, mutations sometimes don't trigger refetch.
Expected solution: A list of three to five concrete things to check, in order of likelihood.
Steps: Walk through each check with a one-line explanation.

When it helps: Reasoning-heavy tasks. Debugging, analysis, decision support. The "steps" slot forces the model to externalize its thinking, which improves accuracy.

When it's overkill: Most creative or copy tasks. Adding "steps" to a tweet-writing prompt slows the model down without improving the tweet.

CRISPE — Capacity · Insight · Statement · Personality · Experiment

Marketing has adopted this one. The "personality" slot encourages voice work.

Capacity: You are a marketing strategist.
Insight: Today's audiences are skeptical of AI hype.
Statement: Write three landing-page headline options for an AI tool.
Personality: Plainspoken, slightly skeptical, no hype words.
Experiment: Vary the angle — one benefit-led, one objection-handling, one curiosity-led.

When it helps: Voice-sensitive tasks. The "personality" + "experiment" slots are the unique additions worth keeping.

When it's overkill: Anything technical. The roleplay framing adds noise.

RTF — Role · Task · Format

The minimalist framework. Three words, three slots.

Act as a copywriter. Write three subject lines for a webinar invite. Each under 50 characters.

When it helps: When you want a framework that fits in one sentence. Most simple tasks fit RTF without strain.

When it's overkill: Almost never. RTF errs on the side of being too small, which is the better error to make.
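Because RTF is only three slots, it collapses to a one-line template. A minimal Python sketch (the `rtf` helper name and wording are illustrative assumptions):

```python
def rtf(role: str, task: str, fmt: str) -> str:
    """Assemble a Role-Task-Format prompt, one sentence per slot."""
    # The "Act as ..." phrasing is one common convention, not a requirement.
    return f"Act as {role}. {task} {fmt}"

print(rtf(
    "a copywriter",
    "Write three subject lines for a webinar invite.",
    "Each under 50 characters.",
))
```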

RISEN — Role · Input · Steps · Expectations · Narrowing

Mostly equivalent to ROSES but rebranded. "Narrowing" means constraints.

Not worth memorizing as a separate framework. If you know ROSES, you know RISEN.

Which framework should you actually use

In practice, most experienced prompt writers don't follow any single framework. They follow a shorter mental checklist:

  1. Action verb first (always)
  2. Output format (whenever non-obvious)
  3. Audience (for any voice-sensitive work)
  4. One constraint (length, tone, or both)
  5. One example (only when stakes are high)

That's not a framework, that's a habit. The frameworks above are training wheels for that habit.
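The habit also lends itself to a rough lint pass over a draft prompt. This is an illustrative heuristic sketch only — the verb and keyword lists below are assumptions for demonstration, not a validated scorer:

```python
import re

# Rough heuristic lint for the five-point checklist above.
# The verb and keyword lists are illustrative assumptions, not a validated scorer.
ACTION_VERBS = ("write", "draft", "summarize", "translate", "list", "explain", "diagnose")
FORMAT_HINTS = ("bullet", "table", "json", "tweet", "subject line", "characters", "words")

def lint_prompt(prompt: str) -> list[str]:
    """Return the checklist items the prompt appears to be missing."""
    text = prompt.lower()
    missing = []
    first_word = re.split(r"\W+", text.strip(), maxsplit=1)[0]
    if first_word not in ACTION_VERBS:
        missing.append("action verb first")
    if not any(hint in text for hint in FORMAT_HINTS):
        missing.append("output format")
    if not any(word in text for word in ("for ", "audience", "reader")):
        missing.append("audience")
    if not re.search(r"\b(under|at most|no more than|tone|formal|casual)\b", text):
        missing.append("constraint")
    return missing

print(lint_prompt("Summarize this article."))
```

A real score panel would be far more nuanced; the point is only that each checklist item is a checkable property of the text, not a vibe.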

When stakes are low — drafting, brainstorming, quick code — RTF is sufficient. When stakes are high — production copy, customer-facing content, important reasoning — ROSES or CIDI structures help.

The signal that you don't need a framework

If your prompts consistently score above 80 in the AI Prompt Counter, you don't need a framework. You're already covering the essentials. Frameworks help when your scores are stuck in the 50–70 range — they force coverage of the missing dimensions.

The score panel will tell you which dimensions you're skipping. That's a more useful signal than picking the right named framework.

A meta-framework

If you're going to remember one prompt-engineering rule:

Add what's missing, not what sounds professional.

Most under-performing prompts are missing one specific thing — usually format, sometimes audience, occasionally an example. The frameworks help you notice what's missing. They're a diagnostic, not a recipe.

Once you've used them long enough to internalize the slots, you can drop the acronyms and just write good prompts.
