A working structure for ChatGPT prompts
Six slots that work for nearly every ChatGPT task — short enough to remember, complete enough to use on real work.
by PromptCount Team
If you write ChatGPT prompts for a living — drafting copy, debugging code, summarizing meetings, analyzing data — you eventually settle into a structure. Mine took about a year to find. Sharing it might save you that year.
It's six slots. Memorable enough to use without a template. Complete enough that prompts in this shape consistently score 80+ in our Prompt Score.
The six-slot structure
- Task — what to do, with an action verb
- Audience — for whom
- Output — format, shape, length
- Style — tone, voice, what to avoid
- Context — background the model needs
- Example — one optional reference
Most prompts only need 3 or 4 of these slots filled. The skill is knowing which ones to skip.
Examples in three sizes
Tiny task
Translate this paragraph to Spanish.
One slot: task. The other five are obvious from the context. Don't pad.
Medium task
Task: Draft a Slack message announcing a feature launch. Audience: Engineering team (50 people). Output: Under 80 words, no bullets, ends with a thanks-and-tag of the engineer who shipped it. Style: Casual but professional. Skip "excited to announce."
Four slots filled. Good ratio of instruction to filler.
Heavy task
Task: Review this 600-word draft and rewrite for clarity. Audience: Senior engineering manager reading at 6am, skimming for what they need to know. Output: Same length or shorter. Same structure. Preserve technical terms. Style: Plainspoken, direct. Active voice. Cut hedging language. Context: This is a quarterly engineering summary. Tone needs to match a similar doc the recipient writes — see example. Example: [paste 200-word sample of the recipient's own writing]
All six slots. Used for high-stakes voice-match work.
How to use the structure
Don't write the labels in your actual prompt. They're a mental checklist, not formatting.
What this looks like as a single paragraph:
Draft a Slack message announcing a feature launch to our 50-person engineering team. Under 80 words, no bullets, ends with a thanks-and-tag of the engineer who shipped it. Casual but professional. Skip "excited to announce."
The model never sees "Task:" or "Audience:" — they're scaffolding for you to make sure the prompt is complete.
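If it helps to see the flattening made literal, here is a minimal sketch in Python. The function name and signature are invented for illustration; the point is that the slot names live in the code, not in the prompt the model receives.

```python
def build_prompt(task, audience=None, output=None,
                 style=None, context=None, example=None):
    """Join the filled slots into one paragraph-style prompt.

    The slot labels stay in the function signature -- a checklist
    for the writer -- and never appear in the output itself.
    Only `task` is required; most prompts fill three or four slots.
    """
    slots = [task, audience, output, style, context, example]
    return " ".join(s for s in slots if s)

prompt = build_prompt(
    task="Draft a Slack message announcing a feature launch "
         "to our 50-person engineering team.",
    output="Under 80 words, no bullets, ends with a thanks-and-tag "
           "of the engineer who shipped it.",
    style='Casual but professional. Skip "excited to announce."',
)
```

For the tiny task, `build_prompt(task="Translate this paragraph to Spanish.")` returns just that sentence, with no padding.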
The slot that breaks most prompts
Audience.
Most prompts skip it. The model defaults to a generic mid-internet voice. If you want output that sounds like a human professional, you have to name the human.
"Senior engineering manager at a 200-person company, reading at 6am" produces different output than "developer audience." Different from "general technical reader." Different from "newsletter subscribers."
The more specific the audience, the more voice-aware the output. The fastest improvement most prompts can make is adding 8 words of audience description.
The slot that's overrated
Style.
Style descriptors are useful but limited. "Write in the voice of Paul Graham" produces something faintly PG-shaped but not actually like PG. "Professional but warm" produces neither.
Style works much better when paired with Example. One paragraph of the target style beats five adjectives.
If you skip an Example, expect Style to give you about 60% of what you wanted.
The slot that's underrated
Context.
Most prompts under-specify context because the writer feels it's obvious. It's never obvious to the model.
Rewrite this paragraph.
The model doesn't know who wrote it, who's reading it, what it's for, what came before it, or what comes after. The rewrite is generic by necessity.
Rewrite this paragraph from a quarterly engineering update written by an EM for their VP. The rest of the doc is concise; this paragraph is too long.
Now the model can match register, length, and purpose.
Adding 15 tokens of context routinely improves output more than 100 tokens of style instruction.
What about long system prompts?
For products that wrap ChatGPT, system prompts can be long — sometimes 1,000+ tokens. They're justifiable when:
- You're encoding consistent rules across many interactions
- You're providing a persistent persona or knowledge base
- The system prompt cost amortizes over many requests
For one-off prompts in the ChatGPT interface, long system prompts are usually overkill. The six-slot structure for a single request is enough.
Putting it on rails
To make this a habit:
- Before sending any prompt, ask: "What slots am I filling?"
- If you're using fewer than 3 slots, the task is probably small enough that you don't need more.
- If you're using 4–5 slots, you're in the productive middle.
- If you're trying to use all 6, ask whether the example would replace the style slot — usually it does.
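The checklist above can be reduced to a few lines of code. This is an illustrative sketch, not a real tool; the slot names and thresholds simply restate the rules in the list.

```python
SLOTS = {"task", "audience", "output", "style", "context", "example"}

def slot_check(filled):
    """Return a one-line verdict for a list of filled slot names."""
    n = len(set(filled) & SLOTS)
    if n < 3:
        return f"{n} filled: small task -- probably fine as-is."
    if n <= 5:
        return f"{n} filled: the productive middle."
    return f"{n} filled: all six -- check whether Example makes Style redundant."

slot_check(["task", "audience", "output", "style"])
# the medium Slack-message example lands in the productive middle
```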
Run a week of your prompts through the AI Prompt Counter. The Prompt Score panel is good at flagging which slots are missing. Add what's flagged. Drop everything else.
The structure should feel like scaffolding you can fold away — not a template you fill in.