Five prompt mistakes that waste tokens and time
Common patterns we see in low-scoring prompts — and the small edits that fix them.
by PromptCount Team
After reading tens of thousands of AI prompts (both ours and customers'), the same handful of mistakes show up over and over. None of them are hard to fix. They just don't get noticed until output quality drops or token bills creep up.
Here are the five most expensive ones.
1. The "act as" preamble
Act as a senior marketing expert with 20 years of experience in B2B SaaS who has worked at companies like Stripe, Shopify, and Notion, and is known for writing compelling copy that converts...
This was useful in early GPT-3 days when models needed a strong role nudge. Modern models don't need 50 tokens of pretend-resume to write good copy.
Replace with: "Write copy for a B2B SaaS audience."
Same outcome. 40 fewer tokens.
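You can sanity-check savings like this yourself with a rough character-based estimate (real tokenizers vary, so treat the ~4-characters-per-token heuristic below as a ballpark, not a measurement):

```python
# Rough token estimate using the common ~4 characters/token heuristic.
# This is an approximation for illustration; a real tokenizer such as
# tiktoken will give slightly different counts.
def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

before = ("Act as a senior marketing expert with 20 years of experience "
          "in B2B SaaS who has worked at companies like Stripe, Shopify, "
          "and Notion, and is known for writing compelling copy that converts.")
after = "Write copy for a B2B SaaS audience."

print(estimate_tokens(before) - estimate_tokens(after))  # roughly 40 tokens saved
```

Run your own preambles through the same check; the trimmed version almost always lands within a few tokens of the original's instruction content.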
2. Stacking adjectives instead of describing
Write a captivating, engaging, compelling, attention-grabbing, persuasive intro paragraph for a blog post about productivity.
The five adjectives all point at the same fuzzy idea: "good." The model can't disambiguate them, so it either latches onto one and ignores the rest or averages them into something bland.
Replace with: "Write a blog intro that hooks a reader who scrolled past three other productivity posts today. One sentence, no buildup."
You traded five adjectives for one concrete instruction. The output is sharper and the prompt is shorter.
3. Hidden contradictions
Write a short, comprehensive guide to React Server Components that covers everything a developer needs to know.
"Short" and "comprehensive" are contradictory. "Everything a developer needs to know" makes it worse.
The model resolves the contradiction by giving you something neither short nor comprehensive — a medium-length, surface-level overview. Worst of both worlds.
Pick one:
- "Write a 200-word overview of React Server Components."
- "Write a comprehensive guide to React Server Components, no length limit."
Either is better than the contradictory version.
4. Restating the obvious
You are a helpful AI assistant. Your task is to help the user with their question. Please carefully consider the user's needs and provide a thoughtful, helpful response.
Every modern AI is already trained to be helpful, careful, and thoughtful. These preambles add no information and steal tokens from your actual instruction.
Replace with: [your actual instruction]
Save 30+ tokens per call. If you're running thousands of calls a day, this compounds.
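Here's the back-of-envelope math. All numbers below are illustrative assumptions, not real pricing:

```python
# Sketch of how trimmed preambles compound at scale.
# The call volume and per-million-token price are assumptions
# for illustration only.
def monthly_savings(tokens_saved_per_call, calls_per_day,
                    price_per_million_tokens, days=30):
    tokens = tokens_saved_per_call * calls_per_day * days
    return tokens / 1_000_000 * price_per_million_tokens

# 30 tokens saved, 5,000 calls/day, $3 per million input tokens (assumed)
print(monthly_savings(30, 5_000, 3.0))  # → 13.5 dollars/month
```

Thirty tokens looks like nothing on a single call; multiply it across a production workload and it becomes a line item.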
5. The dump-everything prompt
We see prompts like:
I'm working on a React project. We use TypeScript and Tailwind. The team is 4 people. Our deploy is on Vercel. We use GitHub for version control. We have a design system in Figma. The product is a B2B tool for sales teams. Yesterday I noticed that the table on the dashboard page sometimes shows stale data. Can you help me debug it?
The first seven sentences are context the model doesn't need to answer the question. The actual problem doesn't appear until the second-to-last sentence.
Replace with: "Table on a React dashboard sometimes shows stale data. Suspected stale React Query cache. What's a good debug approach?"
The model doesn't need to know about your design system or your team size to debug a caching issue. Save the context for when it's actually relevant.
The pattern behind all five
These mistakes share a structure: they substitute length for clarity. The instinct is to add more words so the model "understands better." In practice, more words usually dilute the prompt's signal.
Strong prompts tend to have:
- One clear action verb
- Concrete constraints (length, format, audience)
- One specific example when needed
- No filler
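The checklist is mechanical enough to sketch as a toy linter. The heuristics below (the verb list, thresholds, and filler words) are illustrative assumptions, not the actual rules the AI Prompt Counter uses:

```python
import re

# Toy prompt linter for the checklist above. Verb list, thresholds,
# and filler words are made-up heuristics for illustration.
ACTION_VERBS = {"write", "summarize", "list", "explain", "translate",
                "rewrite", "debug", "compare", "draft"}
FILLER = {"please", "carefully", "thoughtful", "helpful", "very", "really"}

def lint_prompt(prompt: str) -> list[str]:
    words = re.findall(r"[a-z0-9']+", prompt.lower())
    issues = []
    if not any(w in ACTION_VERBS for w in words):
        issues.append("no clear action verb")
    repeated = {w for w in words if len(w) > 3 and words.count(w) > 3}
    if repeated:
        issues.append(f"heavy repetition: {sorted(repeated)}")
    if len(words) > 150:
        issues.append("very long prompt")
    if any(w in FILLER for w in words):
        issues.append("filler words")
    return issues

print(lint_prompt("Please write a thoughtful, helpful summary."))
# → ['filler words']
```

Even this crude version catches mistakes #2, #4, and #5; a real scorer adds positive signals too.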
You can check your prompt against this list by running it through the AI Prompt Counter. The Prompt Score panel flags most of these patterns:
- "No clear action verb" — fix mistake #4
- "Heavy keyword repetition" — fix mistake #2
- "Very long prompt" — fix mistake #5
- "Sentences too long" — fix mistake #3
The score also rewards the moves that work: specifying output format, providing context, naming style, setting constraints, including examples.
The trim test
Take your last AI prompt that produced disappointing output. Open the AI Prompt Counter, paste it, and look at the score.
If it's under 70, you almost certainly have one of these five mistakes. Most prompts can be cut by 30–50% with no loss of useful information — and the cut version usually produces better output, not worse.
Try it once. It changes how you write prompts permanently.