Be clear and direct
Say exactly what you want, to whom, in what form.
Most "bad AI output" is a response to an ambiguous prompt. Models pattern-match to the most likely interpretation of vague input. Specifying your task, audience, and desired format resolves that ambiguity up front.
Before
tell me about photosynthesis
After
Explain photosynthesis to a high school biology student who already knows about cell structure. In 150 words, cover: what enters, what exits, and why it matters for the carbon cycle. Use one concrete analogy.
Why it works
The second prompt fixes four things: the audience (so the vocabulary is right), the length (so the model doesn't over-deliver), the structure (so key points aren't missed), and the style (so it's concrete instead of abstract).
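The same specificity translates directly when you build prompts in code. A minimal sketch in Python; the function and parameter names here are illustrative, not from any library:

```python
def build_prompt(topic: str, audience: str, word_limit: int,
                 must_cover: list[str], style_note: str) -> str:
    """Assemble a prompt that pins down task, audience, length, structure, and style."""
    return (
        f"Explain {topic} to {audience}. "
        f"In {word_limit} words, cover: {', '.join(must_cover)}. "
        f"{style_note}"
    )

prompt = build_prompt(
    topic="photosynthesis",
    audience="a high school biology student who already knows about cell structure",
    word_limit=150,
    must_cover=["what enters", "what exits", "why it matters for the carbon cycle"],
    style_note="Use one concrete analogy.",
)
print(prompt)
```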
Role prompting
Give the model a perspective before giving it a task.
Assigning a role shapes vocabulary, default assumptions, and depth of explanation. It isn't about trickery; it's about specifying the voice and expertise level you want the response delivered in.
Before
review this paragraph for problems
After
You are an academic writing tutor for undergraduate students. Review the paragraph below for three things: clarity of the main claim, evidence-to-claim fit, and topic sentence strength. Suggest one specific revision for each.
Paragraph: [...]
Why it works
The role (academic writing tutor) narrows the feedback to what matters for the writer. The three-point structure prevents a generic "here are ten things" list and forces prioritization.
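If you're calling a model through an API rather than a chat window, the role typically belongs in the system prompt, separate from the task. A sketch using the Anthropic Python SDK; the model name is a placeholder, so substitute whatever you actually use:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=500,
    # The role shapes vocabulary and depth; it lives in the system prompt.
    system="You are an academic writing tutor for undergraduate students.",
    messages=[{
        "role": "user",
        "content": (
            "Review the paragraph below for three things: clarity of the main "
            "claim, evidence-to-claim fit, and topic sentence strength. "
            "Suggest one specific revision for each.\n\nParagraph: [...]"
        ),
    }],
)
print(response.content[0].text)
```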
Few-shot examples
Show two or three examples of the output you want. Then ask for another.
Describing a style or format in words is imprecise. Showing examples is dramatically more precise. Few-shot prompting is the single biggest quality upgrade for any task with a consistent structure (summaries, rewrites, classifications, formatting).
Before
summarize these meeting notes in a useful way
After
Summarize meeting notes in this format:
Decisions: [comma-separated]
Open questions: [as questions]
Next steps: [who · what · by when]
Example 1:
Decisions: Moving to Monday standups; dropping the Friday retro
Open questions: Who owns the analytics dashboard?
Next steps: Mohamad · draft outline · Thursday
Example 2:
Decisions: Shipping search v2 behind a feature flag
Open questions: Do the new terms need legal review?
Next steps: Priya · update the changelog · Friday
Now summarize these notes: [...]
Why it works
The examples teach the model the exact shape, tone, and granularity you want. Patterns communicate what instructions can't.
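Over an API, the same examples can be supplied as prior conversation turns, so the model sees completed instances of the task rather than a description of one. A sketch, with the same placeholder model name as above:

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=300,
    system="Summarize meeting notes as three lines: Decisions, Open questions, "
           "Next steps (who · what · by when).",
    messages=[
        # A worked example as a prior exchange teaches the exact output shape.
        {"role": "user", "content": "Summarize these notes: [...]"},
        {"role": "assistant", "content": (
            "Decisions: Moving to Monday standups; dropping the Friday retro\n"
            "Open questions: Who owns the analytics dashboard?\n"
            "Next steps: Mohamad · draft outline · Thursday"
        )},
        # The real task goes last.
        {"role": "user", "content": "Summarize these notes: [...]"},
    ],
)
print(response.content[0].text)
```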
Chain-of-thought
Ask the model to think before it answers.
On reasoning tasks (math, logic, multi-step analysis), asking the model to explain its thinking step-by-step before giving a final answer measurably improves accuracy. You also get a reasoning trace you can check for errors.
Before
A lab has 3 researchers. Each runs 4 experiments per week, each experiment needs 2 runs, and 15% of runs fail and need redoing. How many runs per week?
After
A lab has 3 researchers. Each runs 4 experiments per week, each experiment needs 2 runs, and 15% of runs fail and need redoing.
Think step by step:
1. Compute total experiments.
2. Compute baseline runs.
3. Compute additional runs from 15% failure rate.
4. Sum for weekly total.
Show each step, then give the final number on its own line.
Why it works
Forcing the model to externalize intermediate steps reduces the chance of skipping a computation, and lets you catch an error at step 2 before it contaminates step 4.
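For reference, here is the arithmetic those steps should produce, under the simplest reading of the problem (failed runs are redone once and the redos succeed; if redos can also fail, the expected total is 24 / 0.85 ≈ 28.2 instead):

```python
researchers = 3
experiments_per_researcher = 4
runs_per_experiment = 2
failure_rate = 0.15

experiments = researchers * experiments_per_researcher  # 12 experiments
baseline_runs = experiments * runs_per_experiment       # 24 runs
redo_runs = baseline_runs * failure_rate                # 3.6 expected redos
total_runs = baseline_runs + redo_runs                  # 27.6, call it ~28 runs/week
print(experiments, baseline_runs, redo_runs, total_runs)
```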
Decomposition
Break big tasks into a sequence of smaller prompts.
A single mega-prompt asking for analysis + writing + formatting + citations will produce mediocre output in all four dimensions. Running each step as its own prompt, with the previous output as input to the next, produces noticeably better results with clearer points to verify.
Before
Read this article and write me a 500-word literature review paragraph with citations comparing it to two other relevant sources.
After
Step 1: Summarize the article's main argument and method in 80 words.
(verify → next)
Step 2: Based on the summary, list three specific claims this article makes that I could compare to other sources.
(verify → next)
Step 3: For each claim, suggest the type of source that would support or challenge it.
(verify → next)
Step 4: Draft a 500-word paragraph comparing the article to [sources I provide after step 3].
Why it works
You verify at each step. Errors can't compound silently. And each sub-task is simple enough that the model performs well on it.
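Decomposition maps naturally onto a chain of API calls, each step's output becoming the next step's input. A sketch; in practice you would inspect each intermediate result before continuing, and the model name is again a placeholder:

```python
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    """Run one sub-task as its own prompt."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=800,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

article = "[...]"  # the article text goes here

summary = ask(f"Summarize this article's main argument and method in 80 words:\n\n{article}")
# verify, then:
claims = ask(f"Based on this summary, list three specific claims that could be "
             f"compared to other sources:\n\n{summary}")
# verify, then:
source_types = ask(f"For each claim below, suggest the type of source that would "
                   f"support or challenge it:\n\n{claims}")
```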
Constraints & format
Tell the model what NOT to do, what length, what shape.
Negative constraints ("don't use jargon", "no introduction", "no more than 3 bullets") are often more useful than positive ones. Format constraints (exact length, specific structure, required sections) keep the output usable instead of a wall of prose you then have to re-shape.
Before
write me a short email asking my professor for an extension
After
Write a short email to my professor asking for a 48-hour extension on the midterm paper.
Constraints:
- Under 120 words
- No apologizing more than once
- Do not mention reasons I haven't specified
- Plain text, no markdown
Context: I've been managing a family situation I'd rather not detail.
Why it works
Constraints prevent the model's default tendencies (over-apologizing, over-explaining, generic sympathy) and protect your voice. The "do not mention reasons I haven't specified" line alone rescues most of these emails.
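Mechanical constraints have another advantage: you can check them after the fact. A deliberately crude, illustrative sketch that flags two of the constraints above on a drafted email:

```python
def violated_constraints(draft: str, max_words: int = 120) -> list[str]:
    """Return the constraints a drafted email breaks; an empty list means it passes."""
    violations = []
    if len(draft.split()) > max_words:
        violations.append(f"longer than {max_words} words")
    apologies = draft.lower().count("sorry") + draft.lower().count("apolog")
    if apologies > 1:
        violations.append("apologizes more than once")
    return violations

draft = "Dear Professor Lee, I'm writing to ask for a 48-hour extension..."
print(violated_constraints(draft))  # [] if the draft passes
```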
Iterative refinement
Treat the first response as a draft, not the deliverable.
Novice users accept the first output or start over from scratch. Skilled users refine. Ask what's weak. Ask for the strongest version of a specific point. Ask for three alternatives to a passage you don't like. The best prompt is almost always the third one.
Before
(get mediocre first response)
make it better
After
(get mediocre first response)
Tighten the second paragraph: it's vague where it needs specifics. Replace 'many studies' with either a number or a specific example.
Then, rewrite the opening sentence three different ways: one factual, one with a question, one starting from a specific observation. Show all three; I'll pick.
Why it works
"Make it better" has no target. Specific, localized revision requests with concrete replacement instructions give the model something to aim at. And asking for alternatives lets you choose instead of edit.