How to Write Better AI Prompts: A Practical Guide
Most people type a sentence into ChatGPT, get a mediocre response, and conclude that AI is overhyped. The actual problem? Their prompt was lazy. Not an insult — just a fact. The gap between a bad prompt and a great one is enormous, and almost nobody talks about it in concrete terms.
I spend most of my working hours crafting prompts, testing them, tweaking them, and measuring the results. This guide is everything I've learned, distilled into techniques you can use today. No theory for theory's sake. Every section includes real before-and-after examples.
1. Be Specific (the Biggest Lever)
Vagueness is the number one prompt killer. When you write "help me with my essay," the model has to guess your topic, your audience, your tone, your desired length, and what kind of help you actually want. That's five unknowns. You wouldn't walk into a doctor's office and say "fix me" — so don't do it with AI either.
The fix is embarrassingly simple: say exactly what you want.
Before: Write me a blog post about productivity.
After: Write a 600-word blog post about time-blocking for software developers. Conversational tone, aimed at mid-career devs who already know basic productivity tips. Include one specific daily schedule example. End with a single actionable takeaway.
The second prompt isn't longer for the sake of being longer. Every clause eliminates a guess the model would otherwise have to make. Topic: time-blocking. Audience: mid-career software devs. Tone: conversational. Length: 600 words. Structure: include an example, end with a takeaway.
Here's the mental model I use: think of every prompt as a design brief. You're commissioning work. The more precise the brief, the better the deliverable.
The Specificity Checklist
- Who is this for? Define the audience.
- What format? Email, list, essay, code, JSON?
- How long? Word count, paragraph count, or page count.
- What tone? Formal, casual, technical, persuasive?
- What should it include? Examples, data, specific sections?
- What should it avoid? Jargon, clichés, certain topics?
You don't need all six every time. But running through the list in your head before hitting send will immediately improve your results. If you're working on something more complex — like building a chatbot or writing system instructions — specificity becomes even more critical.
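If you build prompts programmatically, the checklist maps naturally onto a small helper. Here's an illustrative sketch (the field names and output format are my own, not a standard):

```python
def build_prompt(task, audience=None, fmt=None, length=None,
                 tone=None, include=None, avoid=None):
    """Assemble a brief-style prompt from the specificity checklist.

    Every field is optional; each one you fill in removes a guess
    the model would otherwise have to make.
    """
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}.")
    if fmt:
        parts.append(f"Format: {fmt}.")
    if length:
        parts.append(f"Length: {length}.")
    if tone:
        parts.append(f"Tone: {tone}.")
    if include:
        parts.append("Include: " + "; ".join(include) + ".")
    if avoid:
        parts.append("Avoid: " + "; ".join(avoid) + ".")
    return " ".join(parts)

prompt = build_prompt(
    "Write a blog post about time-blocking for software developers.",
    audience="mid-career devs who know basic productivity tips",
    fmt="blog post",
    length="about 600 words",
    tone="conversational",
    include=["one specific daily schedule example"],
    avoid=["generic productivity clichés"],
)
```

The helper is a thinking aid as much as a code convenience: running through the keyword arguments forces you to answer the checklist questions before you hit send.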
2. Give Context, Not Just Instructions
Instructions tell the model what to do. Context tells it why — and the why changes everything.
Think about it from the model's perspective. "Summarize this article" could mean a hundred different things depending on whether you're a student preparing for an exam, a journalist writing a piece, or a CEO deciding whether to invest. Same instruction, wildly different ideal outputs.
Before: Summarize this article about CRISPR.
After: I'm a science journalist writing a piece for a general audience. Summarize this CRISPR article in 3 paragraphs: what happened, why it matters, and what questions remain. Avoid jargon — assume readers know basic biology but nothing about gene editing.
Context works like a filter. It helps the model select from its training data the patterns most relevant to your actual situation. Without that filter, you get generic soup.
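One way to keep the why attached to the what is to template it. A minimal sketch, assuming a simple Context/Task layout of my own invention:

```python
def with_context(context, instruction):
    """Prefix an instruction with the context that should filter the response."""
    return f"Context: {context}\n\nTask: {instruction}"

instruction = "Summarize this article about CRISPR."

# Same instruction, two different contexts -> two very different ideal outputs.
for_journalist = with_context(
    "I'm a science journalist writing for a general audience.", instruction)
for_student = with_context(
    "I'm a biology student preparing for an exam on gene editing.", instruction)
```

The point of the template isn't the string formatting; it's that it makes context a required slot instead of an afterthought.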
This principle matters even more when you're writing persistent instructions — the kind that sit in a CLAUDE.md file or system prompt. Those instructions set the context for every single interaction, so getting them right has a multiplier effect. I've written extensively about how to write effective CLAUDE.md files if you want to go deeper.
3. Role-Based Prompting
This one sounds gimmicky. It isn't. Assigning a role to the AI changes the response distribution in measurable ways.
When you say "You are an experienced tax accountant," the model shifts toward patterns associated with tax expertise: precise language, references to specific regulations, cautious qualifiers where appropriate. It's not that the model "becomes" an accountant — it's that you've narrowed the space of likely responses toward a more useful region.
Before: How should I handle depreciation for my rental property?
After: You are a CPA specializing in real estate taxation for small landlords. I own a single rental property (purchased 2023, $280K). Walk me through how depreciation works for my situation, what forms I'll need, and common mistakes first-time landlords make. Keep it practical — I'm doing my own taxes.
Roles work best when they're specific. "Act as an expert" is almost useless — expert in what? "Act as a senior backend engineer who primarily works in Python and has strong opinions about error handling" — now we're talking.
Roles I use regularly
- Editor with a specific style guide: "You're an editor at The Economist. Tighten this draft."
- Skeptical reviewer: "You're a peer reviewer looking for methodological problems."
- Domain specialist: "You're a DNS engineer troubleshooting an intermittent resolution failure."
- Translator for audience: "You're a senior dev explaining Kubernetes to a product manager."
The role sets the frame. Everything after it gets interpreted through that lens. Use it.
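In chat-style APIs, the role usually goes in the system message rather than the user prompt. A sketch assuming the common system/user message-list convention (adapt it to whichever client library you actually use):

```python
def role_messages(role, user_prompt):
    """Build a chat-style message list with the role as the system message.

    The system/user structure below is the convention most chat APIs share;
    the role string frames how every later message gets interpreted.
    """
    return [
        {"role": "system", "content": role},
        {"role": "user", "content": user_prompt},
    ]

messages = role_messages(
    "You are a CPA specializing in real estate taxation for small landlords.",
    "Walk me through how depreciation works for a single rental property.",
)
```

Putting the role in the system slot rather than inline has a practical benefit: it persists across the whole conversation instead of competing with each new user message.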
4. Chain-of-Thought Prompting
Some problems can't be solved in one jump. Math, logic puzzles, multi-step analysis, debugging — these require sequential reasoning. Chain-of-thought (CoT) prompting is how you get the model to actually work through a problem step by step instead of pattern-matching to the most common answer.
The simplest version: add "Think through this step by step" to your prompt. But you can do better.
Before: A store has 3 types of fruit. Apples cost $2, bananas $1, oranges $3. I need exactly 10 fruits and want to spend exactly $18. How many of each?
After: A store has 3 types of fruit. Apples cost $2, bananas $1, oranges $3. I need exactly 10 fruits and want to spend exactly $18.
Work through this step by step:
1. Set up the equations
2. Identify the constraints
3. Solve systematically
4. Verify your answer by checking both the count and the total cost
Why does this work? Two reasons. First, it makes the model spend more computation on the problem: every token of intermediate reasoning is extra processing applied before the final answer. Second, it creates checkpoints. When the model writes out intermediate steps, each step constrains the next, reducing the chance of a logic error cascading through.
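The solve-and-verify steps in the prompt above can also be checked mechanically. Here's a brute-force sketch in Python (my own illustration, not part of the prompt) that enumerates every combination. Notably, it finds more than one valid answer, which is exactly the kind of ambiguity a verification step surfaces:

```python
# Enumerate every (apples, bananas, oranges) combination and keep the ones
# that satisfy both constraints: exactly 10 fruits and exactly $18.
solutions = []
for apples in range(11):
    for bananas in range(11 - apples):
        oranges = 10 - apples - bananas          # constraint: exactly 10 fruits
        cost = 2 * apples + 1 * bananas + 3 * oranges
        if cost == 18:                           # constraint: exactly $18
            solutions.append((apples, bananas, oranges))

print(solutions)
# → [(0, 6, 4), (2, 5, 3), (4, 4, 2), (6, 3, 1), (8, 2, 0)]
```

A model that stops at the first combination it stumbles into will never notice the puzzle is underdetermined; asking it to verify against both constraints makes that visible.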
CoT is especially powerful for:
- Math and logic problems
- Code debugging (walk through the execution path)
- Complex comparisons (evaluate criteria one at a time)
- Writing that requires research synthesis
If you're building tools that need to test how well AI follows instructions, chain-of-thought prompting is one of the best ways to improve accuracy on complex tasks.
5. Constraints That Actually Help
This is counterintuitive: giving the model less freedom often produces better output. Constraints are creative scaffolding. They force the model away from default patterns and toward more thoughtful responses.
But not all constraints are useful. "Write exactly 347 words" is pointless busywork. Good constraints serve your actual goal.
Write a product description for noise-canceling headphones.
Constraints:
- Max 80 words (it's for a product card)
- No superlatives ("best", "amazing", "incredible")
- Must mention: battery life, weight, and ANC rating
- Write for someone comparing 3 products side-by-side, not browsing casually
- End with one differentiating feature, not a generic CTA
Each of those constraints does work. The word limit forces concision. Banning superlatives kills the generic marketing voice. Required mentions ensure completeness. The audience note shapes tone. The ending instruction prevents the lazy "Buy now!" finish.
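Constraints like these have another benefit: they're mechanically checkable. A rough sketch of a validator (the superlative list and the naive substring matching are my own simplifications; a real version would need tuning):

```python
import re

SUPERLATIVES = {"best", "amazing", "incredible"}   # banned words from the prompt
REQUIRED_MENTIONS = ["battery", "weight", "anc"]   # topics the copy must cover

def check_constraints(text):
    """Return a list of constraint violations for a product-card draft."""
    problems = []
    words = re.findall(r"[a-z']+", text.lower())
    if len(words) > 80:
        problems.append(f"too long: {len(words)} words (max 80)")
    for banned in SUPERLATIVES & set(words):
        problems.append(f"superlative used: {banned!r}")
    for topic in REQUIRED_MENTIONS:
        if topic not in text.lower():
            problems.append(f"missing required mention: {topic!r}")
    return problems

draft = ("Lightweight at 250 g with 30-hour battery life and an ANC rating "
         "tuned for open offices.")
print(check_constraints(draft))   # → []
```

This is the real payoff of concrete constraints: when they're specific enough to validate in code, they're specific enough for the model to follow.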
My favorite constraint types
Format constraints: "Respond as a bulleted list," "Use a table with columns for X, Y, Z," "Return valid JSON." Format constraints are nearly always helpful because they impose structure the model can reason within.
Exclusion constraints: "Don't use the word 'utilize,'" "No analogies," "Don't start with 'Sure!'" These prune the model's most overused patterns. The Humanizer tool I built works on this exact principle — removing the tics that make AI text sound robotic.
Audience constraints: "Explain this so a 12-year-old would understand," "Write for someone who already knows React but not Next.js." These are context and constraint combined.
Scope constraints: "Only discuss the security implications, not performance," "Focus on the last 3 months of data." These prevent the model from going wide when you need depth.
6. Iteration: The Technique Nobody Teaches
Here's what separates people who get good results from people who don't: iteration. The first response is a draft. Treat it like one.
Nobody writes a perfect prompt on the first try. And that's fine. What matters is that you know how to steer.
The iteration loop
1. Send your prompt. Get the initial response.
2. Diagnose the gap. What's wrong: tone, length, depth, accuracy, format?
3. Give targeted feedback. Don't rewrite the whole prompt. Address the specific issue.
4. Repeat until satisfied.
First response too generic?
→ "This is too high-level. Give me specific numbers and cite particular studies."
Too formal?
→ "Rewrite this but more conversational. Short sentences. Some fragments are fine."
Missing the point?
→ "You focused on X but I actually need Y. The key question is [restate it]."
Almost right?
→ "Good structure. But section 3 contradicts what you said in section 1. Reconcile them."
Notice how each correction is specific and actionable. "Make it better" tells the model nothing. "The introduction is too long — cut it to two sentences and start with the key finding" tells it exactly what to change.
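The loop above can be sketched in code. This is a structural sketch only: `ask_model` and `diagnose` are hypothetical placeholders for your own model client and review step, not a real API:

```python
def iterate(ask_model, prompt, diagnose, max_rounds=3):
    """Run the send → diagnose → targeted-feedback loop.

    ask_model(messages) returns the model's response for a message list;
    diagnose(response) returns a specific piece of feedback, or None when
    the response is good enough. Both are placeholders you supply.
    """
    messages = [{"role": "user", "content": prompt}]
    response = ask_model(messages)
    for _ in range(max_rounds):
        feedback = diagnose(response)
        if feedback is None:
            break
        # Targeted feedback addresses one specific gap, not a full rewrite.
        messages += [{"role": "assistant", "content": response},
                     {"role": "user", "content": feedback}]
        response = ask_model(messages)
    return response

# Stub model for illustration: returns a terse draft, then a fuller one.
drafts = iter(["Too short.", "A fuller draft with specifics."])
result = iterate(lambda msgs: next(drafts), "Write an intro.",
                 lambda r: "Add specifics." if "specifics" not in r else None)
```

Note that the loop appends the previous response before the feedback: the correction only works because the model can see what it's correcting.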
This iterative approach is the same principle behind tools like the AI Prose Improver and AI Text Auditor — they work by giving the model targeted, specific feedback on what to fix.
7. Putting It All Together
Let's combine everything. Here's a prompt that uses specificity, context, role, chain-of-thought, and constraints — all working together:
You are a senior technical writer at a developer tools company.
Context: I'm writing documentation for a new REST API endpoint that creates user accounts. The audience is developers who've used REST APIs before but are new to our platform. Our docs are known for being concise and example-heavy.
Task: Write the documentation page for this endpoint:
POST /api/v1/users
- Required fields: email, password, display_name
- Optional fields: avatar_url, timezone, locale
- Returns: 201 with user object, or 400/409 with error
Constraints:
- Lead with a curl example, not prose
- Include one example for success and one for a 409 (email already exists)
- Keep the description under 100 words
- Use a parameters table with columns: Field, Type, Required, Description
- Don't explain what REST or JSON is
That's a prompt that gives the model almost no room to go wrong. Every decision is made upfront: format, length, structure, audience assumptions, and what to include. The model's job is execution, not guessing.
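If you assemble prompts like this often, the role/context/task/constraints skeleton is easy to generate. A sketch (the section labels are my own convention, mirroring the example above):

```python
def full_prompt(role, context, task, constraints):
    """Combine the techniques into one prompt: role, context, task, constraints."""
    lines = [role, "",
             f"Context: {context}", "",
             f"Task: {task}", "",
             "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = full_prompt(
    role="You are a senior technical writer at a developer tools company.",
    context="Docs for a REST endpoint; readers know REST but not our platform.",
    task="Write the documentation page for POST /api/v1/users.",
    constraints=["Lead with a curl example, not prose",
                 "Keep the description under 100 words"],
)
```

The ordering is deliberate: role first to set the frame, context before the task so the task is read through it, constraints last so they're freshest when the model starts generating.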
Quick Reference: The Prompt Improvement Checklist
Before you send your next important prompt, run through this:
- Is it specific? Could someone else read this prompt and produce roughly the same output?
- Does it include context? Does the model know who you are, what this is for, and why?
- Would a role help? Would a specific persona produce better results than a generic one?
- Does it need step-by-step reasoning? If the task is complex, ask for chain-of-thought.
- Are there useful constraints? Format, length, exclusions, audience — anything that eliminates generic output.
- Am I ready to iterate? Have a plan for what you'll do when the first response isn't perfect.
The Bigger Picture
Prompt engineering isn't a party trick. It's a core skill for anyone who works with AI — and that's going to be most knowledge workers within a few years. The difference between someone who knows these techniques and someone who doesn't will show up in the quality of their work, the time they save, and the problems they can tackle.
Start with specificity. It's the single highest-leverage change you can make. Then layer in context and constraints as needed. Role-based prompting and chain-of-thought are tools for specific situations — you don't need them every time, but when you do, they're transformative.
And remember: iterate. The prompt is a conversation, not a command. Treat it like one and your results will follow.
Want to put these techniques into practice? Try the Andy AI Chat — it's free, no login required. Or explore the CLAUDE.md Writer to craft system-level prompts that shape every interaction.