The difference between a mediocre AI output and an excellent one almost always comes down to the prompt. Not the model. Not the tool. The prompt. Output quality is determined almost entirely by input quality — and most people are writing inputs that make the model's job unnecessarily hard.

This guide covers seven practices that will immediately improve your results with ChatGPT, Claude, Gemini, or any other large language model.

1. Assign a Role Before You Ask Anything

AI models have no context about who you are or what you need unless you provide it. The single most powerful thing you can do before asking your question is tell the model who it should be.

This isn't a magic trick — it works because it primes the model to activate the right knowledge, tone, and reasoning patterns. A "senior contract attorney" and a "plain-language writer" will approach the same contract very differently. Both responses might be technically correct, but only one is what you need.

❌ Weak prompt
Review this contract and tell me if there are any issues.
✅ Strong prompt
Act as a senior contracts attorney specializing in B2B SaaS agreements. Review the following contract and identify: 1) Terms that unfairly favor the other party, 2) Missing standard clauses, 3) Vague language that could be exploited. Flag each issue as HIGH/MEDIUM/LOW risk.
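If you reach the model through an API rather than a chat window, the role conventionally goes in the system message so it frames everything that follows. Here's a minimal sketch of that pattern — the function name and defaults are ours, not any vendor's API:

```python
def build_prompt(task: str, role: str = "") -> list[dict]:
    """Assemble a chat-style message list, optionally priming the model with a role."""
    messages = []
    if role:
        # The role goes first, as a system message, so it frames the whole exchange.
        messages.append({"role": "system", "content": f"Act as {role}."})
    messages.append({"role": "user", "content": task})
    return messages

weak = build_prompt("Review this contract and tell me if there are any issues.")
strong = build_prompt(
    "Review the following contract and identify: 1) Terms that unfairly favor "
    "the other party, 2) Missing standard clauses, 3) Vague language that could "
    "be exploited. Flag each issue as HIGH/MEDIUM/LOW risk.",
    role="a senior contracts attorney specializing in B2B SaaS agreements",
)
```

The weak version produces a single bare user message; the strong version adds the role up front, before the model sees the task.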

2. Give More Context Than You Think You Need

The most common prompt mistake isn't asking the wrong question — it's asking the right question with too little background. AI doesn't know your industry, your constraints, your audience, or your specific situation unless you tell it.

Think of it like briefing a new consultant. You wouldn't say "fix our marketing." You'd explain who your customers are, what's broken, what you've tried, and what success looks like. The more relevant context you provide, the more targeted and useful the output.

Key Insight

Context you should almost always include: your role or expertise level, your audience, relevant constraints (budget, time, jurisdiction), and what you'll do with the output.

3. Specify the Format Explicitly

If you don't tell the model what format you want, it'll guess — and it usually guesses wrong for your specific use case. Do you need a bulleted list, a numbered procedure, a table, a formal memo, prose paragraphs, or JSON? Say so.

This matters more than most people realize. The same content in the wrong format wastes your time. A legal analysis that's perfect as a memo is useless if you needed talking points. An answer formatted as prose is frustrating when you needed a checklist to work through step by step.

Useful format specifiers:

  • "Format this as a numbered list"
  • "Present this as a comparison table with columns for X, Y, and Z"
  • "Draft this as a professional memo with TO/FROM/DATE/RE headers"
  • "Give me 5 bullet points of no more than 15 words each"
  • "Write this at a 6th-grade reading level"

4. Replace Vague Words With Specific Requirements

Words like "detailed," "comprehensive," "thorough," and "professional" give an AI model almost nothing to work with. They're subjective — what's thorough to one person is shallow to another. Replace them with specific, measurable requirements.

❌ Vague
"Write a detailed marketing strategy."
✅ Specific
"Write a marketing strategy that covers: 1) Target audience definition with 3 firmographic criteria, 2) The 2 primary acquisition channels with rationale, 3) A 90-day activation plan with weekly milestones, 4) The 3 most important metrics to track. Limit to 600 words."

5. Use Examples to Set the Standard

If you have a specific style, tone, or structure in mind, show the model an example. This technique — called few-shot prompting — dramatically narrows the interpretation space and gets you outputs that match what you're actually picturing.

You can provide examples of the format you want, the tone you're targeting, or the type of analysis you need. Even one good example makes a significant difference.

Pro tip

If you have a past output you liked, paste it into the prompt and say: "Use this as a style reference for the format and tone of your response. Don't use the same content — just the structure and voice."
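For API users, few-shot prompting is simply a message list where example input/output pairs precede the real question. A minimal sketch (the example pair below is an invented placeholder, not real data):

```python
def few_shot_messages(examples: list[tuple[str, str]], question: str) -> list[dict]:
    """Build a chat message list where each (input, output) example pair
    precedes the real question, showing the model the expected format."""
    messages = []
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": question})
    return messages

# Even one example is often enough to anchor tone and structure.
msgs = few_shot_messages(
    [("Summarize: revenue grew 12% in Q2.", "• Revenue: +12% QoQ")],
    "Summarize: churn fell 3% in Q3.",
)
```

The model sees the worked example answered in your target format, then applies the same structure to the final question.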

6. Define Constraints and Boundaries

Tell the model what NOT to do, not just what to do. This is particularly useful when you've found that models default to unhelpful behaviors — like hedging everything with "consult a professional," padding responses with unnecessary caveats, or including sections you don't need.

Explicit constraints also help with scope. Without them, models often either under-deliver (too short, too shallow) or over-deliver (so comprehensive it takes 20 minutes to read when you needed a 5-minute answer).

Useful constraint patterns:

  • "Do not include general disclaimers — I know this is not legal advice"
  • "Limit the response to 400 words"
  • "Do not suggest I hire a consultant — give me the analysis directly"
  • "Focus only on [X] — do not cover [Y] or [Z]"

7. Treat Prompting as Iteration, Not a Single Shot

The best prompt engineers don't get perfect outputs on the first try. They use the first output as a starting point and iterate. The model maintains context throughout the conversation — you can refine, redirect, ask follow-up questions, and build on good sections without starting over.

Useful follow-up patterns after an initial output:

  • "The first point is good. Expand on that one specifically."
  • "Rewrite section 2 with a more conservative risk assessment."
  • "This is too long. Condense to the 3 most important points."
  • "Now apply the same analysis to [scenario B]."

This iterative approach is especially powerful for complex tasks — financial models, legal documents, marketing strategies — where the first draft is always a starting point, not a finished product.

Putting It Together

A well-structured prompt has five elements: a role, sufficient context, a clear task, explicit format requirements, and defined constraints. You won't always need all five, but when you're not getting the results you want, run through this checklist — you'll almost always find something missing.
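The five-element checklist can be sketched as a simple template function. This is an illustration of the structure, not a library API — every field name here is our own, and any element left empty is simply skipped:

```python
def assemble_prompt(role="", context="", task="", fmt="", constraints="") -> str:
    """Join the five prompt elements, omitting any that are empty."""
    sections = [
        ("Act as", role),
        ("Context", context),
        ("Task", task),
        ("Format", fmt),
        ("Constraints", constraints),
    ]
    return "\n\n".join(f"{label}: {value}" for label, value in sections if value)

prompt = assemble_prompt(
    role="a senior contracts attorney",
    task="Review the attached NDA for one-sided terms.",
    fmt="A numbered list with a HIGH/MEDIUM/LOW risk flag per item.",
    constraints="Limit to 300 words. No general disclaimers.",
)
```

When an output disappoints, reading back through which of the five fields you left blank is usually the fastest diagnosis.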

The professionals who get the most out of AI tools aren't the ones using more advanced models — they're the ones who've learned to communicate precisely what they need. That's a learnable skill, and these practices are where it starts.

Skip the prompt-writing and get straight to work

PromptSonar's prompt library has 75+ expert-crafted prompts for legal, finance, healthcare, and marketing professionals — all structured with these best practices built in.

Browse Free Prompt Library →

If these fundamentals were useful, the next step is understanding where most people go wrong. Read AI Prompt Engineering: Common Mistakes and How to Fix Them to go deeper, or see how these principles apply specifically to legal professionals using ChatGPT.