How to Prompt Claude and ChatGPT - The Real Rules That Work

TL;DR
Most people prompt AI like they're texting a friend and then wonder why the output is generic — the fix isn't a better model, it's better structure. Eight rules handle 90% of bad output: be specific, break big asks into small ones, build the foundation before the polish, state what to avoid, feed in images and examples, paste errors back in instead of restarting, edit one piece at a time, and swap vague adjectives for technical words. Works the same on Claude, ChatGPT, Gemini, or Perplexity — model choice barely matters once your prompt structure is right. The post ends with a reusable prompt template you can paste and fill in.
Someone DM'd me last week asking why ChatGPT kept giving them "boring, generic marketing emails." I asked to see their prompt. It said: "write a marketing email for my business."
That's the whole prompt. That's the whole problem.
AI doesn't read minds. It reads instructions. If your instructions are four words long, the output is going to be whatever the model thinks an average person wants — which is, by definition, average. The gap between a mid prompt and a sharp one is not talent. It's structure. Here are the eight rules I use every day across Claude, ChatGPT, Gemini, and Perplexity. They work regardless of which model you pick.
1. Be specific or eat generic output
The single biggest prompt mistake is leaving too much to the model's imagination. When you say "write a landing page," the AI has to guess: for what product, for whom, in what tone, at what length, with what CTA. It will guess averages. You get averages.
Compare these two prompts I used last month:
- Bad: "Write a landing page for my coaching business."
- Good: "Write a landing page for a mindset coach targeting burnt-out agency owners aged 35-45 in Sydney. 450-600 words. Conversational but authoritative — no hype words. Include one hero section with a single question headline, three pain bullets, a 3-step program overview, and a CTA for a 20-min discovery call. Avoid 'transform' and 'unlock.'"
The second one gave me something I could ship with ten minutes of editing. The first needed a full rewrite.
Specificity isn't about writing more — it's about writing tighter. Every sentence in your prompt should remove ambiguity, not add words. We went deeper on this distinction in what's the difference between prompt engineering and context engineering, because they're related but not the same thing.
2. Break big asks into small ones
AI models handle one well-defined task better than six interlinked ones. If you ask ChatGPT to "build me a website, write the copy, pick the colours, create a logo, and generate social posts," it will produce something for each — but none of it will be great, and when one piece is off you'll have to unpick a giant tangled mess.
Here's how I actually do it for a client landing page:
- Prompt 1 — structure: "Draft a wireframe layout for the page, section by section, as a bullet list."
- Prompt 2 — copy for each section, one at a time
- Prompt 3 — design direction: "Given that copy, suggest a colour palette and typography system"
- Prompt 4 — refine the weakest section only
- Prompt 5 — generate image prompts for the hero and supporting visuals
Each step has a narrow goal. Each output can be checked before moving on. Errors don't compound.
The technical name for this is decomposed prompting. The plain-English name is "stop dumping everything in at once."
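The five-step chain above can be sketched in code. This is a minimal illustration, not a real client: `call_llm` is a hypothetical stand-in for whichever API you use (Anthropic, OpenAI, etc.) — swap in the real call. The point is the shape of the chain: each step is narrow, and each output is available for checking before it feeds the next prompt.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call — replace with your client of choice.
    return f"[model output for: {prompt[:40]}...]"

def build_landing_page(brief: str) -> dict:
    # Step 1 — structure first, as a checkable bullet list
    wireframe = call_llm(
        "Draft a wireframe layout for this page, section by section, "
        f"as a bullet list. Brief: {brief}"
    )

    # Step 2 — copy for each section, one prompt per section
    sections = ["hero", "pain points", "program overview", "CTA"]
    copy = {
        s: call_llm(f"Write the {s} section copy. Wireframe: {wireframe}")
        for s in sections
    }

    # Step 3 — design direction only after the copy exists
    design = call_llm(
        "Given this copy, suggest a colour palette and "
        f"typography system: {copy}"
    )

    return {"wireframe": wireframe, "copy": copy, "design": design}
```

Because each step returns a separate artefact, a bad wireframe gets caught at step 1 instead of silently poisoning the copy, the palette, and everything downstream.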
3. Build the foundation before the polish
This one trips up people who are new to building with AI. They start with the fun stuff — animations, colour, copy flair — before the core is locked down. Then a structural change later forces them to redo the polish.
Order of operations that actually works:
- → Lock the goal and main user flow first
- → Get the bare structure working end to end
- → Layer on the logic and edge cases
- → Style it, polish it, tweak copy last
If you're prompting an AI to build an app — or even just a long doc — resist the urge to iterate on visuals until the bones are right. Six prompts into a redesign is a bad time to realise your information architecture was wrong from the start.
4. Tell it what NOT to do
Most people only tell AI what they want. Half the prompt's power is in what you exclude.
When I prompt Claude for copy, I add things like:
- "No em dashes at the start of sentences"
- "Don't use the words leverage, robust, empower"
- "Skip the intro sentence — start with the first real point"
- "No sign-off, no 'hope this helps'"
When I prompt for code:
- "Don't add comments for obvious lines"
- "Don't create a new file — edit the existing one"
- "Don't add tests in this step"
- "No TypeScript generics unless strictly needed"
Constraints narrow the solution space. A narrower space means the AI can't drift into its default patterns, which is where the generic vibe comes from. If you want to go further on how constraints change model behaviour, how AI prompts are changing the game in 2025 covers the shift from "ask nicely" to "design the guardrails."
5. Feed it images, screenshots, and examples
Words are lossy. Images are not. If you can show the AI what you want, do.
Real examples I've used in the last month:
- Screenshot of a competitor's pricing page → "match this structure and density but for my product"
- Photo of a napkin sketch → "turn this into a clean wireframe in Mermaid syntax"
- Screenshot of a broken UI → "here's what it's rendering, here's what I want — fix it"
- Paste of three subject lines that worked → "write eight more in this exact voice"
Claude, ChatGPT, and Gemini all accept image uploads. Use them. One screenshot saves a page of description, and it eliminates the "that's not what I meant" loop.
One thing to watch: don't feed it copyrighted designs and ask for a copy. Ask for "something inspired by this style, original layout."
6. When something breaks, paste the error back in
When the output is wrong — whether it's a broken code snippet, a weird tone, or a factual miss — do not restart the conversation. Paste the exact problem back in.
For code:
Got this error running your snippet:
TypeError: Cannot read property 'map' of undefined at line 14
The issue is that 'users' can be null before the fetch resolves.
Fix line 14 to handle that case without rewriting the rest.
For copy:
The second paragraph contradicts the first — you say "quick setup" then
describe a 3-step process with manual config. Make the second paragraph
consistent with the "under 10 minutes" promise.
Targeted corrections cost fewer tokens, keep the good parts, and usually land in one shot. Full regenerations throw away context you already paid for.
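If you're doing this for code programmatically, the paste-the-error-back loop looks like the sketch below. Again, `call_llm` is a hypothetical stub standing in for a real model client — the loop logic is the point, not the stub. It runs the generated snippet, and on failure feeds the exact traceback back instead of regenerating from scratch.

```python
import traceback

def call_llm(prompt: str) -> str:
    # Stand-in model reply — replace with a real API call.
    return "def parse(x):\n    return int(x)"

def fix_until_it_runs(task: str, test_input: str, max_rounds: int = 3) -> str:
    code = call_llm(task)
    for _ in range(max_rounds):
        try:
            namespace = {}
            exec(code, namespace)           # run the generated snippet
            namespace["parse"](test_input)  # exercise it on a real input
            return code                     # it worked — keep this version
        except Exception:
            # Feed the exact traceback back instead of restarting the chat
            error = traceback.format_exc()
            code = call_llm(
                f"Got this error running your snippet:\n{error}\n"
                "Fix only the failing part, keep the rest unchanged.\n"
                f"Current code:\n{code}"
            )
    return code
```

Note what the correction prompt contains: the literal error, a scope limit ("only the failing part"), and the current code. That's the same anatomy as the manual examples above.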
7. Edit one piece at a time, not the whole thing
If you're using Claude Code, Cursor, or any AI that lets you select and edit specific sections — use that feature aggressively. Regenerating an entire document to fix a two-sentence issue is wasteful and often makes other sections worse.
My rule: if the fix is localised, the prompt should be localised.
- Heading 3 is weak? Highlight it, prompt "rewrite this heading to be more specific"
- One bullet feels off? Select it, ask for three alternatives
- A function is buggy? Select the function, not the whole file
The broader the scope of your edit, the broader the scope of unintended changes. This is true in AI-assisted coding and AI-assisted writing equally.
8. Use real words, not "make it nice"
Vague adjectives are prompt poison. "Make it nice" gets you nice-to-the-AI, which is rarely what you meant.
Swap vague for technical every time:
- ❌ "Make it simple" → ✅ "Use one sentence per idea, no sub-clauses, 8th grade reading level"
- ❌ "Make it pop" → ✅ "Increase contrast between headline and body, bold one keyword per sentence"
- ❌ "Make it modern" → ✅ "Flat design, generous whitespace, sans-serif, neutral palette with one accent colour"
- ❌ "Make it punchy" → ✅ "Max 12 words per sentence, verbs at the front, zero filler adverbs"
Technical words are not gatekept. You can learn the vocabulary for any domain in an afternoon — and the return on that investment is massive every single time you prompt. Worth it.
9. Know what AI still can't do
Even with a perfect prompt, some things are still outside what current LLMs handle well. Save yourself the pain:
- Real-time data — unless the tool has web or MCP access, it doesn't know today's prices, news, or stock
- Long-term consistency — across very long docs or projects, models drift. Break things into smaller chunks.
- High-stakes finance, legal, medical — AI can draft. A human professional must sign off.
- Fresh company-specific knowledge — it doesn't know your internal tools, private docs, or last quarter's numbers unless you feed them in or plug in MCP servers to connect them
Knowing where the edges are stops you from wasting prompts trying to push through a wall. If you want the full current vocabulary of what's possible and what's not, 13 new AI terms every founder should actually understand in 2026 is the cheat sheet.
10. The prompt template I actually use
Save this. Paste it. Fill in the blanks. Works for Claude, ChatGPT, Gemini, whatever.
Role: You are a [specific expert — e.g., senior landing page copywriter].
Task: [One-sentence goal. What are you producing?]
Context:
- Who this is for: [audience, including any detail about their pain or goal]
- Why it exists: [the business or personal reason]
- What's already decided: [constraints, brand voice, inputs they must include]
Format:
- Length: [word count or line count]
- Structure: [bullets / numbered steps / table / prose / specific section headings]
- Tone: [three specific adjectives — e.g., direct, warm, no hype]
Must avoid:
- [banned words or phrases]
- [tonal traps — e.g., no salesy openers, no "I hope"]
- [structural things — e.g., no intro paragraph, no sign-off]
Examples of what good looks like:
[Paste 1-3 concrete examples, even short fragments]
Output:
Start directly with the first line. No preamble, no "Here is your...".
Eight out of ten times, this gets me usable output on the first shot. The remaining two times I paste the issue back in and iterate. Clean loop.
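If you reuse the template often, a small formatter saves retyping it. The field names below mirror the template above but are otherwise illustrative — this is one possible sketch, not a fixed schema.

```python
def build_prompt(role, task, context, fmt, avoid, examples):
    # Assemble the Role / Task / Context / Format / Must avoid / Output
    # template from the blanks. Each list becomes a bullet block.
    context_lines = "\n".join(f"- {c}" for c in context)
    avoid_lines = "\n".join(f"- {a}" for a in avoid)
    return (
        f"Role: You are a {role}.\n"
        f"Task: {task}\n"
        f"Context:\n{context_lines}\n"
        f"Format:\n{fmt}\n"
        f"Must avoid:\n{avoid_lines}\n"
        f"Examples of what good looks like:\n{examples}\n"
        "Output: Start directly with the first line. No preamble."
    )

prompt = build_prompt(
    role="senior landing page copywriter",
    task="Write a landing page for a mindset coach.",
    context=[
        "Audience: burnt-out agency owners aged 35-45",
        "Voice: direct, warm, no hype",
    ],
    fmt="- Length: 450-600 words\n- Structure: hero, 3 pain bullets, CTA",
    avoid=["transform", "unlock", "salesy openers"],
    examples="(paste 1-3 fragments here)",
)
```

Drop the resulting string into Claude, ChatGPT, or Gemini as-is — the structure is what matters, not the tooling.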
What to Do Now
- → Take your last three ChatGPT or Claude sessions and look at your prompts. Count how many followed the template above. Rewrite one and see the difference.
- → Build yourself a prompt library. Notion, a text file, whatever — save the ones that worked. You'll stop reinventing the wheel.
- → If you're using AI for business work — client intake, proposals, content, code — subscribe to the DSRPT think-tank. New prompting patterns, agent builds, and real examples from client work go up every week.
Prompting isn't magic. It's writing with constraints. Write better constraints, get better output. Every single time.
Frequently Asked Questions
What is the best way to prompt Claude or ChatGPT?
Write prompts with a clear goal, specific constraints, and the format you want the output in. Break complex requests into smaller steps instead of one giant instruction. Tell the AI what to avoid, not just what to do. And when you have a reference — a screenshot, an example, a doc — paste it in. A good prompt has four parts: what you want, why, what to avoid, and how it should be delivered.
Why does ChatGPT give generic answers?
ChatGPT gives generic answers because the prompt is generic. "Write a marketing email" produces a marketing email anyone could have written. Swap it for: "Write a 120-word cold email for a SaaS founder targeting hospitality operators in Australia, casual tone, reference a specific pain they feel Monday morning, no exclamation marks, no 'I hope this finds you well.'" Different prompt, different output.
Should I write one long prompt or many short ones?
Many short ones, chained. A long prompt forces the AI to juggle ten things at once, and when it fails you won't know which instruction it tripped on. Break the work into steps — get the structure right first, then the tone, then the details. Short prompts are faster to debug, cheaper on tokens, and produce cleaner output. The only exception is a project brief where you genuinely need the AI to hold all the context at once.
Do I need to be technical to write good prompts?
No, but using technical words helps when they apply. If you know what "bullet list," "table," "JSON," or "300-word summary" means, use those exact words. Vague descriptors like "nice" or "simple" make the AI guess. Specific words — "minimalist," "conversational," "numbered steps," "single paragraph" — remove the guesswork and dramatically improve output consistency.
How do I fix errors when Claude or ChatGPT gives wrong output?
Paste the wrong output or error message directly back into the chat and ask the AI to fix the specific thing. Don't restart from scratch — that wastes tokens and loses context. Say something like: "The third bullet contradicts the second. Fix the third bullet to align with the principle in bullet two." Targeted corrections beat full regenerations every time.