Forget Everything You Heard About "Prompt Engineering"
Schulhoff et al. cataloged 58 distinct prompting techniques in "The Prompt Report." Fifty-eight. The DAIR.AI Prompt Engineering Guide has trained over 3 million people on these methods.
You do not need 58 techniques. You are a real estate agent, not a machine learning researcher.
Britney Muller put it best in "6 Reasons Most Prompt Engineering Tips Are BS": most prompting advice is academic theory dressed up as practical tips. Ethan Mollick at Wharton says the same thing — stop overthinking the prompt, start thinking about the problem.
Here is the truth. Four techniques handle 95% of what you need as a real estate professional. Everything else is edge-case optimization for people building AI products, not using them.
Let me show you which four.
The 4 Techniques That Matter
Each technique below builds on the last. Start with zero-shot. When you need more consistency, add few-shot. When you need analysis, add chain-of-thought. When you need everything at once, use SPEAR. That is the progression.
Zero-Shot: Just Ask
Zero-shot prompting means giving the AI an instruction with no examples. You tell it what to do, and it figures out how.
"Write a listing description for a 4-bed colonial in Brentwood with a pool."
That is zero-shot. No examples. No context. Just a direct instruction.
It works for simple, low-stakes tasks. Quick emails. Social captions. First drafts you plan to edit. The output will be decent but generic, because the AI is guessing your preferences.
Andrew Ng's first golden rule applies here: write clear, specific instructions. "Write a listing description" is vague. "Write a 120-word listing description emphasizing the renovated kitchen and pool, targeting families relocating from LA" is specific. Same technique. Wildly different output.
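If you like seeing ideas as code, here is that habit as a tiny Python sketch. It calls no AI service; it just assembles the instruction, and every property detail in it is a made-up example.

```python
# A minimal sketch of specificity as a habit, in plain Python.
# Nothing here calls an AI; it only builds the instruction.
# All property details below are hypothetical examples.

def zero_shot_prompt(task: str, words: int, emphasize: str, audience: str) -> str:
    """Force yourself to fill in the blanks that specificity requires."""
    return (
        f"{task} in {words} words. "
        f"Emphasize: {emphasize}. "
        f"Target audience: {audience}."
    )

# Vague version: "Write a listing description."
# Specific version:
print(zero_shot_prompt(
    task="Write a listing description for a 4-bed colonial in Brentwood",
    words=120,
    emphasize="the renovated kitchen and the pool",
    audience="families relocating from LA",
))
```

Paste the printed string into ChatGPT or Claude as-is. The function is just a checklist in disguise: if you cannot fill a parameter, your prompt is not specific enough yet.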
Zero-shot is your baseline. Use it for 60% of your daily tasks.
Few-Shot: Show, Don't Tell
Few-shot means including 2-3 examples before your instruction. You show the AI what good looks like, then ask for more.
Google's prompt engineering whitepaper, a 60-plus-page document written by Lee Boonstra, calls this "in-context learning." The AI pattern-matches against your examples for style, structure, and tone. No training required. No fine-tuning. Just examples.
This is where real estate agents see the biggest jump in quality. Paste your three best listing descriptions. Tell the AI to write the next one in the same style. The output sounds like you, not like a robot.
Few-shot is essential for:
- Listing descriptions (consistent voice across properties)
- Email sequences (matching tone across a drip campaign)
- Market reports (same format every month)
- Social posts (consistent brand voice)
Three examples is the sweet spot: two is rarely enough for the pattern to register, and more than five wastes tokens without improving the output. The 5 Essentials framework starts with knowing your audience — your examples teach the AI your audience better than any instruction could.
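Here is the same move as a Python sketch, for anyone who wants to template it. The three example strings are placeholders for your own best listings, and the property description is invented.

```python
# A minimal sketch of a few-shot prompt builder. The example texts are
# placeholders; paste your own three best listing descriptions in their place.

EXAMPLES = [
    "Example listing 1: [paste your best description here]",
    "Example listing 2: [paste another here]",
    "Example listing 3: [paste a third here]",
]

def few_shot_prompt(examples: list[str], new_property: str) -> str:
    """Show the AI 2-3 samples of your voice, then ask for one more."""
    shots = "\n\n".join(
        f"Example {i + 1}:\n{text}" for i, text in enumerate(examples)
    )
    return (
        "Here are listing descriptions I wrote. Match their style, "
        "structure, and tone exactly.\n\n"
        f"{shots}\n\n"
        f"Now write a new description for: {new_property}"
    )

print(few_shot_prompt(EXAMPLES, "3-bed ranch in East Nashville, corner lot"))
```

Save this once, swap the examples quarterly, and every listing description starts from your voice instead of the AI's default.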
Chain-of-Thought: Think Step by Step
Elvis Saravia at DAIR.AI popularized chain-of-thought prompting for practical use. The technique: ask the AI to show its reasoning step by step before giving you the answer.
Andrew Ng's second golden rule: give the model time to think. Adding "Let's think step by step" to a pricing analysis prompt reduces errors because the AI is far less likely to skip logical steps.
Instead of: "What should I price this home at?"
Try: "Walk me through the pricing analysis step by step. Consider these comps, current inventory, days-on-market trends, and seasonal factors. Then give me a recommended price range with your reasoning."
Chain-of-thought maps directly to the OODA Loop. Observe the data. Orient around the relevant factors. Decide based on the analysis. Act on the recommendation. When you force the AI through this loop, you can verify each step instead of trusting a black-box answer.
Use chain-of-thought for CMAs, investment analysis, market forecasts, and any prompt where the answer depends on multiple variables. If the task involves math or multi-step logic, chain-of-thought is non-negotiable.
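Here is a sketch of what that prompt looks like when you build it from your data. The comps below are invented numbers; the point is the forced step order, which mirrors the OODA Loop above.

```python
# A minimal sketch of a chain-of-thought pricing prompt built from data.
# The comps are hypothetical; swap in your real numbers before using it.

comps = [
    {"address": "101 Oak St", "sold": 485_000, "dom": 12},
    {"address": "115 Oak St", "sold": 472_000, "dom": 31},
    {"address": "98 Elm Ave", "sold": 510_000, "dom": 9},
]

comp_lines = "\n".join(
    f"- {c['address']}: sold ${c['sold']:,}, {c['dom']} days on market"
    for c in comps
)

prompt = (
    "Walk me through a pricing analysis step by step. "
    "First analyze these comps, then current inventory, then days-on-market "
    "trends, then seasonal factors. Show your reasoning at each step. "
    "Only after all four steps, give a recommended price range.\n\n"
    f"Comps:\n{comp_lines}"
)
print(prompt)
```

The "only after all four steps" line is the lever: it stops the AI from jumping to a number before it has walked the reasoning you can check.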
SPEAR: The Power Framework
Britney Muller's SPEAR framework combines everything into one structure:
- Specificity — details, numbers, constraints
- Persona — who the AI should be
- Examples — few-shot references
- Ask — the clear instruction
- Refinement — iterate on the output
SPEAR is not a separate technique. It is a checklist that combines the other three. When you write a SPEAR prompt, you are using zero-shot specificity, few-shot examples, and (when needed) chain-of-thought reasoning in one structured instruction.
A SPEAR prompt for a listing description:
"[Specificity] 130 words, luxury tone, short sentences. [Persona] You are a luxury real estate copywriter in Nashville. [Examples] Here are two of my best descriptions: [paste examples]. [Ask] Write a listing description for 123 Main St — 5 bed, 4 bath, 4,200 sq ft, pool, guest house. [Refinement] After the first draft, tighten any sentence over 15 words."
This is the format you graduate to once zero-shot and few-shot feel natural. For most agents, it takes a week of daily practice to get there.
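If you want SPEAR as a reusable template rather than a prompt you retype, here is one way to assemble it. The field contents mirror the example above and are just as hypothetical.

```python
# A minimal sketch that assembles a SPEAR prompt from its five parts.
# All field contents below are hypothetical; fill in your own.

def spear_prompt(specificity, persona, examples, ask, refinement):
    """Combine the five SPEAR components into one structured instruction."""
    return "\n".join([
        f"[Specificity] {specificity}",
        f"[Persona] {persona}",
        f"[Examples] {examples}",
        f"[Ask] {ask}",
        f"[Refinement] {refinement}",
    ])

print(spear_prompt(
    specificity="130 words, luxury tone, short sentences.",
    persona="You are a luxury real estate copywriter in Nashville.",
    examples="Here are two of my best descriptions: [paste examples]",
    ask="Write a listing description for 123 Main St: 5 bed, 4 bath, "
        "4,200 sq ft, pool, guest house.",
    refinement="After the first draft, tighten any sentence over 15 words.",
))
```

Because each part is a named parameter, a missing one is obvious, which is exactly what a checklist is for.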
The Context Card: Your Secret Weapon
IndyDevDan's Context-Prompt-Model framework argues that your context is the "first-class citizen," not the prompt itself. The prompt is the instruction. The context is the intelligence behind it.
A Context Card is a pre-written block that you paste at the start of any AI conversation. It tells the AI who you are, how you write, who your clients are, and what your market looks like. Every prompt after it is automatically personalized.
Before a Context Card, your prompt produces generic content. After a Context Card, the same prompt produces content that sounds like you wrote it. The difference is not subtle.
Here is what happens without a Context Card:
Prompt: "Write a listing description for a 3-bed ranch in East Nashville."
Output: "Welcome to this charming ranch-style home nestled in the heart of East Nashville..."
Nestled. Charming. Heart of. Garbage.
Here is the same prompt after loading a Context Card with your voice, market data, and writing samples:
Output: "Corner lot on a dead-end street. Three beds, one level, zero maintenance. The kitchen was gutted last year — new cabinets, quartz counters, gas range. Five minutes to Five Points. Two blocks from the greenway."
Same prompt. Completely different output. The Context Card did the work.
Build your Context Card using the HOME Framework: Hero (your role and expertise), Outcome (what you need from the AI), Materials (your data, writing samples, market stats), Execution (formatting rules and banned words). Update it quarterly. That 10-minute investment pays off on every single prompt.
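Here is a Context Card as a reusable snippet, structured around HOME. Every value below is a placeholder for your own details, and the banned-word list is just an example.

```python
# A minimal sketch of a Context Card built on the HOME framework.
# Every value is a hypothetical placeholder; replace it with your own.

CONTEXT_CARD = """\
[Hero] I am a residential agent in East Nashville, 8 years in this market,
working mostly with first-time buyers and relocating families.
[Outcome] I need marketing copy and analysis in my voice, not generic AI prose.
[Materials] Writing samples: [paste 2-3 of your best pieces].
Market stats: [paste current inventory, median price, days on market].
[Execution] Short sentences. Concrete details. Banned words: nestled,
charming, oasis, boasts, heart of."""

def with_context(prompt: str) -> str:
    """Prepend the Context Card so every prompt is automatically personalized."""
    return f"{CONTEXT_CARD}\n\n{prompt}"

print(with_context(
    "Write a listing description for a 3-bed ranch in East Nashville."
))
```

Paste the combined block at the start of a session once, and every prompt after it inherits the card.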
Prompting Mistakes That Waste Your Time
Mistake 1: Being vague and then blaming the AI. "Write me something about the market" is not a prompt. It is a wish. Specificity is the single biggest lever. Include numbers, constraints, audience, and format in every prompt.
Mistake 2: Writing a 500-word prompt for a 50-word task. Zero-shot exists for a reason. If you need a quick text message to a lead, do not write a paragraph of instructions. Match the prompt complexity to the task complexity.
Mistake 3: Never iterating. The first output is a first draft. SPEAR's "Refinement" step exists because you should expect to refine. Say "make it shorter," "remove the clichés," or "match this tone instead." Two rounds of refinement beat one attempt at a perfect prompt.
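If it helps to see the shape of a refinement session, here it is as a message list, the structure most chat tools use under the hood. The drafts and the opening prompt are placeholders.

```python
# A minimal sketch of the refinement pattern as a running conversation.
# Each follow-up is a short correction, not a rewritten mega-prompt.
# The bracketed drafts are placeholders for whatever the AI returns.

conversation = [
    {"role": "user", "content": "Write a 100-word listing description for [property]."},
    {"role": "assistant", "content": "[first draft comes back here]"},
    {"role": "user", "content": "Make it shorter and remove the clichés."},
    {"role": "assistant", "content": "[second draft comes back here]"},
    {"role": "user", "content": "Good. Now match the tone of my examples."},
]

for turn in conversation:
    print(f"{turn['role']}: {turn['content']}")
```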
Mistake 4: Ignoring context. Starting every conversation from scratch means the AI relearns you every time. A Context Card solves this. Paste it once, prompt all session. Colibri's research showing agents cut 15-20 hours of weekly manual work down to 3-5? Those agents are using context. They are not typing better prompts. They are giving the AI better context.
The ROI of Better Prompting
Let me do the math.
Colibri Real Estate found that AI prompts cut manual work from 15-20 hours per week to 3-5 hours. Call it 12 hours saved per week, rounding down from the midpoint to stay conservative.
Conduit AI reports that AI-powered lead generation boosts lead volume by 300% and conversion rates by 30-40%. Those gains come from faster response times and more personalized communication — both driven by better prompts.
Here is the simple version:
- 12 hours saved per week
- Your time is worth at least $50/hour (most agents undervalue their time)
- 12 x $50 = $600/week
- $600 x 50 weeks = $30,000/year in recovered time
That is not revenue. That is time you can reinvest in prospecting, showing homes, or building relationships. The activities that actually close deals.
And the cost? ChatGPT free tier is $0. ChatGPT Plus is $20/month ($240/year). Claude Pro is $20/month.
$240/year investment. $30,000/year return. That is a 125x ROI.
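The same math as a script you can re-run with your own numbers. The inputs are the assumptions stated above, nothing more.

```python
# The back-of-the-envelope ROI math from above, spelled out.
# All inputs are the article's assumptions; change them to fit your business.

hours_saved_per_week = 12   # conservative end of the Colibri range
hourly_value = 50           # dollars; most agents undervalue their time
working_weeks = 50

recovered_time = hours_saved_per_week * hourly_value * working_weeks
subscription = 20 * 12      # ChatGPT Plus or Claude Pro at $20/month

print(f"Recovered time value: ${recovered_time:,}/year")              # $30,000
print(f"Subscription cost:    ${subscription}/year")                  # $240
print(f"Return multiple:      {recovered_time / subscription:.0f}x")  # 125x
```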
The gap between agents who use AI well and agents who dabble is widening. Placester reports 58% of agents use ChatGPT. But using it and using it well are different things. The four techniques in this guide — zero-shot, few-shot, chain-of-thought, and SPEAR — are the difference.
Pick one. Practice it today. Stack the next one tomorrow. By Friday, you are prompting better than 90% of agents who have been using ChatGPT for a year.