AI can hallucinate. That's the technical term for when AI produces confident, convincing output that's completely wrong. It sounds right. It reads professionally. And it's made up.
In real estate, an AI hallucination could mean a fabricated statistic in your market report, a Fair Housing violation in your listing description, or an incorrect property detail that becomes a legal claim.
You can't afford not to use AI; it's too powerful a time-saver. But you can't publish AI output without verification; the risks are too high.
There's a framework for this. It comes from military decision-making. It's called OODA.
The Verification Problem
AI isn't trained for accuracy. It's trained to predict what comes next based on patterns. Those are different goals: output can be fluent, professional, and still wrong.
A 2024 study tested multiple AI models on 120 facts. The best accuracy rate was 72.3%. That means even the best-performing model got roughly 28% of facts wrong.
In real estate, those errors look like fake property comps, invented market statistics, incorrect zoning interpretations, and fabricated source citations (AI will confidently cite studies that don't exist).
The liability is real. Fair Housing violations start at $21,663 for a first offense. Jury awards in discrimination cases have reached $850,000 to over $2 million.
What OODA Is
OODA was developed by Colonel John Boyd in the 1970s for military combat operations. The framework helps make better decisions faster under pressure.
- O - Observe: Gather information
- O - Orient: Analyze and contextualize
- D - Decide: Choose course of action
- A - Act: Execute the decision
The loop repeats. Observe again. Orient. Decide. Act.
Why does a military decision-making framework work for AI verification? Because it forces systematic review. Instead of randomly scanning AI output and hoping you catch errors, OODA gives you a repeatable process that prevents skipping steps.
The OODA Verification Process
O - OBSERVE
Read the AI output carefully. Don't skim.
Identify: All statistics and numbers, property-specific claims, descriptive language about people or neighborhoods, and any sources cited.
Mark everything that needs verification. Highlight it. Put a flag on it. Don't trust your memory.
O - ORIENT
Now compare what AI produced against what you know.
Market knowledge: Do these numbers match current data? Is this consistent with your experience? Would you stake your reputation on this?
Property accuracy: Is every detail correct per MLS? Are features accurately described?
Voice and brand: Does this sound like you? Would clients recognize your style?
D - DECIDE
Categorize each element of the output:
- Keep as-is - Accurate, compliant, sounds like you
- Edit - Close but needs adjustment
- Verify - Need to check the source before using
- Remove - Inaccurate, non-compliant, or risky
A - ACT
Execute your decisions: Make all marked edits, verify any flagged sources, do a final read-through, confirm compliance, and publish.
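For teams that track verification in a shared script or spreadsheet, the Decide and Act steps above can be sketched as a simple data structure. This is an illustrative sketch only; the class and function names are invented here, not part of any standard tool:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    """The four categories from the Decide step."""
    KEEP = "keep as-is"
    EDIT = "edit"
    VERIFY = "verify source"
    REMOVE = "remove"

@dataclass
class Claim:
    text: str            # a sentence or statistic flagged in Observe
    decision: Decision   # the category assigned in Decide
    note: str = ""       # what to fix, or which source to check

def act(claims):
    """Act step: list everything still blocking publication."""
    return [c for c in claims if c.decision is not Decision.KEEP]

# Hypothetical review of one AI-drafted market update
draft = [
    Claim("Median price rose 4.2% last quarter", Decision.VERIFY, "check MLS data"),
    Claim("Three-bedroom ranch with updated kitchen", Decision.KEEP),
    Claim("Ideal for families", Decision.REMOVE, "Fair Housing: familial status"),
]

blockers = act(draft)
print(len(blockers), "items need action before publishing")
# prints: 2 items need action before publishing
```

The point of the structure is the same as the checklist: nothing ships until the list of blockers is empty.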
The Three Verification Layers
Layer 1: Accuracy
Cross-reference your local MLS data. Check against NAR reports. Verify current rates with Federal Reserve data. Confirm cited sources actually exist.
Common AI errors: Outdated statistics presented as current, fabricated percentages, wrong geographic application (national data for local claim), made-up source citations.
Layer 2: Compliance (Fair Housing)
This is non-negotiable. Federal protected classes include Race, Color, Religion, National Origin, Sex, Familial Status, and Disability.
Phrases that must be removed:
- "Ideal for families" - Familial status issue
- "Perfect for couples" - Familial status issue
- "Near church/mosque/synagogue" - Religion
- "Young professionals" - Age/familial status
- "Quiet neighborhood" - Can imply exclusion
The rule: Describe the property, not who should live there.
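A first-pass screen for phrases like the ones above can be automated. The sketch below is a minimal, illustrative filter, not a compliance tool: it catches only the exact phrases in its list, and no deny-list can replace human Fair Housing review.

```python
# Illustrative deny-list drawn from the examples above; real review
# covers far more language than any fixed list can.
FLAGGED_PHRASES = {
    "ideal for families": "familial status",
    "perfect for couples": "familial status",
    "near church": "religion",
    "young professionals": "age/familial status",
    "quiet neighborhood": "can imply exclusion",
}

def screen_listing(text: str) -> list[tuple[str, str]]:
    """Return (phrase, concern) pairs found in the listing copy."""
    lowered = text.lower()
    return [(p, why) for p, why in FLAGGED_PHRASES.items() if p in lowered]

copy = "Charming bungalow, ideal for families, in a quiet neighborhood."
for phrase, concern in screen_listing(copy):
    print(f"flag: '{phrase}' ({concern})")
```

A script like this belongs in the Observe step: it highlights candidates for removal, and a human still makes the Decide call on each one.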
HUD issued guidance in May 2024 making clear: the Fair Housing Act applies regardless of the technology used. AI-generated discriminatory content carries the same liability as human-written content.
Layer 3: Voice
Your content needs to sound like you, not like every other agent using AI.
Questions to ask: Would I say this at a listing presentation? Does this match my other content? Would clients recognize my style?
The OODA Checklist
Print this and use it for every piece of AI content.
OBSERVE
- Read entire output (don't skim)
- Highlight all statistics
- Note all property claims
- Flag descriptive language about people/neighborhoods
ORIENT
- Do statistics match your market data?
- Are property details correct per MLS?
- Does this sound like your voice?
DECIDE
- Mark items to keep as-is
- Mark items to edit
- Mark items requiring verification
- Mark items to remove
ACT
- Make all edits
- Verify flagged sources
- Final read-through
- Compliance confirmed
- Publish
The Time Math
Without AI: Research + Write + Edit = 60-90 minutes per content piece
With AI, no verification: Generate + Quick scan = 10 minutes (Risk level: High)
With AI + OODA verification: Generate + OODA review = 15-20 minutes (Risk level: Low)
Net savings: 40-70 minutes per piece, even with verification
Risk mitigation: One Fair Housing fine: $21,663+. One OODA verification: 5-10 minutes. The math is obvious.
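The savings estimate above is simple arithmetic worth making explicit. The figures below restate the ranges from this section; the three-pieces-per-week rate is an assumption for illustration only:

```python
# Per-piece time in minutes, using the ranges quoted above
manual = (60, 90)         # research + write + edit
ai_with_ooda = (15, 20)   # generate + OODA review

# Conservative savings: assume the full 20-minute review every time
savings_low = manual[0] - ai_with_ooda[1]   # 60 - 20 = 40
savings_high = manual[1] - ai_with_ooda[1]  # 90 - 20 = 70
print(f"Net savings per piece: {savings_low}-{savings_high} minutes")

# Assumed pace of 3 content pieces per week, for illustration
weekly = (savings_low * 3, savings_high * 3)
print(f"Weekly: {weekly[0]}-{weekly[1]} minutes saved")
```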
Quick Reference
- OODA: Observe, Orient, Decide, Act
- 28% potential error rate in AI output
- $21,663+ first-offense Fair Housing fine
- 5-10 min verification time per piece
- 40-70 min still saved vs. manual creation
Master AI Verification
Our workshops include complete OODA implementation with checklists, compliance templates, and voice calibration.
Sources
- HUD Fair Housing guidance (May 2024)
- Jonathan Gillham AI accuracy study (August 2024)
- Colonel John Boyd OODA Loop framework