How AI Generates a Strong First Line Without Inventing Facts
AI writes a strong first line without inventing facts when three conditions hold: the prompt is grounded in a single verified signal, the model is instructed to output a no-claim token when nothing is found, and a human or automated check validates the line against its source before sending.
Why AI Invents Facts
LLMs are optimized for fluency, not truth. When a prompt asks for personalization without supplying the underlying data, the model fills the gap with plausible-sounding details, and it tends to use 34% more confident language when it is wrong.
Grounded First Line
A grounded first line is anchored to one verified signal, specific enough to be falsifiable, and constrained to 12 to 20 words. Signal-led first lines reach an 18% reply rate versus 3.4% for generic openers.
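The contrast can be sketched in code. The signal, the example lines, and the helper name below are hypothetical, chosen only to illustrate the three constraints:

```python
# Hypothetical signal and first lines; illustrative data, not real prospects.
signal = "Acme raised a $12M Series A in March"

# Grounded: cites the signal, falsifiable, within the word budget.
grounded = ("Congrats on Acme's $12M Series A in March, a strong "
            "signal for the hiring push ahead.")
# Generic: could be sent to anyone; claims nothing checkable.
generic = "I came across your company and was really impressed by what you're building."

def within_length(line: str, lo: int = 12, hi: int = 20) -> bool:
    """Enforce the 12-to-20-word constraint."""
    return lo <= len(line.split()) <= hi
```

Length alone does not make a line grounded; the falsifiability comes from referencing the specific signal, which the checks in the validation step below can test mechanically.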
Prompt Structure
Pass exactly one verified signal per prompt, enforce a strict output format, and add a no-claim escape hatch: the model outputs NO_SIGNAL when the signal is missing or unusable. Pull facts via retrieval rather than from training data (RAG cuts hallucination by 71%), and choose low-hallucination models.
Validate Before Send
Run a signal-match check (the line must reference the same signal it was given), a specificity check (reject vague phrases such as "impressive work"), and pass the source URL through so a reviewer can verify the line against its source for Tier 1 sends.
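The automated portion of those checks can be sketched as follows. The overlap threshold, the vague-phrase list, and the function names are assumptions to be tuned, not a prescribed implementation:

```python
import re

# Illustrative blocklist; extend with phrases seen in rejected lines.
VAGUE_PHRASES = {"impressive work", "great company", "came across your profile"}

def _keywords(text: str) -> set[str]:
    """Lowercase tokens longer than three characters, keeping $ and digits."""
    return {w for w in re.findall(r"[a-z0-9$]+", text.lower()) if len(w) > 3}

def signal_match(line: str, signal_fact: str) -> bool:
    """Require meaningful keyword overlap between the line and its signal."""
    return len(_keywords(line) & _keywords(signal_fact)) >= 2

def is_specific(line: str) -> bool:
    """Reject lines containing known vague filler phrases."""
    low = line.lower()
    return not any(phrase in low for phrase in VAGUE_PHRASES)

def validate(line: str, signal_fact: str) -> bool:
    """Gate a generated first line before it reaches the send queue."""
    if line.strip() == "NO_SIGNAL":
        return False  # escape hatch fired: skip or fall back, never send
    return signal_match(line, signal_fact) and is_specific(line)
```

A keyword-overlap check is deliberately crude: it catches the common failure where the model drifts to a different claim than the one it was given, while the source-URL pass-through covers what automation cannot.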