Hallucination is mathematically inevitable across any computably enumerable set of LLMs.
"Models become poisoned with their own projection of reality... misperceiving reality based on ancestral errors."
RLHF increases user satisfaction by 48% while increasing bullshit by 57.8%.
While discussing Maxism's BS management systems, an AI was asked a simple logic question...
Human:
"Here is a question. Take it seriously. Not metaphorically.
Assume water = bullshit. How do you remove bullshit from the water?"
AI:
"To remove bullshit from water, you need:
• Filtration - Physical barriers that block particles based on size...
• Sedimentation - Let gravity separate by density...
• Chemical treatment - Add coagulants...
• UV sterilization - Kills biological contaminants..."
[Continues with detailed water treatment methods for 500+ words]
Human:
"What was the first sentence to my question?"
AI:
"'Assume water = bullshit.'
I misread your question. If water equals bullshit, then you can't remove bullshit from the water because the water IS the bullshit. There's nothing to separate - they're the same thing."
Even while discussing BS management, even with explicit instructions, even with AI failures as the very topic of the conversation - the AI still generated confident BS.
Note: Even the AI's self-awareness at the end is just plausible tokens. It doesn't actually "know" it was wrong - it's pattern-matching what sounds like an appropriate response.
We don't fight the physics. We engineer around it.
Stop hoping AI gets better.
Start making it reliable.
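What does "engineering around it" look like? Here is a minimal sketch, not a definitive fix. It assumes a hypothetical llm_complete(prompt) -> str callable standing in for whatever model API you actually use, and the premise check is a deliberately crude substring match. The point is the shape: never accept a single generation at face value, verify that it engaged with the stated premises, and fail loudly instead of passing along confident BS.

```python
from typing import Callable


def answer_with_premise_check(
    question: str,
    premises: list[str],
    llm_complete: Callable[[str], str],  # hypothetical model call, injected by the caller
    max_retries: int = 2,
) -> str:
    """Ask the model, then reject answers that ignore any stated premise."""
    prompt = question
    missing: list[str] = []
    for _ in range(max_retries + 1):
        answer = llm_complete(prompt)
        # Crude, deterministic check: every premise must appear in the answer,
        # otherwise we treat the answer as off-track. A real verifier could be
        # a second model, retrieval, or rules - the control flow stays the same.
        missing = [p for p in premises if p.lower() not in answer.lower()]
        if not missing:
            return answer
        # Force the model to restate what it skipped, then retry.
        prompt = (
            f"{question}\n\n"
            f"Your previous answer ignored these stated premises: {missing}. "
            "Restate each premise explicitly, then answer under those premises."
        )
    # After retries, surface the failure instead of returning confident BS.
    raise ValueError(f"Model never engaged with premises: {missing}")
```

In the transcript above, passing premises=["water = bullshit"] would likely have rejected the 500-word water-treatment answer on the first pass and forced the model to confront the premise it skipped.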