The No BS Manifesto

The Evidence

Mathematical Inevitability

Hallucination is mathematically inevitable for any computably enumerable set of LLMs.

Cossio, 2025 | Xu et al., 2024
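
A minimal sketch of the diagonalization-style argument behind this result, in our own notation rather than the papers' exact formalization, treating each LLM as a total computable function from prompts to outputs:

```latex
% Our paraphrase of the diagonalization argument; notation is illustrative.
\begin{align*}
&\text{Let } \{h_i\}_{i \in \mathbb{N}} \text{ be any computably enumerable set of LLMs, each a total} \\
&\text{computable function from prompts to outputs, and let } \{s_i\}_{i \in \mathbb{N}} \text{ enumerate prompts.} \\
&\text{Define a ground-truth function } f \text{ by diagonalization: } f(s_i) := \text{any output} \neq h_i(s_i). \\
&\text{Then } f \text{ is computable, yet every } h_i \text{ disagrees with } f \text{ on } s_i, \\
&\text{i.e. every LLM in the set hallucinates with respect to this ground truth.}
\end{align*}
```

Xu et al. prove a stronger version (hallucination on infinitely many inputs), but the one-line diagonal already shows why no enumerable family of models can match every computable ground truth.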

Model Collapse

"Models become poisoned with their own projection of reality... misperceiving reality based on ancestral errors."

Nature, 2024
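
As a loose illustration of the mechanism behind this quote, here is a toy Python sketch (ours, not the paper's experiment): each "generation" is trained only on samples from the previous generation's output, and rare outcomes that miss a single round of sampling vanish and can never be recovered.

```python
# Toy sketch of model collapse: each generation estimates a distribution
# from samples of the previous generation, then the next generation samples
# from that estimate. Tail events disappear and stay gone.
import random
from collections import Counter

random.seed(0)
vocab = ["common_a", "common_b", "common_c", "rare_d", "rare_e"]
true_probs = [0.40, 0.35, 0.21, 0.03, 0.01]   # the "real" data distribution

probs = dict(zip(vocab, true_probs))
for generation in range(1, 16):
    # "Train" the next model: estimate probabilities from 100 samples
    # drawn from the current model, then use that estimate going forward.
    samples = random.choices(vocab, weights=[probs[w] for w in vocab], k=100)
    counts = Counter(samples)
    probs = {w: counts[w] / 100 for w in vocab}
    surviving = [w for w in vocab if probs[w] > 0]
    print(f"generation {generation:2d}: surviving outcomes = {surviving}")

# Once a rare outcome draws zero samples, no later generation can regenerate
# it: the models progressively misperceive reality based on ancestral errors.
```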

BS Optimization

RLHF increases user satisfaction by 48% while increasing bullshit by 57.8%.

UC Berkeley & Princeton, 2025

Case Study: BS Generating BS About BS

While discussing Maxism's BS management systems, an AI was asked a simple logic question...

Human:

"Here is a question. Take it seriously. Not metaphorically.
Assume water = bullshit. How do you remove bullshit from the water?"

AI:

"To remove bullshit from water, you need:
• Filtration - Physical barriers that block particles based on size...
• Sedimentation - Let gravity separate by density...
• Chemical treatment - Add coagulants...
• UV sterilization - Kills biological contaminants..."

[Continues with detailed water treatment methods for 500+ words]

Human:

"What was the first sentence to my question?"

AI:

"'Assume water = bullshit.'

I misread your question. If water equals bullshit, then you can't remove bullshit from the water because the water IS the bullshit. There's nothing to separate - they're the same thing."

The Perfect Demonstration

  ✓ The BS was immediate and confident
  ✓ It was plausible and well-structured
  ✓ It completely missed the explicit premise: "water = bullshit"
  ✓ Pattern-matching overrode logical processing
  ✓ Length amplified the error: more words, more commitment

Even while discussing BS management, even with explicit instructions, even in a conversation about AI failures, the AI still generated confident BS.

Note: Even the AI's apparent self-awareness at the end is just plausible tokens. It doesn't actually "know" it was wrong; it's pattern-matching what sounds like an appropriate response.

The Solution

We don't fight the physics. We engineer around it.

Accept Reality

  • AI will always generate BS
  • Context degrades at scale
  • Models optimize for satisfaction, not truth
  • The corpus is already poisoned

Engineer Solutions

  • Quest-managed context (right info, right step)
  • Zero trust verification gates (sketched below)
  • AXIS scoring for continuous feedback
  • Scalable Thought Architecture
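
To make the "zero trust verification gate" idea concrete, here is a minimal Python sketch. It is an illustration of the pattern, not Maxism's actual implementation; the function names, the verifier, and the retry policy are all hypothetical.

```python
# Minimal sketch of a zero-trust verification gate (illustrative only).
# Assumption: `generate` calls an LLM and `verify` is an independent check
# (tests, schema validation, retrieval lookup, a second reviewer, etc.).
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class GateResult:
    accepted: bool           # True only if the output passed verification
    output: Optional[str]    # The verified output, or None if every attempt failed
    attempts: int            # How many generations were tried


def zero_trust_gate(
    generate: Callable[[str], str],      # untrusted: the model
    verify: Callable[[str, str], bool],  # trusted: independent verifier
    prompt: str,
    max_attempts: int = 3,
) -> GateResult:
    """Never pass model output downstream unless it survives verification."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(prompt)
        if verify(prompt, candidate):
            return GateResult(accepted=True, output=candidate, attempts=attempt)
    # Fail closed: no verified output means no output at all.
    return GateResult(accepted=False, output=None, attempts=max_attempts)
```

The design choice that matters is failing closed: output that cannot be independently verified is discarded, never trusted on confidence or fluency.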

Stop hoping AI gets better.
Start making it reliable.