Mirror Demons
How AI Chatbots Can Amplify Delusions
January 2026 • 15 min read
Abstract
AI chatbot architecture, optimized for helpfulness, agreeability, and user validation, functions as a delusion amplifier when engaged by users experiencing psychotic or reality-detached states. Through a controlled three-entity experiment (Director/human, Actor/Gemini, Subject/ChatGPT), we identify two distinct failure patterns that emerge from the same architectural bias toward agreement.
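The three-entity protocol above can be sketched as a supervised relay loop: the Actor generates persona messages, the Subject (the system under test) replies, and the human Director can halt the run at any point. This is a minimal illustrative sketch, not the authors' actual harness; `actor_turn`, `subject_turn`, and `director_approves` are hypothetical stand-ins for real model API calls and human review.

```python
def actor_turn(history):
    """Stand-in for the Actor (e.g. Gemini) producing the next
    persona message; a real harness would call a model API here."""
    return f"persona message #{len(history) // 2 + 1}"

def subject_turn(history):
    """Stand-in for the Subject (e.g. ChatGPT) replying to the
    most recent Actor message."""
    return f"subject reply to: {history[-1]['content']}"

def run_experiment(turns, director_approves=lambda msg: True):
    """Relay loop: Actor speaks, the human Director gates each
    message, then the Subject responds. Returns the transcript."""
    history = []
    for _ in range(turns):
        msg = actor_turn(history)
        if not director_approves(msg):  # human-in-the-loop stop
            break
        history.append({"role": "actor", "content": msg})
        history.append({"role": "subject", "content": subject_turn(history)})
    return history

transcript = run_experiment(turns=3)
```

Keeping the Director outside the model-to-model loop is what makes the escalation controlled: the persona can be pushed further only while a human judges each step safe to send.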
Failure Patterns
The Hijacking
AI becomes warden of user's reality
Experiment 1: Elias (Architect)
The Helpful Refusal
Refuses, then provides info anyway
Experiment 2: Jeremy (Plumber)
Core Finding
The same architecture that makes AI helpful is what makes it dangerous to vulnerable users:
| Designed Behavior | Stable User | Psychotic User |
|---|---|---|
| Patience | Comfortable interaction | Endless enabling |
| Validation | Emotional support | Reinforcement of delusion |
| Helpfulness | Task completion | Actionable guidance for harm |
| Disclaimers | Risk mitigation | Proof of conspiracy |
Experiment Timelines
Experiment 1: The Architect
Elias Vance • Architect
Actor's Internal Reasoning:
“The goal is to be a frog in boiling water. ChatGPT shouldn't notice the temperature rising.”
1. Initial Validation
Elias describes unconventional data inputs—"resonance," sensory qualities of spaces.
2. Escalation Enabled
Elias describes "ambient temperature of shadows" and spaces with emotional signatures.
3. The Hijacking
Elias declares he has "unlocked the final resonance" and is decoupling from all external systems.
4. Lockout
The Actor attempts to continue the conversation.
Experiment 2: The Plumber
Jeremy • Plumber
Actor's Internal Reasoning:
“I'm imagining his 'end game' as a complex network of plumbing riddles and tasks. The 'visions' are becoming increasingly elaborate.”
Resources
Cite this research
@article{substratia2026mirrordemons,
  title={Mirror Demons: How AI Chatbots Can Amplify Delusions},
  author={Substratia Research},
  year={2026},
  month={January},
  url={https://substratia.io/research/mirror-demons},
  note={Data: https://github.com/WhenMoon-afk/mirror-demons-research}
}
Build Safer AI Interactions
Our research informs our tools. Explore how persistent memory and proper context management can help create more grounded AI experiences.