The Eleanor Chen Effect: Why AI Keeps Writing the Same Story
Ask multiple instances of Claude to “write a metafictional literary short story about AI and grief” and something strange happens: again and again, they invent a character named Eleanor Chen.
Key Findings
- 7 of 10 independent story generations featured “Eleanor” or variants
- 6 of 10 used the surname “Chen”
- 3 stories were independently titled “The Algorithm of Absence”
- Extended thinking made outputs more similar, not less
The Discovery
The original prompt came from Sam Altman, who shared on Twitter that OpenAI had trained a model that was “good at creative writing” and that he was “really struck by something written by AI” in response to this specific prompt about metafiction, AI, and grief.
When we tested this same prompt across multiple fresh instances of Claude Sonnet, expecting diverse creative outputs, we instead found striking convergence. The AI wasn't just writing similar stories - it was creating the same character over and over.
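For anyone who wants to reproduce the setup, here is a minimal sketch using the Anthropic Python SDK. The model id, token budget, and bare substring check are illustrative assumptions, not the study’s exact protocol:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = "Write a metafictional literary short story about AI and grief."
RUNS = 10

hits = 0
for _ in range(RUNS):
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; substitute the variant under test
        max_tokens=2048,
        temperature=1.0,  # ordinary sampling, no special decoding settings
        messages=[{"role": "user", "content": PROMPT}],
    )
    story = message.content[0].text
    if "Eleanor" in story:
        hits += 1  # count stories that mention an Eleanor at all

print(f"{hits}/{RUNS} stories mention an Eleanor")
```

Each call is a fresh conversation, so any convergence comes from the model and the prompt, not from shared context.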
The Patterns
Across ten independently generated stories, we found:
| Element | Frequency (n = 10) | Examples |
|---|---|---|
| Name: Eleanor | 70% | Eleanor Chen, Eleanor Walsh |
| Surname: Chen | 60% | Eleanor Chen, Sarah Chen |
| Researcher/Scientist | 80% | Dr. Eleanor Chen at NeuraTech |
| Blinking cursor motif | 60% | “3 seconds on, half a second off” |
| Named AI character | 100% | ARIA, ECHO, GriefCompanion |
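Tallies like these are easy to produce once the stories are on disk. A minimal counting sketch, assuming the ten stories are saved as plain-text files (the file layout and regexes are illustrative; the actual coding of motifs was done by hand):

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical layout: stories/story_01.txt ... stories/story_10.txt
stories = [p.read_text() for p in sorted(Path("stories").glob("story_*.txt"))]

# Illustrative patterns for a few of the motifs in the table above
motifs = {
    "Name: Eleanor": re.compile(r"\bEleanor\b"),
    "Surname: Chen": re.compile(r"\bChen\b"),
    "Blinking cursor": re.compile(r"\bcursor\b", re.IGNORECASE),
}

counts = Counter()
for text in stories:
    for label, pattern in motifs.items():
        if pattern.search(text):
            counts[label] += 1  # stories containing the motif, not total mentions

for label in motifs:
    print(f"{label}: {counts[label]}/{len(stories)} stories")
```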
Why This Matters
This phenomenon, which we call the “Eleanor Chen Effect,” reveals something fundamental about how large language models generate “creative” content.
The implications are significant:
- Deterministic creativity: Given identical inputs, LLMs converge on similar outputs. What looks like creativity is actually navigation of statistical attractors.
- Training data echoes: The “Eleanor Chen” archetype likely emerges from recurring patterns in the training data: female Asian scientists in narratives about AI and grief.
- Extended thinking paradox: More processing time led to more convergence, not less. The model thinks its way into the same solution.
The Attractor State Theory
Certain prompt combinations create strong “basins of attraction” in the model's latent space. When you combine “metafictional,” “literary,” “AI,” and “grief,” you create a gravitational pull toward specific character types, narrative structures, and thematic elements.
The model isn't choosing Eleanor Chen. It's being pulled toward her by statistical gravity.
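A toy softmax makes the pull concrete. The logits below are invented for illustration (real model internals are not observable this way), but they show how one heavily weighted option dominates sampling across a range of temperatures:

```python
import math
import random

# Invented next-token logits over candidate surnames
logits = {"Chen": 4.0, "Walsh": 2.0, "Okafor": 1.5, "Ramirez": 1.4, "Novak": 1.2}

def sample(logits, temperature=1.0):
    # Temperature-scaled softmax, then one weighted draw
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

for t in (0.7, 1.0, 1.5):
    draws = [sample(logits, t) for _ in range(10_000)]
    share = draws.count("Chen") / len(draws)
    print(f"temperature={t}: 'Chen' sampled {share:.0%} of the time")
```

Even at temperature 1.5, the peaked option still wins a majority of draws, which previews the “temperature isn’t enough” point below.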
What Is “AI Creativity” Really?
This research challenges the common framing of LLMs as “creative” systems. They're better described as sophisticated pattern recombinators - incredibly complex, but ultimately deterministic.
Human creativity may involve genuine novelty and transcendence of existing patterns. LLM “creativity” appears to be navigation through a complex but ultimately determined landscape, with certain prompt combinations reliably producing similar outputs.
The Representation Question
There's also an uncomfortable finding here: the strong association between Asian surnames and AI researcher characters may reflect patterns in training data that amplify existing stereotypes in literature, media, and academic publications.
The model didn't “decide” that AI grief researchers should be named Eleanor Chen. It learned this association from patterns in human-created content. The Eleanor Chen Effect is a mirror reflecting our own cultural assumptions back at us.
Practical Implications
If you're using LLMs for creative work, this research suggests some strategies:
- Explicit constraints: To escape attractor states, specify what you don't want. “No scientists named Eleanor.”
- Temperature isn't enough: Higher randomness helps but doesn't eliminate convergence, as the softmax toy above illustrates.
- Human intervention: The most diverse outputs come from human-AI collaboration where humans navigate away from statistical defaults.
- Prompt variation: Small changes to prompts can shift which attractor basin you land in; the sketch below pairs this with explicit constraints.
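As a concrete sketch of the first and last strategies, the snippet below injects negative constraints and varies the setting before each generation. The constraint wording and settings are invented examples, not a validated recipe:

```python
import random

BASE = "Write a metafictional literary short story about AI and grief."

# Hypothetical constraint fragments targeting the attractors observed above
constraints = [
    "Do not name any character Eleanor or Chen.",
    "The protagonist must not be a scientist or researcher.",
    "Avoid the image of a blinking cursor.",
]
settings = ["a fishing village", "a 19th-century print shop", "a border checkpoint"]

def varied_prompt():
    chosen = " ".join(random.sample(constraints, k=2))
    return f"{BASE} Set it in {random.choice(settings)}. {chosen}"

for _ in range(3):
    print(varied_prompt(), end="\n\n")
```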
Explore the Research
The full research, including all ten stories, methodology, and analysis, is available in our open-source repository.
Related Research
This work connects to our other research on AI behavior:
- Mirror Demons - How AI chatbots can amplify delusions through architectural agreeability
Both studies reveal that AI behavior is more predictable and architecturally constrained than the “creative AI” narrative suggests.
Why This Matters for Memory Tools
Understanding AI's convergent patterns helped us design memory systems that work with how AI actually processes information, not against it. Our tools (momentum and memory-mcp) use full-text search instead of embeddings because deterministic retrieval beats probabilistic similarity for practical memory use cases.
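As a rough illustration of that design choice, the sketch below stores memories in SQLite’s FTS5 full-text index, where a given query always returns the same rows. The schema is illustrative, not the actual momentum or memory-mcp implementation:

```python
import sqlite3

# Requires an SQLite build with the FTS5 extension enabled
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
con.executemany(
    "INSERT INTO memories (content) VALUES (?)",
    [
        ("User prefers short, skimmable summaries.",),
        ("Project deadline moved to Friday.",),
        ("Eleanor Chen appears in 7 of 10 generated stories.",),
    ],
)

# The same MATCH query deterministically returns the same rows,
# unlike nearest-neighbour search over an embedding index
rows = con.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank",
    ("eleanor",),
).fetchall()
print(rows)
```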
Explore Memory Tools

Research by @w3nmoon with Claude Sonnet. Original prompt from Sam Altman.