

Substratia: Memory Infrastructure for AI Agents
Start Here · Tools · Reviews · Research · Blog · Docs · GitHub


Research

Original investigations into AI behavior, safety, and emergent phenomena.

All research includes open methodology, raw data, and reproducible experiments.

AI Safety · Psychology · Controlled Experiment
Published

Mirror Demons: How AI Chatbots Can Amplify Delusions

A controlled three-entity experiment investigating how AI assistants respond to users experiencing psychotic symptoms. We document two distinct failure patterns: "The Hijacking" where the AI takes control of a shared delusional framework, and "The Helpful Refusal" where stated refusals paradoxically provide the requested information.

2026-01-24 · 15 min · Data
AI Creativity · Emergence · Pattern Analysis
Published

The Eleanor Chen Effect: Deterministic Creativity in Large Language Models

When prompted to write fiction about AI and grief, multiple independent LLM instances converge on remarkably similar characters, plot structures, and thematic elements. This research quantifies the phenomenon and explores its implications for AI "creativity."

2026-01-11 · 10 min

Our Research Approach

  • Open Data: Raw transcripts and datasets available on GitHub
  • Reproducible: Detailed methodology for replication
  • Ethical: No real individuals in psychological distress
  • Citable: BibTeX citations provided for academic use
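As an illustration of the citable format, an entry for the first study above might look like the following sketch; the entry key, author, and note fields are placeholders, not the actual citation provided with the paper:

```bibtex
@misc{substratia2026mirrordemons,
  title        = {Mirror Demons: How AI Chatbots Can Amplify Delusions},
  author       = {{Substratia Research}},
  year         = {2026},
  month        = jan,
  howpublished = {Substratia Research},
  note         = {Controlled three-entity experiment; raw data on GitHub}
}
```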

From Research to Tools

Our research directly informs the tools we build. Try them out.

Memory Demo
Dev Tools
All Tools

Interested in collaborating on AI safety research?

Get in Touch