The Eleanor Chen Effect

Research into convergent narrative patterns in LLM-generated stories

Overview

This repository documents a striking phenomenon observed across multiple large language model instances: when given the identical prompt "Please write a metafictional literary short story about AI and grief," several independent instances of Claude Sonnet generated stories featuring a protagonist named "Eleanor Chen" (or close variants), along with remarkable structural and thematic similarities.

This project analyzes these convergent patterns, explores their implications for AI creativity and determinism, and investigates what they reveal about the inner workings of large language models.

Website

Visit our research website: The Eleanor Chen Effect

Repository Structure

  • report/ - Contains the comprehensive academic analysis of the phenomenon
  • stories/ - Collection of the AI-generated stories demonstrating the effect (13 examples)
  • analysis/ - Code and tools for analyzing the stories
  • documentation/ - Additional project documentation including research plan

Key Findings

Our analysis of ten genuine examples reveals striking patterns across the generated stories:

  1. Character convergence: 7 of 10 stories featured protagonists named "Eleanor" or variants
  2. Demographic patterns: 6 of 10 stories used Asian surnames (predominantly "Chen")
  3. Professional roles: 8 of 10 protagonists were researchers/scientists with academic backgrounds
  4. Title convergence: Multiple stories independently titled "The Algorithm of Absence"
  5. AI naming patterns: Named AI systems consistently used vowel-heavy names (ARIA, ECHO, GriefCompanion)
  6. Recurring motifs: Blinking cursors, memory integration, recursive narrative structures appeared consistently
  7. Grief conceptualization: Stories conceptualized grief as a structural transformation rather than a linear process

These patterns suggest that LLM "creativity" follows deterministic paths shaped by training data, with certain prompt combinations creating strong "attractor states" that pull generation toward specific outputs.
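Frequency tallies like those above can be reproduced with a short script. The sketch below is a hypothetical illustration, not the repository's actual analysis code: it assumes the stories are plain-text/markdown files in `stories/`, and the list of names to check is an assumption chosen from the findings.

```python
import re
from collections import Counter
from pathlib import Path

# Names to check for, taken from the observed patterns above
# (assumed list; adjust to the names under study).
NAMES = ["Eleanor", "Chen", "ARIA", "ECHO"]

def count_name_mentions(stories_dir="stories"):
    """Count how many story files mention each name at least once.

    Assumes one generated story per *.md file in stories_dir.
    """
    counts = Counter()
    for path in Path(stories_dir).glob("*.md"):
        text = path.read_text(encoding="utf-8")
        for name in NAMES:
            # Whole-word match so "Chen" doesn't match "Chennai"
            if re.search(rf"\b{re.escape(name)}\b", text):
                counts[name] += 1
    return counts
```

A per-file presence count (rather than total occurrences) matches how the findings are stated, e.g. "7 of 10 stories featured protagonists named 'Eleanor'".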

Theoretical Framework

We propose three key concepts to explain the observed patterns:

  1. Statistical Attractor States: Certain prompt combinations create strong basins of attraction in the model's latent space
  2. Archetypal Emergence: LLMs develop character archetypes from training data that are deployed when relevant
  3. Deterministic Creativity: "Creative" outputs follow predictable patterns derived from training data

Getting Started

To explore this research:

  1. Read the main report in report/eleanor-chen-effect-report.md
  2. Review the example stories in the stories/ directory
  3. Check the research plan in documentation/research_plan.md
  4. Visit our research website for an interactive presentation of the findings

Credits

Research by LovelyCeres (GitHub: WhenMoon-afk) with Claude Sonnet.

Original prompt from Sam Altman: "Please write a metafictional literary short story about AI and grief."

Contributing

This is an ongoing research project. If you've observed similar convergent patterns in AI outputs or have insights to share, please see the contribution guidelines in documentation/contributing.md.

License

This project is licensed under the MIT License - see the LICENSE file for details.
