templeoflum/miroforge

MiroForge

A mythogenetic world generation system that creates rich, internally consistent universes through a sequence of interconnected modules.

What It Does

MiroForge generates complete fictional universes by building them layer by layer:

  1. Seed - The foundational narrative pattern: core themes, metaphysical logic, symbolic vocabulary
  2. Cosmic - Cosmological structure: creation myths, cosmic forces, dimensional boundaries
  3. World - Physical geography: continents, biomes, climates, resources
  4. Culture - Civilizations: values, beliefs, social structures, exchange networks
  5. Socio - Politics and economics: polities, class systems, power structures
  6. Tech/Magic - Systems of power: technologies, magical traditions, their rules and limits
  7. Fine Detail - Micro-level texture: foods, gestures, sayings, crafts, everyday artifacts
  8. Character & Stories - Individual lives: characters, artifacts, and tales

Each layer builds on the previous ones, ensuring thematic and symbolic consistency throughout.

Quick Start

Requirements

  • Python 3.10+
  • A Gemini API key (get one free at Google AI Studio), or
  • Ollama installed locally for offline generation

Setup

  1. Clone the repository
  2. Run start_miroforge.bat (Windows)
  3. On first run, you'll be prompted to enter your Gemini API key
  4. The web UI will open automatically at http://127.0.0.1:8000

Usage

  1. Enter a seed prompt describing your universe concept (or leave blank for a random generation)
  2. Select your backend:
    • API (Gemini) - Cloud-based, faster responses
    • Local (Ollama) - Offline, uses your local models
  3. Select a model
  4. Click "Generate" to run the full pipeline; all 8 modules generate in sequence

Generated universes are saved to the generations/ directory and can be exported via the UI.

Using Ollama

If you have Ollama installed, MiroForge will automatically detect your available models when you switch to the Local backend. This allows fully offline universe generation with no API costs.
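
Detection along these lines is possible because Ollama exposes a local REST API. The sketch below assumes Ollama's standard `GET /api/tags` endpoint on its default port; the actual MiroForge detection code may differ.

```python
# Sketch of Local-backend model detection, assuming Ollama's standard
# REST API (GET /api/tags on its default port). Illustrative only; the
# actual MiroForge detection logic may differ.
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"  # Ollama's default address

def parse_model_names(tags_response: dict) -> list[str]:
    # /api/tags responds with {"models": [{"name": "llama3:8b", ...}, ...]}
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models() -> list[str]:
    try:
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=2) as resp:
            return parse_model_names(json.load(resp))
    except OSError:
        return []  # Ollama not running: the UI can fall back to the API backend
```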

Architecture

Schema-Driven Generation

Each module has three components:

  • Context Profile (context_profiles/) - Describes the semantic meaning of each field
  • Example (context_profiles/) - Shows the expected structure (used as reference, never copied)
  • JSON Schema (schemas/modules/) - Validates output structure

The LLM receives all upstream module data plus these files, ensuring each layer is informed by everything that came before.
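
As a rough illustration of how the three components pair up, consider a hypothetical "Culture" module. The field names and file contents below are invented for this example; the real profiles and schemas live in context_profiles/ and schemas/modules/.

```python
# Hypothetical components for a "Culture" module -- illustrative only.

# Context profile: the semantic meaning of each field
CONTEXT_PROFILE = {
    "values": "Core moral commitments of the civilization, tied to the Seed's themes",
    "exchange_networks": "Trade and gift relationships with neighboring cultures",
}

# JSON Schema: the structure the LLM's output must satisfy
CULTURE_SCHEMA = {
    "type": "object",
    "required": ["values", "exchange_networks"],
    "properties": {
        "values": {"type": "array", "items": {"type": "string"}},
        "exchange_networks": {"type": "array", "items": {"type": "string"}},
    },
}

def satisfies_required(output: dict, schema: dict) -> bool:
    # Minimal required-keys check; full validation would use a JSON Schema library
    return all(key in output for key in schema.get("required", []))
```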

Output Pipeline

  1. Prompt is constructed with upstream data, context, and schema requirements
  2. LLM generates JSON output
  3. Output is cleaned (trailing commas, comments, formatting issues)
  4. Field names are normalized via fuzzy matching to handle LLM variations
  5. Schema validation ensures structural correctness
  6. Result is saved with metadata
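
Steps 3-5 can be sketched roughly as follows. This is illustrative only (it handles just trailing commas, not comments or other formatting issues); the real logic lives in core/field_normalizer.py and the module generators.

```python
# Rough sketch of pipeline steps 3-5: cleaning, fuzzy field
# normalization, and parsing. Illustrative, not the project's code.
import difflib
import json
import re

def clean_json(raw: str) -> str:
    # Step 3 (partial): strip trailing commas the LLM sometimes emits
    # (naive: a production cleaner must not touch commas inside strings)
    return re.sub(r",\s*([}\]])", r"\1", raw)

def normalize_fields(data: dict, expected: list[str]) -> dict:
    # Step 4: map each generated key to the closest expected field name
    out = {}
    for key, value in data.items():
        match = difflib.get_close_matches(key, expected, n=1, cutoff=0.8)
        out[match[0] if match else key] = value
    return out

raw = '{"continants": ["Velmar", "Oskarra"],}'   # typo'd key, trailing comma
data = json.loads(clean_json(raw))
normalized = normalize_fields(data, ["continents", "biomes"])
```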

Development

Rapid Iteration Workflow

The system is designed for fast development cycles:

  • Hot Reload - The server runs with --reload, so code changes apply instantly
  • Live Logs - Server logs display in the UI, showing prompt construction, API calls, and errors
  • Module Testing - Individual modules can be tested in isolation using example data from previous modules

Testing Individual Modules

Full pipeline generation can take 30+ minutes depending on your LLM: each module may take 5-10 minutes with slower models, and there are 8 modules in sequence.

Scroll down in the UI to find test buttons for each module. These use pre-built example data for upstream modules, allowing you to:

  • Test a single module in isolation without waiting for the full pipeline
  • Iterate on prompts, schemas, and normalization logic
  • Debug LLM output parsing independently
  • Stop generation at that specific module

This is essential during development: instead of waiting through the entire pipeline to test a change to the Culture module, you can test it directly in seconds.

Key Files

  • core/llm_dispatch.py - Unified LLM backend (Ollama local / Gemini API)
  • core/field_normalizer.py - Fuzzy field name matching and typo correction
  • core/generate_*.py - Individual module generators
  • interface/server.py - FastAPI server and pipeline orchestration

Project Structure

miroforge/
├── core/                    # Module generators and utilities
├── context_profiles/        # Context and examples for each module
├── schemas/modules/         # JSON schemas defining module structure
├── interface/               # Web UI and API server
├── generations/             # Output directory for generated universes
├── start_miroforge.bat      # Windows launcher
└── requirements.txt         # Python dependencies

Configuration

The .env file stores your API key:

GEMINI_API_KEY=your_key_here

This file is created automatically on first run and is git-ignored.
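
Reading that key back is a one-liner in most setups. The sketch below is a minimal, self-contained .env parser; the project's actual loader may use python-dotenv or its own logic.

```python
# Minimal sketch of .env handling; the project's actual loader may use
# python-dotenv or its own parsing.
import os
from pathlib import Path

def load_env(path: str = ".env") -> dict[str, str]:
    env = {}
    p = Path(path)
    if p.exists():
        for line in p.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

# Prefer the .env value, fall back to the process environment
api_key = load_env().get("GEMINI_API_KEY") or os.environ.get("GEMINI_API_KEY")
```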

API Endpoints

  • GET / - Web UI
  • POST /generate - Start full pipeline generation
  • GET /progress/{session_id} - Stream generation progress
  • GET /result/{session_id} - Get generation results
  • POST /test/{module} - Test individual module with example data
  • GET /export/{universe_id} - Export a specific module
  • GET /export_all/{universe_id} - Export entire universe
  • GET /models - List available Ollama models
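
A hypothetical client for these endpoints might look like this. The endpoint paths come from the list above, but the request payload shape (`{"prompt": ...}`) is an assumption, not a documented contract.

```python
# Hypothetical client sketch for the MiroForge endpoints; the POST
# payload shape is assumed, not documented.
import json
import urllib.request

BASE = "http://127.0.0.1:8000"

def endpoint(template: str, **params: str) -> str:
    # Fill path parameters such as {session_id} into an endpoint template
    return BASE + template.format(**params)

def start_generation(seed_prompt: str) -> dict:
    req = urllib.request.Request(
        endpoint("/generate"),
        data=json.dumps({"prompt": seed_prompt}).encode(),  # payload shape assumed
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Progress could then be polled from `endpoint("/progress/{session_id}", session_id=...)` using the session ID returned by the generate call.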

License

MIT
