A mythogenetic world generation system that creates rich, internally-consistent universes through a sequence of interconnected modules.
MiroForge generates complete fictional universes by building them layer by layer:
- Seed - The foundational narrative pattern: core themes, metaphysical logic, symbolic vocabulary
- Cosmic - Cosmological structure: creation myths, cosmic forces, dimensional boundaries
- World - Physical geography: continents, biomes, climates, resources
- Culture - Civilizations: values, beliefs, social structures, exchange networks
- Socio - Politics and economics: polities, class systems, power structures
- Tech/Magic - Systems of power: technologies, magical traditions, their rules and limits
- Fine Detail - Micro-level texture: foods, gestures, sayings, crafts, everyday artifacts
- Character & Stories - Individual lives: characters, artifacts, and tales
Each layer builds on the previous ones, ensuring thematic and symbolic consistency throughout.
- Python 3.10+
- A Gemini API key (get one free at Google AI Studio)
- Or Ollama installed locally for offline generation
- Clone the repository
- Run `start_miroforge.bat` (Windows); on first run, you'll be prompted to enter your Gemini API key
- The web UI will open automatically at http://127.0.0.1:8000
- Enter a seed prompt describing your universe concept (or leave blank for a random generation)
- Select your backend:
- API (Gemini) - Cloud-based, faster responses
- Local (Ollama) - Offline, uses your local models
- Select a model
- Click "Generate" to run the full pipeline—all 8 modules generate in sequence
Generated universes are saved to the `generations/` directory and can be exported via the UI.
If you have Ollama installed, MiroForge will automatically detect your available models when you switch to the Local backend. This allows fully offline universe generation with no API costs.
Each module has three components:
- Context Profile (`context_profiles/`) - Describes the semantic meaning of each field
- Example (`context_profiles/`) - Shows the expected structure (used as reference, never copied)
- JSON Schema (`schemas/modules/`) - Validates output structure
The LLM receives all upstream module data plus these files, ensuring each layer is informed by everything that came before.
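As a rough illustration, prompt assembly for one module might look like the sketch below. The function name, prompt wording, and field names here are hypothetical, not MiroForge's actual code:

```python
import json

def build_module_prompt(upstream: dict, context_profile: str, schema: dict) -> str:
    """Assemble a module prompt from upstream data, the context profile,
    and the JSON Schema the output must satisfy (illustrative sketch)."""
    parts = [
        "You are generating one layer of a fictional universe.",
        "Upstream module data (do not contradict it):",
        json.dumps(upstream, indent=2),
        "Field-by-field context profile:",
        context_profile,
        "Your output MUST be valid JSON matching this schema:",
        json.dumps(schema, indent=2),
    ]
    return "\n\n".join(parts)

prompt = build_module_prompt(
    upstream={"seed": {"core_themes": ["entropy", "renewal"]}},
    context_profile="core_themes: the universe's central motifs",
    schema={"type": "object", "required": ["creation_myth"]},
)
```

Because the upstream dictionary grows with every completed module, later layers like Fine Detail see the full accumulated universe, which is what enforces consistency.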
- Prompt is constructed with upstream data, context, and schema requirements
- LLM generates JSON output
- Output is cleaned (trailing commas, comments, formatting issues)
- Field names are normalized via fuzzy matching to handle LLM variations
- Schema validation ensures structural correctness
- Result is saved with metadata
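The cleaning and normalization steps above can be sketched with the standard library. This is an illustrative approximation, not the actual code in `core/field_normalizer.py`:

```python
import difflib
import json
import re

def strip_trailing_commas(raw: str) -> str:
    # Naive cleanup: drop commas that directly precede } or ].
    # (A real cleaner must also avoid touching commas inside strings.)
    return re.sub(r",\s*([}\]])", r"\1", raw)

def normalize_fields(data: dict, expected: list[str], cutoff: float = 0.8) -> dict:
    # Map each key the LLM produced onto the closest expected field name;
    # keys with no sufficiently close match are passed through unchanged.
    out = {}
    for key, value in data.items():
        match = difflib.get_close_matches(key, expected, n=1, cutoff=cutoff)
        out[match[0] if match else key] = value
    return out

# A typical LLM glitch: a trailing comma plus slightly wrong field names.
raw = '{"creation_mythos": "...", "cosmic_forcez": ["tide"],}'
data = json.loads(strip_trailing_commas(raw))
clean = normalize_fields(data, expected=["creation_myth", "cosmic_forces"])
```

After these two passes, `clean` carries the schema's canonical field names, so validation can proceed even when the model misspells keys.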
The system is designed for fast development cycles:
- Hot Reload - The server runs with `--reload`, so code changes apply instantly
- Live Logs - Server logs display in the UI, showing prompt construction, API calls, and errors
- Module Testing - Individual modules can be tested in isolation using example data from previous modules
Full pipeline generation can take 30+ minutes depending on your LLM—each module may take 5-10 minutes with slower models, and there are 8 modules in sequence.
Scroll down in the UI to find test buttons for each module. These use pre-built example data for upstream modules, allowing you to:
- Test a single module in isolation without waiting for the full pipeline
- Iterate on prompts, schemas, and normalization logic
- Debug LLM output parsing independently
- Stop generation at that specific module
This was essential during development—instead of waiting through the entire pipeline to test changes to the Culture module, you can test it directly in seconds.
| File | Purpose |
|---|---|
| `core/llm_dispatch.py` | Unified LLM backend (Ollama local / Gemini API) |
| `core/field_normalizer.py` | Fuzzy field name matching and typo correction |
| `core/generate_*.py` | Individual module generators |
| `interface/server.py` | FastAPI server and pipeline orchestration |
```
miroforge/
├── core/                # Module generators and utilities
├── context_profiles/    # Context and examples for each module
├── schemas/modules/     # JSON schemas defining module structure
├── interface/           # Web UI and API server
├── generations/         # Output directory for generated universes
├── start_miroforge.bat  # Windows launcher
└── requirements.txt     # Python dependencies
```
The `.env` file stores your API key:

```
GEMINI_API_KEY=your_key_here
```
This file is created automatically on first run and is git-ignored.
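Reading such a file is straightforward; here is a minimal hand-rolled parser for simple `KEY=value` lines. MiroForge's own loading code may differ (for example, it may use a library such as python-dotenv):

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env yet: first run will create it
    return env

# Demo: write a throwaway file and read the key back.
with open("demo.env", "w") as f:
    f.write("# local settings\nGEMINI_API_KEY=your_key_here\n")
key = load_env("demo.env").get("GEMINI_API_KEY")
os.remove("demo.env")
```

Keeping the key in a git-ignored file rather than in code means it never lands in the repository history.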
- `GET /` - Web UI
- `POST /generate` - Start full pipeline generation
- `GET /progress/{session_id}` - Stream generation progress
- `GET /result/{session_id}` - Get generation results
- `POST /test/{module}` - Test individual module with example data
- `GET /export/{universe_id}` - Export a specific module
- `GET /export_all/{universe_id}` - Export entire universe
- `GET /models` - List available Ollama models
MIT