Self-hostable AI system that captures audio/video data from OMI devices and other sources to generate memories, action items, and contextual insights about your conversations and daily interactions.
Quick Start → Get Started
Run setup wizard, start services, access at http://localhost:5173
[Mobile App - Screenshot coming soon]
- Mobile app for OMI devices via Bluetooth
- Backend services (simple → advanced with full AI features)
- Web dashboard for conversation and memory management
- Optional services: Speaker recognition, offline ASR, distributed deployment
- Setup Guide - Start here
- Full Documentation - Comprehensive reference
- Architecture Details - Technical deep dive
- Docker/K8s - Container deployment
chronicle/
├── app/                      # React Native mobile app
│   ├── app/                  # App components and screens
│   └── plugins/              # Expo plugins
├── backends/
│   ├── advanced/             # Main AI backend (FastAPI)
│   │   ├── src/              # Backend source code
│   │   ├── init.py           # Interactive setup wizard
│   │   └── docker-compose.yml
│   ├── simple/               # Basic backend implementation
│   └── other-backends/       # Example implementations
├── extras/
│   ├── speaker-recognition/  # Voice identification service
│   ├── asr-services/         # Offline speech-to-text (Parakeet)
│   └── openmemory-mcp/       # External memory server
├── Docs/                     # Technical documentation
├── config/                   # Central configuration files
├── tests/                    # Integration & unit tests
├── wizard.py                 # Root setup orchestrator
├── services.py               # Service lifecycle manager
└── *.sh                      # Convenience scripts (wrappers)
┌───────────────────────────────────────────────────────────┐
│                     Chronicle System                      │
├───────────────────────────────────────────────────────────┤
│                                                           │
│   ┌────────────┐      ┌────────────┐      ┌──────────┐    │
│   │ Mobile App │─────►│  Backend   │─────►│ MongoDB  │    │
│   │  (React    │      │ (FastAPI)  │      │          │    │
│   │   Native)  │      │            │      └──────────┘    │
│   └────────────┘      └─────┬──────┘                      │
│                             │                             │
│              ┌──────────────┼──────────────┐              │
│              │              │              │              │
│        ┌─────▼─────┐  ┌─────▼─────┐  ┌─────▼─────┐        │
│        │ Deepgram  │  │  OpenAI   │  │  Qdrant   │        │
│        │   STT     │  │   LLM     │  │ (Vector   │        │
│        │           │  │           │  │  Store)   │        │
│        └───────────┘  └───────────┘  └───────────┘        │
│                                                           │
│   Optional Services:                                      │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐    │
│   │   Speaker    │  │   Parakeet   │  │    Ollama    │    │
│   │ Recognition  │  │ (Local ASR)  │  │ (Local LLM)  │    │
│   └──────────────┘  └──────────────┘  └──────────────┘    │
└───────────────────────────────────────────────────────────┘
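The data flow in the diagram can be sketched as a simple pipeline. This is illustrative only: every function and class name below is hypothetical, not the backend's actual API. Audio arrives from the mobile app, is transcribed (Deepgram by default, Parakeet if self-hosted), then summarized into a memory by an LLM (OpenAI by default, Ollama if self-hosted) before being persisted.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    transcript: str
    summary: str

def transcribe(audio_chunk: bytes) -> str:
    # Placeholder for the STT call (Deepgram in the default setup).
    return "<transcript of %d bytes>" % len(audio_chunk)

def summarize(transcript: str) -> str:
    # Placeholder for the LLM call; here we just truncate.
    return transcript[:40]

def process(audio_chunk: bytes) -> Memory:
    """One audio chunk in, one Memory out: STT, then LLM summarization."""
    transcript = transcribe(audio_chunk)
    return Memory(transcript=transcript, summary=summarize(transcript))
```

In the real system the Memory would then be written to MongoDB and embedded into Qdrant for semantic retrieval.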
# Interactive setup wizard (recommended for first-time users)
./wizard.sh
# Full command (what the script wraps)
uv run --with-requirements setup-requirements.txt python wizard.py

Note: Convenience scripts (*.sh) are wrappers around wizard.py and services.py that simplify the longer uv run commands.
# Start all configured services
./start.sh
# Restart all services (preserves containers)
./restart.sh
# Check service status
./status.sh
# Stop all services
./stop.sh

Full commands (click to expand)
# What the convenience scripts wrap
uv run --with-requirements setup-requirements.txt python services.py start --all --build
uv run --with-requirements setup-requirements.txt python services.py restart --all
uv run --with-requirements setup-requirements.txt python services.py status
uv run --with-requirements setup-requirements.txt python services.py stop --all

# Backend development
cd backends/advanced
uv run python src/main.py
# Run tests
./run-test.sh
# Mobile app
cd app
npm start

# Backend health
curl http://localhost:8000/health
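For scripting, the same health check can be done from Python. A minimal sketch, assuming the `/health` path and port from the curl command above (the function name is made up for illustration):

```python
import urllib.request

def backend_healthy(base_url: str = "http://localhost:8000", timeout: float = 2.0) -> bool:
    """Return True if the backend's /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout all count as unhealthy.
        return False
```

Handy for wait-for-backend loops in CI or deployment scripts.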
# Web dashboard
open http://localhost:5173

This is a small part of the larger idea of "have various sensors feeding the state of YOUR world to computers/AI and get some use out of it."
Use cases are numerous - OMI Mentor is one of them. Friend/Omi/pendants are a small but important part of this, since they capture personal spoken context best. OMI-like devices with a camera - or smart glasses, which also double as a display - can capture visual context as well.
Regardless, this repo aims to do the minimal version of this: multiple OMI-like audio devices feeding in audio data, and from it deriving:
- Memories
- Action item detection (partial implementation)
- Home automation integration (planned)
- Multi-device coordination (planned)
- Visual context capture (smart glasses integration planned)
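As a toy illustration of the action-item idea: scan a transcript for commitment phrases. The real detection would use an LLM; this regex version is only a sketch of the concept, with all names hypothetical.

```python
import re

# Phrases that often introduce a commitment in conversation.
ACTION_CUES = re.compile(
    r"\b(?:I(?:'ll| will)|we(?:'ll| will)|need to|remind me to)\s+"
    r"(.{3,80}?)(?:[.!?]|$)",
    re.IGNORECASE,
)

def extract_action_items(transcript: str) -> list[str]:
    """Return rough action-item candidates found in a transcript."""
    return [m.group(1).strip() for m in ACTION_CUES.finditer(transcript)]
```

For example, `extract_action_items("I'll send the report tomorrow.")` yields `["send the report tomorrow"]`, while small talk yields nothing.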


