Enreign edited this page Mar 13, 2026 · 3 revisions

# Emberloom Sparks Wiki

Self-hosted multi-agent orchestrator with a hardened execution sandbox, semantic memory, and deep observability — built in Rust.

CI · License: MIT


## What is Sparks?

Sparks is a self-hosted Rust multi-agent system that turns goals and external tickets into safely executed, auditable autonomous tasks. Named sub-agents called ghosts run inside hardened Docker containers with explicit tool allowlists. A classifier routes each task to the right ghost based on historical KPI outcomes. A local embedding pipeline with HNSW search accumulates cross-session memory. A 20-type event stream, Langfuse traces, and a `doctor` command make everything inspectable.

It is a portfolio/learning project — not a SaaS product. It exists because building these subsystems from scratch is the fastest way to understand them.


## Navigation

### Getting Started

| Page | Description |
| --- | --- |
| Getting-Started | Installation, first run, quick start |
| Configuration-Reference | All `config.toml` sections and keys |
| CLI-Reference | Every command and flag |
| Troubleshooting | Common issues, FAQ, diagnostics |

### Core Concepts

| Page | Description |
| --- | --- |
| Architecture | Component diagram, data flow, state machines |
| Ghosts | Sub-agents: personas, strategies, soul files |
| Memory-System | ONNX embeddings, HNSW index, SQLite, recency decay |
| Execution-Strategies | `react` loop vs `code` multi-phase pipeline |
| Sandboxing-and-Safety | Docker hardening, 5-level autonomy ladder, prompt scanner |

### Integrations

| Page | Description |
| --- | --- |
| LLM-Providers | OpenAI, Ollama, OpenRouter, Zen: setup and fallback |
| MCP-Integration | Config-driven tool registry, namespacing, allowlists |
| Ticket-Intake | Linear, webhook, and custom intake sources |
| OpenAI-Compatible-API | Drop-in `/v1/chat/completions` for IDE plugins |
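Because the API is OpenAI-compatible, any client that speaks the standard chat-completions format should work against it. A minimal request body sketch (the model name below is a placeholder; query `/v1/models` to see what the server actually exposes):

```json
{
  "model": "sparks-default",
  "messages": [
    { "role": "user", "content": "Summarize the open tickets in this repo." }
  ],
  "stream": false
}
```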

### Operations

| Page | Description |
| --- | --- |
| Observability | Event stream, Langfuse, KPI tracking, `doctor` |
| Contributing | Dev setup, CI gates, PR guidelines |

## Quick Start

```sh
git clone https://github.com/emberloom/sparks.git && cd sparks
cp config.example.toml config.toml
# Edit config.toml — set [llm] provider and credentials
cargo run --quiet -- doctor --skip-llm
cargo run --quiet -- chat
```
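The `[llm]` section you edit in step three might look like the following. This is an illustrative sketch, not the authoritative schema; the `provider`, `model`, and `api_key_env` key names are assumptions, so check Configuration-Reference for the real keys.

```toml
# Hypothetical config.toml fragment — key names are illustrative,
# see Configuration-Reference for the actual schema.
[llm]
provider = "openai"             # or "ollama", "openrouter", "zen"
model = "gpt-4o-mini"           # any model your provider serves
api_key_env = "OPENAI_API_KEY"  # read credentials from the environment
```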

Full setup guide: Getting-Started


## Key Features at a Glance

| Feature | Details |
| --- | --- |
| Hardened sandbox | `CAP_DROP ALL`, read-only rootfs, SSRF/path-traversal blocking, PID + memory limits |
| Local embeddings | 384-dim ONNX vectors, HNSW nearest-neighbor search, no external API required |
| KPI-driven routing | Ghost selection driven by historical success/rollback rates per repo + lane |
| 20-type event stream | All emitted via Unix socket; CI-enforced to have at least one emit site |
| 5-level safety model | Autonomy ladder governing what ghosts may do without confirmation |
| MCP tool registry | Connect any MCP server via config; tools exposed as `mcp:<server>:<tool>` |
| OpenAI-compatible API | `/v1/models` + `/v1/chat/completions` for IDE integrations |
| Self-improvement | Eval harness, optimizer tournament, KPI-driven ghost selection |
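The MCP tool registry row above implies a config-driven setup. A sketch of what connecting a server and allowlisting its tools might look like, under the assumption that servers are declared in `config.toml` (the section and key names here are hypothetical; MCP-Integration documents the real ones):

```toml
# Hypothetical fragment — section and key names are illustrative.
[mcp.servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
# Tools from this server become callable as mcp:github:<tool>
allow = ["search_issues", "get_pull_request"]
```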

## What Sparks Does Not Do (Yet)

- No IDE integration: CLI and Telegram only
- No git worktree workspace isolation: Docker isolation only
- No interactive TUI: CLI output and observer socket only
- No auto-merge by default: PR creation uses the `gh` CLI

Source: github.com/emberloom/sparks · CHANGELOG · Issues
