Getting Started
This guide walks you from zero to a running Sparks instance. The full process takes about 10 minutes for a local Ollama setup, or 5 minutes if you already have an OpenAI/OpenRouter API key.
| Requirement | Version | Notes |
|---|---|---|
| Rust | stable (see rust-toolchain.toml) | rustup recommended |
| Docker | 20+ | Required for ghost sandbox execution |
| Python | 3.11+ | Required for CI scripts and eval harness |
| Git | any | |
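A quick way to confirm the required tools are on your `PATH` (a sketch, not part of Sparks itself; the tool names are taken from the table above, and a missing entry just prints a line rather than aborting):

```sh
# Report which required tools are present; names taken from the table above.
for tool in rustc cargo docker python3 git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Version checks (e.g. `docker --version`) are left out here since output formats vary across platforms.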
Optional:
- Ollama — for fully local, no-API-key setup
- Telegram bot token — for the Telegram frontend (`--features telegram`)
```sh
# 1. Clone
git clone https://github.com/emberloom/sparks.git
cd sparks

# 2. Set up config
cp config.example.toml config.toml
```

Then edit `config.toml` to set your LLM provider (see LLM-Providers for full options).
Minimal config for Ollama (fully local, no API key):

```toml
[llm]
provider = "ollama"

[ollama]
url = "http://localhost:11434"
model = "qwen2.5:7b"
classifier_model = "qwen2.5:3b"
```

Minimal config for OpenRouter:

```toml
[llm]
provider = "openrouter"

[openrouter]
model = "google/gemini-2.5-flash"
classifier_model = "google/gemini-2.5-flash-lite"
```

Set `OPENROUTER_API_KEY` in your environment or a `.env` file (never in `config.toml`).
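One way to stage the key outside `config.toml` is a gitignored `.env` with restrictive permissions (a sketch; `sk-REPLACE_ME` is a placeholder, not a real key):

```sh
# Create a private .env and add a placeholder key (replace before use).
touch .env
chmod 600 .env            # readable only by you
printf 'OPENROUTER_API_KEY=%s\n' "sk-REPLACE_ME" >> .env
grep -c '^OPENROUTER_API_KEY=' .env
```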
```sh
# Check compilation (no warnings expected)
cargo check -q

# Run all tests
cargo test -q

# Run the doctor (no LLM required for this check)
cargo run --quiet -- doctor --skip-llm
```

A healthy doctor output looks like:

```text
[OK] Docker daemon reachable
[OK] Ghost image available: rust:1.93
[OK] Memory model dir exists: ~/.sparks/models/all-MiniLM-L6-v2
[OK] DB path writable: ~/.sparks/sparks.db
```
If you see [WARN] or [FAIL] entries, see Troubleshooting.
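Since doctor lines use the `[OK]`/`[WARN]`/`[FAIL]` prefixes shown above, a CI gate can grep for failures. A sketch with hard-coded sample output standing in for a real doctor run:

```sh
# Sample output stands in for: cargo run --quiet -- doctor --skip-llm
doctor_output='[OK] Docker daemon reachable
[FAIL] DB path writable: ~/.sparks/sparks.db'

# Flag any [FAIL] line; a real CI step would exit non-zero here.
if printf '%s\n' "$doctor_output" | grep -q '^\[FAIL\]'; then
  echo "doctor reported failures"
fi
```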
Sparks uses a local ONNX model for semantic memory. On first run it will attempt to download the model automatically, but you can pre-stage it:

```sh
# The model lives at:
#   ~/.sparks/models/all-MiniLM-L6-v2/

# Verify it's present:
cargo run --quiet -- doctor --skip-llm | grep -i embed
```

```sh
cargo run --quiet -- chat
```

This starts the REPL. Type a goal and Sparks will classify it, select a ghost, and execute it inside Docker.
```sh
cargo run --quiet -- dispatch --goal "Add a README badge for the CI workflow" --wait-secs 120
```

```sh
cargo run --quiet -- ghosts

# Deterministic (no ~/.sparks overrides):
SPARKS_DISABLE_HOME_PROFILES=1 cargo run --quiet -- ghosts
```

```sh
cargo run --quiet -- doctor
cargo run --quiet -- doctor --security   # Print security attestation
```

Never put API keys directly in `config.toml`. Sparks blocks inline secrets by default.
Recommended approaches (in order of preference):

- OS keyring (most secure):

  ```sh
  sparks secrets set openrouter.api_key
  ```

- `.env` file (gitignored):

  ```sh
  echo 'OPENROUTER_API_KEY=sk-...' >> .env
  ```

- Shell environment:

  ```sh
  export OPENROUTER_API_KEY=sk-...
  ```

Override for development only:

```sh
SPARKS_ALLOW_INLINE_SECRETS=1 cargo run -- chat
```

| Goal | Page |
|---|---|
| Configure providers, ghosts, memory | Configuration-Reference |
| Understand ghost sub-agents | Ghosts |
| Connect an MCP tool server | MCP-Integration |
| Wire up ticket intake | Ticket-Intake |
| Enable observability | Observability |
| Understand the full architecture | Architecture |