
Getting Started

Enreign edited this page Mar 13, 2026 · 2 revisions

This guide walks you from zero to a running Sparks instance. The full process takes about 10 minutes for a local Ollama setup, or 5 minutes if you already have an OpenAI/OpenRouter API key.


Prerequisites

Requirement   Version                            Notes
Rust          stable (see rust-toolchain.toml)   rustup recommended
Docker        20+                                Required for ghost sandbox execution
Python        3.11+                              Required for CI scripts and eval harness
Git           any

Optional:

  • Ollama — for fully local, no-API-key setup
  • Telegram bot token — for the Telegram frontend (--features telegram)
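
Before installing, it can help to confirm the required tools are on your PATH. A minimal sketch (tool names taken from the table above; adjust for your setup):

```shell
# Check that the prerequisite tools are installed and on PATH.
missing=0
for tool in rustc cargo docker python3 git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```

If any tool is reported MISSING, install it before continuing; version checks (e.g. `docker --version`) are left to you.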

Installation

# 1. Clone
git clone https://github.com/emberloom/sparks.git
cd sparks

# 2. Set up config
cp config.example.toml config.toml

Then edit config.toml to set your LLM provider (see LLM-Providers for full options).

Minimal config for Ollama (fully local, no API key):

[llm]
provider = "ollama"

[ollama]
url = "http://localhost:11434"
model = "qwen2.5:7b"
classifier_model = "qwen2.5:3b"
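
With the Ollama config above, you may want to confirm the server is actually reachable before starting Sparks. A hedged sketch (assumes Ollama's standard `/api/tags` endpoint and the URL from the config; only meaningful if Ollama is running locally):

```shell
# Probe the local Ollama server; falls through cleanly if it is not running.
OLLAMA_URL="http://localhost:11434"
if curl -fsS --max-time 2 "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  ollama_state="reachable"
else
  ollama_state="unreachable"
fi
echo "ollama: $ollama_state"
```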

Minimal config for OpenRouter:

[llm]
provider = "openrouter"

[openrouter]
model = "google/gemini-2.5-flash"
classifier_model = "google/gemini-2.5-flash-lite"

Set OPENROUTER_API_KEY in your environment or a .env file (never in config.toml).


Verify the Installation

# Check compilation (no warnings expected)
cargo check -q

# Run all tests
cargo test -q

# Run the doctor (no LLM required for this check)
cargo run --quiet -- doctor --skip-llm

A healthy doctor output looks like:

[OK] Docker daemon reachable
[OK] Ghost image available: rust:1.93
[OK] Memory model dir exists: ~/.sparks/models/all-MiniLM-L6-v2
[OK] DB path writable: ~/.sparks/sparks.db

If you see [WARN] or [FAIL] entries, see Troubleshooting.
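
If you run the doctor from a setup script, you can scan its output for problems. A sketch, assuming the `[OK]`/`[WARN]`/`[FAIL]` prefixes shown above (the sample output here is illustrative):

```shell
# Flag any WARN/FAIL lines in doctor output; swap the sample text for
# the real command, e.g. doctor_output=$(cargo run --quiet -- doctor --skip-llm).
doctor_output='[OK] Docker daemon reachable
[WARN] Ghost image not found: rust:1.93'
if printf '%s\n' "$doctor_output" | grep -Eq '\[(WARN|FAIL)\]'; then
  status="issues"
else
  status="healthy"
fi
echo "doctor status: $status"
```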


Download the Embedding Model

Sparks uses a local ONNX model for semantic memory. On first run, Sparks attempts to download the model automatically, but you can pre-stage it:

# The model lives at:
~/.sparks/models/all-MiniLM-L6-v2/

# Verify it's present:
cargo run --quiet -- doctor --skip-llm | grep -i embed
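
As an alternative to the doctor, a minimal sketch that checks the model directory directly (path taken from above; Sparks downloads the model on first run if it is absent):

```shell
# Check whether the embedding model is already staged locally.
MODEL_DIR="${HOME}/.sparks/models/all-MiniLM-L6-v2"
if [ -d "$MODEL_DIR" ]; then
  model_state="present"
else
  model_state="missing"
fi
echo "embedding model: $model_state ($MODEL_DIR)"
```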

First Run

Interactive Chat

cargo run --quiet -- chat

This starts the REPL. Type a goal; Sparks classifies it, selects a ghost, and executes the task inside a Docker sandbox.

Dispatch a Task

cargo run --quiet -- dispatch --goal "Add a README badge for the CI workflow" --wait-secs 120

List Configured Ghosts

cargo run --quiet -- ghosts
# Deterministic (no ~/.sparks overrides):
SPARKS_DISABLE_HOME_PROFILES=1 cargo run --quiet -- ghosts

Health Check

cargo run --quiet -- doctor
cargo run --quiet -- doctor --security   # Print security attestation

Secrets

Never put API keys directly in config.toml. Sparks blocks inline secrets by default.

Recommended approaches (in order of preference):

  1. OS keyring (most secure):

     sparks secrets set openrouter.api_key

  2. .env file (gitignored):

     echo 'OPENROUTER_API_KEY=sk-...' >> .env

  3. Shell environment:

     export OPENROUTER_API_KEY=sk-...

Override for development only:

SPARKS_ALLOW_INLINE_SECRETS=1 cargo run -- chat
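
The .env option above assumes the file really is gitignored. A minimal sketch that enforces this before you store a key (creates .gitignore if needed; uses an exact-line fixed-string match):

```shell
# Ensure .gitignore contains a ".env" entry, adding one if missing.
grep -Fqx '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
env_ignored=$(grep -Fcx '.env' .gitignore)
echo ".env entries in .gitignore: $env_ignored"
```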

Next Steps

Goal                                   Page
Configure providers, ghosts, memory    Configuration-Reference
Understand ghost sub-agents            Ghosts
Connect an MCP tool server             MCP-Integration
Wire up ticket intake                  Ticket-Intake
Enable observability                   Observability
Understand the full architecture       Architecture
