
# Configuration Reference

Enreign edited this page Mar 13, 2026 · 2 revisions


Sparks is configured via `config.toml` (copied from `config.example.toml`). Secrets belong in environment variables, a `.env` file, or the OS keyring; secrets inlined in `config.toml` are blocked by default.
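For example, a minimal `.env` might look like the following; the variable names are illustrative and depend on which providers and integrations you enable (see LLM-Providers):

```shell
# .env (keep out of version control; names below are illustrative)
OPENAI_API_KEY=sk-...
LINEAR_API_KEY=lin_api_...
```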


## [runtime]

```toml
[runtime]
profile = "container_strict"  # default
```

| Value | Behavior |
|---|---|
| `container_strict` | Docker-isolated execution (default, recommended) |
| `self_dev_trusted` | Trusted host execution for allowlisted repos only |
| `local_only` | Fully local constraints: no external APIs, no cloud integrations |

## [llm]

```toml
[llm]
provider = "openai"   # "openai" | "ollama" | "openrouter" | "zen"
```

Each provider has two model slots:

| Slot | Key | Used by |
|---|---|---|
| Main | `model` | Ghost task execution (use a capable model) |
| Classifier | `classifier_model` | Routing decisions only (use a lighter/cheaper model) |
Always set `classifier_model` to a lighter model. If omitted, the main model handles every routing decision, which is slow and expensive.
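A sketch of the split, assuming the OpenAI provider and the main-slot key `model` from the table above (the model names here are illustrative, not project defaults):

```toml
[llm]
provider         = "openai"
model            = "gpt-4o"        # main slot: capable model for ghost tasks
classifier_model = "gpt-4o-mini"   # lighter model for routing decisions only
```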

See LLM-Providers for per-provider auth and URL configuration.


## [docker]

```toml
[docker]
image        = "rust:1.93"
socket_path  = "/var/run/docker.sock"
runtime      = "runc"
memory_limit = 268435456   # 256 MiB per container
```

| Key | Description |
|---|---|
| `image` | Docker image used for ghost containers. Pin by digest for reproducibility. |
| `socket_path` | Path to the Docker daemon socket. |
| `runtime` | OCI runtime (`runc`, `runsc` for gVisor, etc.). |
| `memory_limit` | Per-container memory cap in bytes. |

For Rust repos, use a Rust image (e.g. `rust:1.93`) so `cargo` and `rustc` exist inside the sandbox.
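For reproducibility, the image can be pinned by digest; a sketch, where the digest is a placeholder and 512 MiB is just an example cap:

```toml
[docker]
image        = "rust:1.93@sha256:<digest>"   # placeholder; substitute a real digest
memory_limit = 536870912                     # 512 MiB = 512 * 1024 * 1024 bytes
```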


## [[ghosts]]

Each ghost is a named sub-agent. You can define multiple.

```toml
[[ghosts]]
name        = "coder"
description = "Multi-file coding tasks with git integration."
tools       = ["file_read", "file_write", "shell", "git", "gh"]
strategy    = "code"
soul_file   = "~/.sparks/souls/coder.md"   # optional
```

| Key | Type | Description |
|---|---|---|
| `name` | string | Unique identifier; used in routing and the CLI |
| `description` | string | Used by the classifier to decide which ghost handles a task |
| `tools` | list | Allowlisted tools this ghost may use |
| `strategy` | `"code"` or `"react"` | Execution pipeline (see Execution-Strategies) |
| `soul_file` | path (optional) | Persona markdown injected into the system prompt. Keep under ~2K tokens. |

**Important:** `strategy` must be exactly `"code"` or `"react"`. Any other value fails silently at dispatch.

Custom profiles can also live in `~/.sparks/ghosts/*.toml`. Disable home profiles for reproducible runs:

```shell
SPARKS_DISABLE_HOME_PROFILES=1 cargo run -- ghosts
```
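As a sketch, a second ghost using the `react` strategy might look like this; the name, description, and tool list are illustrative:

```toml
[[ghosts]]
name        = "researcher"
description = "Read-only code questions and repository summaries."
tools       = ["file_read", "shell"]
strategy    = "react"   # must be exactly "code" or "react"
```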

## [memory]

```toml
[memory]
enabled         = true
half_life_days  = 30       # recency decay half-life
dedup_threshold = 0.92     # cosine similarity threshold for deduplication
max_results     = 10       # max memories injected per task
```

| Key | Description |
|---|---|
| `enabled` | Toggle the memory subsystem |
| `half_life_days` | How quickly old memories lose relevance weight |
| `dedup_threshold` | Cosine similarity above which a new memory is considered a duplicate |
| `max_results` | Maximum number of memories retrieved and injected into each task context |
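Assuming the usual exponential half-life form, a memory's recency weight after $\Delta t$ days would be:

$$w(\Delta t) = 0.5^{\,\Delta t / t_{1/2}}$$

where $t_{1/2}$ is `half_life_days`. With the default of 30, a 30-day-old memory keeps half its original weight and a 60-day-old memory a quarter.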

## [embedding]

```toml
[embedding]
enabled   = true
model_dir = "~/.sparks/models/all-MiniLM-L6-v2"
```

The ONNX embedding model lives locally; no external embedding API is used. Run `doctor` to verify the model is present.


## [db]

```toml
[db]
path = "~/.sparks/sparks.db"
```

A single SQLite file. It stores memories, tool usage, KPI snapshots, ticket intake state, and session activity.


## [openai_api]

OpenAI-compatible API server (disabled by default).

```toml
[openai_api]
enabled             = false
bind                = "127.0.0.1:8787"
api_key_env         = "SPARKS_OPENAI_API_KEY"
principal           = "self"
requests_per_minute = 120
burst               = 30
```

See OpenAI-Compatible-API for full details.


## [mcp]

```toml
[mcp]
enabled            = true
discovery_ttl_secs = 60

[[mcp.servers]]
name                  = "linear"
enabled               = true
transport             = "stdio"
command               = "npx"
args                  = ["-y", "@modelcontextprotocol/server-linear"]
env                   = ["LINEAR_API_KEY"]
timeout_secs          = 30
reconnect_delay_secs  = 5
requires_confirmation = true
allowed_tools         = ["search_documents", "get_issue"]
```

See MCP-Integration for the full reference.


## [ticket_intake]

```toml
[ticket_intake]
enabled = true
sources = ["linear"]

[ticket_intake.linear]
enabled            = true
poll_interval_secs = 60
project_ids        = ["PROJECT_ID"]

[ticket_intake.webhook]
enabled = false
bind    = "127.0.0.1:8788"
```

See Ticket-Intake for the full reference.


## [langfuse]

```toml
[langfuse]
enabled = false
```

Set `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, and `LANGFUSE_BASE_URL` to enable it. No errors are emitted if they are absent; tracing is silently skipped.
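For example, in a `.env` file (the key values are placeholders; the base URL shown is the Langfuse cloud endpoint, so substitute your own host if self-hosting):

```shell
# placeholders; fill in your real Langfuse credentials
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com
```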


## [heartbeat]

```toml
[heartbeat]
interval_secs = 900   # 15 minutes
```

## [mood]

Controls the experimental personality system.

```toml
[mood]
enabled             = true
energy_peak_hour    = 10     # hour (0-23) when energy peaks
half_life_hours     = 4      # energy decay half-life
drift_interval_secs = 3600
```

## [manager]

```toml
[manager]
sensitive_patterns    = ["rm -rf", "DROP TABLE"]
auto_approve_patterns = []   # per-ghost pattern overrides
```
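A sketch extending the defaults with additional patterns to flag; the added entries are illustrative, not shipped defaults:

```toml
[manager]
sensitive_patterns    = ["rm -rf", "DROP TABLE", "git push --force"]
auto_approve_patterns = []
```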

## [prompt_scanner]

```toml
[prompt_scanner]
enabled   = true
mode      = "flag_only"   # "flag_only" | "block"
threshold = 0.7

[[prompt_scanner.allowlist]]
type  = "repo"
value = "emberloom/sparks"
```

See Sandboxing-and-Safety for the full scanner documentation.


## Runtime Knobs

More than 30 runtime-tunable values in `RuntimeKnobs` can be changed without a restart. Key ones:

| Knob | Description |
|---|---|
| `spontaneity` | Controls autonomous initiative level (0.0–1.0) |
| `quiet_hours_start` / `quiet_hours_end` | Timezone-aware pulse suppression window |
| `pulse_rate_limit` | Max non-urgent pulses per hour (default: 4) |

## Full Example

See `config.example.toml` in the repository for a fully annotated configuration with all sections and commented-out options.
