# OpenAI Compatible API
Sparks exposes an OpenAI-compatible HTTP API, allowing any OpenAI-compatible client (IDE plugins, chat UIs, API wrappers) to use Sparks as a drop-in backend.
| Method | Path | Description |
|---|---|---|
| GET | `/v1/models` | List available models (ghosts + `"sparks"`) |
| POST | `/v1/chat/completions` | Submit a chat completion request |
Enable the API in `config.toml`:

```toml
[openai_api]
enabled = true
bind = "127.0.0.1:8787"
api_key_env = "SPARKS_OPENAI_API_KEY"
principal = "self"
requests_per_minute = 120
burst = 30
```

Set the API key:

```shell
export SPARKS_OPENAI_API_KEY=your-secret-key
```

Start Sparks; the API server starts alongside the main process.
The API uses bearer token authentication. Pass your configured API key as:

```
Authorization: Bearer <SPARKS_OPENAI_API_KEY>
```
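The check itself is straightforward. Below is a minimal sketch of bearer-token validation in Python; the function name and shape are illustrative only (the real implementation lives in `src/openai_auth.rs`):

```python
import hmac

def check_bearer(headers: dict, expected_key: str) -> bool:
    """Hypothetical sketch: accept only `Authorization: Bearer <key>`."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    if scheme != "Bearer" or not token:
        return False
    # Constant-time comparison avoids leaking key material via timing.
    return hmac.compare_digest(token, expected_key)

key = "your-secret-key"  # as read from SPARKS_OPENAI_API_KEY
print(check_bearer({"Authorization": "Bearer your-secret-key"}, key))  # True
print(check_bearer({"Authorization": "Basic abc"}, key))               # False
```

Any request that fails this check should be rejected before touching the model-routing logic.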
| Config key | Default | Description |
|---|---|---|
| `requests_per_minute` | 120 | Sustained request rate limit |
| `burst` | 30 | Short-burst allowance above the sustained rate |

Requests exceeding the limit receive `429 Too Many Requests`.
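These two knobs describe a classic token-bucket limiter. A minimal sketch under that assumption (the class and its interface are illustrative, not the actual Sparks code):

```python
class TokenBucket:
    """Token bucket matching the config semantics: refill at the
    sustained rate, with `burst` tokens of capacity on top."""

    def __init__(self, requests_per_minute: int = 120, burst: int = 30,
                 now: float = 0.0):
        self.rate = requests_per_minute / 60.0  # tokens refilled per second
        self.capacity = float(burst)            # short-burst allowance
        self.tokens = self.capacity             # start with a full bucket
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller responds 429 Too Many Requests

bucket = TokenBucket(requests_per_minute=120, burst=30)
# 30 instantaneous requests drain the burst; the 31st is rejected.
results = [bucket.allow(0.0) for _ in range(31)]
print(results[29], results[30])  # True False
```

At the default settings, a drained bucket earns back one request every 0.5 seconds (120 per minute).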
The `model` parameter in your request maps to Sparks internals:
| Model value | Routes to |
|---|---|
| `sparks` | Default ghost (classifier picks) |
| `sparks/coder` | `coder` ghost directly |
| `sparks/scout` | `scout` ghost directly |
| Any other model ID | Default ghost |
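The table above reduces to a small dispatch function. A sketch in Python, using the ghost IDs documented here ("default" is a placeholder label for the classifier-picked ghost; the real routing lives in `src/openai_api.rs`):

```python
KNOWN_GHOSTS = {"coder", "scout"}  # ghost IDs from the routing table

def route_model(model: str) -> str:
    """Map an OpenAI-style model ID to a Sparks ghost."""
    if model.startswith("sparks/"):
        ghost = model.split("/", 1)[1]
        if ghost in KNOWN_GHOSTS:
            return ghost
    # "sparks" and any unrecognized model ID fall through to the
    # default ghost, where the classifier picks.
    return "default"

print(route_model("sparks/coder"))  # coder
print(route_model("gpt-4o"))        # default
```

Note the fallback: unknown IDs are routed rather than rejected, so generic clients that hardcode a model name still work.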
Advertised models (returned by `/v1/models`) default to `"sparks"`, the current configured model, and all ghost IDs. Override with:

```toml
[openai_api]
advertised_models = ["sparks", "sparks/coder", "sparks/scout"]
```

Example request:

```shell
curl http://127.0.0.1:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-key" \
  -d '{
    "model": "sparks",
    "messages": [{"role": "user", "content": "Fix the failing test in auth.rs"}]
  }'
```

In Cursor settings, set:
- Base URL: `http://127.0.0.1:8787/v1`
- API key: your configured `SPARKS_OPENAI_API_KEY`
- Model: `sparks` (or a specific ghost like `sparks/coder`)
Sparks does not implement the full OpenAI spec. Most unsupported options return explicit errors:
| Feature | Behavior |
|---|---|
| `stream: true` | Returns 400 with error JSON explaining streaming is not supported |
| Function calling / tools | Returns 400 with error JSON |
| `logprobs`, `top_logprobs` | Ignored silently |
These deviations are intentional: with the exception of the silently ignored `logprobs` fields, Sparks returns explicit errors rather than quietly dropping unsupported options.
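The table above amounts to a small pre-flight validation step. A hypothetical sketch of that handling in Python (function name, error JSON wording, and return shape are assumptions, not the actual server code):

```python
def validate_request(body: dict):
    """Apply the documented deviations to an incoming request body.

    Returns (status, error_json); error_json is None on success.
    """
    if body.get("stream"):
        return 400, {"error": {"message": "streaming is not supported"}}
    if body.get("tools") or body.get("functions"):
        return 400, {"error": {"message": "function calling / tools are not supported"}}
    # logprobs / top_logprobs are the one silent exception: drop them.
    body.pop("logprobs", None)
    body.pop("top_logprobs", None)
    return 200, None

status, err = validate_request({"model": "sparks", "stream": True})
print(status)  # 400
```

Rejecting early like this keeps unsupported options from reaching a ghost that would mishandle them.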
- Always bind to `127.0.0.1` (loopback) when running locally; do not expose the server on `0.0.0.0` without proper network controls
- The API key must be set via an environment variable; inlining it in the config is blocked by default
- All requests are logged to the observer event stream
- `src/openai_api.rs`: HTTP server, routing, rate limiting
- `src/openai_auth.rs`: bearer token validation
- `docs/openai-compatible-api.md`: design document with the full deviation list