Worker-local runtime adapter setup for QUESTPIE Autopilot. Last updated: 2026-04-07 (Pass 25.8)
Workers execute runs using runtime adapters — CLI tools that run on the worker machine. The orchestrator does not own or install runtimes. Each worker machine needs at least one runtime binary installed and authenticated locally.
Supported runtimes:
| Runtime | Binary | Default | Status |
|---|---|---|---|
| Claude Code | `claude` | Yes | Full adapter, production-tested |
| Codex | `codex` | No | V1 adapter, functional |
| OpenCode | `opencode` | No | V1 adapter, functional |
Claude Code is the default and most-tested runtime. Codex and OpenCode are functional V1 adapters with documented caveats.
Install:

```bash
npm install -g @anthropic-ai/claude-code
```

Verify with upstream docs if install commands change: https://docs.anthropic.com/en/docs/claude-code/setup
Pick one:
```bash
# Option A: Interactive OAuth (recommended)
claude login

# Option B: API key
export ANTHROPIC_API_KEY=sk-ant-...
```

Verify the setup:

```bash
# Check binary is on PATH
which claude

# Check from Autopilot
autopilot doctor --offline --require-runtime --runtimes claude-code
```

How the adapter works:

- The `claude` binary executes in non-interactive mode (`claude -p "prompt" --output-format json`)
- Sessions are persisted locally on the worker machine
- Same-worker continuation uses Claude's `--resume` flag with stored session IDs
- Git worktree isolation is handled by the worker before spawning the runtime
- MCP tools are injected via the `--mcp-config` flag — no project config modification needed
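To make the flags above concrete, the invocation a worker assembles can be sketched as follows. This is an illustrative sketch, not Autopilot's actual worker code, and `buildClaudeArgs` is a hypothetical helper:

```typescript
// Illustrative sketch (not Autopilot's actual worker implementation):
// building the argv for a non-interactive Claude Code run with an
// injected MCP config.
function buildClaudeArgs(prompt: string, mcpConfigPath: string): string[] {
  return [
    '-p', prompt,                  // non-interactive prompt mode
    '--output-format', 'json',     // structured output for the worker to parse
    '--mcp-config', mcpConfigPath, // inject MCP servers without touching project config
  ]
}

// The worker would then spawn `claude` with these args inside the run's worktree.
```

Because MCP config is passed as a flag, the project's own configuration files are never touched for Claude Code runs.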
The worker machine is responsible for:

- Runtime binary installation
- Anthropic API key or OAuth session
- Local Claude session state
- Runtime process lifecycle (worker manages spawn/kill)
Install:

```bash
npm install -g @openai/codex
```

Verify with upstream docs if install commands change: https://github.com/openai/codex
Authenticate:

```bash
export OPENAI_API_KEY=sk-...
```

Verify the setup:

```bash
which codex
autopilot doctor --offline --require-runtime --runtimes codex
```

How the adapter works:

- The `codex` binary executes via `codex exec --json "prompt"` with JSONL event streaming
- Runs in unattended mode (`--ask-for-approval never`)
- Sessions are persisted in `~/.codex/sessions/`
- Session resume uses `codex exec resume <session_id> "prompt"`
- Git worktree isolation is handled by the worker before spawning
Codex reads MCP server configuration from `.codex/config.toml` in the project directory. The Autopilot worker injects MCP config by:
- Backing up the existing `.codex/config.toml` in the run worktree (if present)
- Writing Autopilot's MCP server config into `.codex/config.toml`
- Running the Codex adapter
- Restoring the original config on cleanup
This means: if you have a custom `.codex/config.toml` in your project, it will be temporarily replaced during Autopilot runs and restored afterward. This is a documented V1 tradeoff — not a bug.
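The backup/replace/restore sequence amounts to a try/finally around the adapter run. A minimal sketch under that assumption — `withInjectedConfig` is a hypothetical helper, not Autopilot's actual worker code:

```typescript
// Minimal sketch of the backup/replace/restore pattern (illustrative,
// not Autopilot's actual worker implementation).
import { copyFileSync, existsSync, readFileSync, unlinkSync, writeFileSync } from 'node:fs'

function withInjectedConfig(configPath: string, autopilotConfig: string, run: () => void): void {
  const backupPath = configPath + '.autopilot-backup'
  const hadOriginal = existsSync(configPath)
  if (hadOriginal) copyFileSync(configPath, backupPath) // 1. back up existing config
  writeFileSync(configPath, autopilotConfig)            // 2. write Autopilot's MCP config
  try {
    run()                                               // 3. run the adapter
  } finally {
    if (hadOriginal) {
      copyFileSync(backupPath, configPath)              // 4. restore original on cleanup
      unlinkSync(backupPath)
    } else {
      unlinkSync(configPath)                            // no original to restore; remove injected file
    }
  }
}
```

The finally block is what guarantees the project config is restored even if the adapter run throws.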
Config format (TOML):

```toml
# .codex/config.toml
[mcp_servers.autopilot]
command = "bun"
args = ["run", "/path/to/mcp-server/src/index.ts"]

[mcp_servers.autopilot.env]
AUTOPILOT_API_URL = "http://orchestrator:7778"
AUTOPILOT_API_KEY = "worker-secret"
```

Caveats:

- No `--max-turns` flag exists — runs continue until the model decides to stop
- JSONL event types differ from Claude Code's JSON output format
- Token usage is reported via `turn.completed` events, not a single summary
- Model override is wired: set `model` in agent YAML and the worker passes `--model` to Codex
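Because token usage arrives per turn rather than as one summary, an adapter has to accumulate it across the JSONL stream. A hedged sketch — the exact event and field names (`turn.completed`, `usage.input_tokens`, `usage.output_tokens`) are assumptions for illustration, so check the actual Codex output:

```typescript
// Sketch: summing token usage from a Codex-style JSONL event stream.
// Event shape is assumed for illustration, not taken from Codex docs.
function totalTokens(jsonl: string): number {
  let total = 0
  for (const line of jsonl.split('\n')) {
    if (!line.trim()) continue           // skip blank lines in the stream
    const event = JSON.parse(line)
    if (event.type === 'turn.completed' && event.usage) {
      total += (event.usage.input_tokens ?? 0) + (event.usage.output_tokens ?? 0)
    }
  }
  return total
}
```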
Install:

```bash
npm install -g opencode-ai
```

Verify with upstream docs if install commands change: https://opencode.ai/docs/
OpenCode supports multiple providers. Set the API key for your chosen provider:
```bash
# Anthropic (default for many models)
export ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
export OPENAI_API_KEY=sk-...

# Google
export GOOGLE_API_KEY=...
```

Consult OpenCode documentation for provider-specific auth.

Verify the setup:

```bash
which opencode
autopilot doctor --offline --require-runtime --runtimes opencode
```

How the adapter works:

- The `opencode` binary executes via `opencode run "prompt" --format json`
- Session resume uses `opencode run --continue --session <id> "prompt"`
- Model format uses forward-slash notation: `provider/model` (e.g. `anthropic/claude-sonnet-4-5`)
- Git worktree isolation is handled by the worker before spawning
OpenCode reads MCP configuration from `opencode.jsonc` in the project directory. The same backup/replace/restore approach applies as with Codex:
- Back up existing `opencode.jsonc` in the run worktree (if present)
- Write Autopilot's MCP config
- Run the OpenCode adapter
- Restore original config on cleanup
Config format (JSON):
```json
{
  "mcp": {
    "autopilot": {
      "type": "local",
      "command": ["bun", "run", "/path/to/mcp-server/src/index.ts"],
      "environment": {
        "AUTOPILOT_API_URL": "http://orchestrator:7778",
        "AUTOPILOT_API_KEY": "worker-secret"
      }
    }
  }
}
```

Key differences from Claude Code MCP format:

- Field is `"mcp"`, not `"mcpServers"`
- Type is `"local"`, not `"stdio"`
- Command is a single array (command + args), not separate fields
- Environment uses `"environment"`, not `"env"`
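The four differences above are mechanical, so converting between the two shapes is straightforward. A sketch, assuming a stdio-style Claude entry with `command`, `args`, and `env` fields — `toOpenCodeMcp` is a hypothetical helper, not part of Autopilot:

```typescript
// Sketch: mechanically converting Claude Code-style MCP server entries
// into OpenCode's shape, following the four differences listed above.
interface ClaudeMcpServer {
  command: string
  args?: string[]
  env?: Record<string, string>
}

interface OpenCodeMcpServer {
  type: 'local'
  command: string[]
  environment: Record<string, string>
}

function toOpenCodeMcp(
  servers: Record<string, ClaudeMcpServer>,
): { mcp: Record<string, OpenCodeMcpServer> } {
  const mcp: Record<string, OpenCodeMcpServer> = {}
  for (const [name, s] of Object.entries(servers)) {
    mcp[name] = {
      type: 'local',                           // "local", not "stdio"
      command: [s.command, ...(s.args ?? [])], // one array, not separate command/args
      environment: s.env ?? {},                // "environment", not "env"
    }
  }
  return { mcp }                               // top-level "mcp", not "mcpServers"
}
```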
Caveats:

- V1 event streaming is start + completion only (no per-tool granularity)
- Token usage reporting may not be available from all providers
- Session persistence behavior may vary — verify with upstream docs
- Model override is wired: set `model` in agent YAML and the worker passes `--model` to OpenCode (use `modelMap` for provider/model format conversion)
Workers specify their runtime when starting:
```bash
# Default (Claude Code)
autopilot worker start --url http://orchestrator:7778 --token <token>

# Explicit runtime
autopilot worker start --url http://orchestrator:7778 --token <token> --runtime codex
autopilot worker start --url http://orchestrator:7778 --token <token> --runtime opencode
```

Agent YAML can specify a canonical model, provider, and variant:

```yaml
# .autopilot/agents/dev.yaml
id: dev
name: Developer
role: developer
model: claude-sonnet-4       # canonical model name
provider: anthropic          # optional — carried as intent, not yet used for claim routing
variant: extended-thinking   # optional — behavioral hint
```

The orchestrator carries these through to the run. The worker resolves the canonical model to a runtime-specific model string via `modelMap` in runtime config.
If no model is set on the agent, the runtime default is used (no `--model` flag passed). If a model is set but no `modelMap` entry exists, the canonical model name is passed through as-is.
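That fallback chain can be sketched in a few lines; this is an illustrative sketch of the described behavior, and `resolveModel` is a hypothetical name, not the worker's actual function:

```typescript
// Sketch of model resolution: modelMap lookup, else pass the canonical
// name through; undefined means "pass no --model flag at all".
function resolveModel(
  canonical: string | undefined,
  modelMap: Record<string, string> = {},
): string | undefined {
  if (!canonical) return undefined         // no model set: use the runtime default
  return modelMap[canonical] ?? canonical  // mapped if possible, else pass through as-is
}
```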
V1 note: `modelMap` is a programmatic config field on `RuntimeConfig`. The `autopilot worker start` CLI does not expose it as a flag — CLI-started workers pass canonical model names through directly. `modelMap` is available when constructing `WorkerConfig` programmatically (e.g. in custom worker scripts or tests). A CLI/config-file surface may be added in a future pass.
```typescript
// Programmatic worker config example
const config: RuntimeConfig = {
  runtime: 'opencode',
  modelMap: {
    'claude-sonnet-4': 'anthropic/claude-sonnet-4-5',
    'gpt-4o': 'openai/gpt-4o',
  },
}
```

Current limitations:

- Per-runtime capability sandboxing: Beyond prompt-level hints, there is no strict capability subsetting per runtime.
- Cross-runtime parity: Claude Code has richer event streaming and MCP injection than Codex/OpenCode.
- Managed model catalog: No central model registry or automatic model availability checks.
- Variant-specific adapter behavior: `variant` is carried through the contract but does not yet change adapter flags.
Use `autopilot doctor` on worker machines to validate runtime setup:
```bash
# Check all default runtimes (informational)
autopilot doctor --offline

# Require at least one runtime (fail if none found)
autopilot doctor --offline --require-runtime

# Check specific runtimes only
autopilot doctor --offline --runtimes claude-code,codex --require-runtime

# Machine-readable output
autopilot doctor --offline --require-runtime --json
```

Doctor checks for each runtime:

- Binary exists on `PATH` (via `which`)
- At least one supported runtime is available (when `--require-runtime` is set)
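The binary check is essentially a PATH lookup. A sketch of an equivalent check — illustrative only, not doctor's actual implementation:

```typescript
// Sketch: check whether a runtime binary is on PATH, similar in spirit
// to doctor's binary check (not the real implementation).
import { execFileSync } from 'node:child_process'

function binaryOnPath(name: string): boolean {
  try {
    // `which` exits non-zero when the binary is not found,
    // which execFileSync surfaces as a thrown error.
    execFileSync('which', [name], { stdio: 'ignore' })
    return true
  } catch {
    return false
  }
}
```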
Doctor does not check:
- Whether the runtime is authenticated (API key/OAuth)
- Whether MCP config is correct
- Runtime version compatibility
| Aspect | Claude Code | Codex | OpenCode |
|---|---|---|---|
| Install | `npm install -g @anthropic-ai/claude-code` | `npm install -g @openai/codex` | `npm install -g opencode-ai` |
| Auth | OAuth or `ANTHROPIC_API_KEY` | `OPENAI_API_KEY` | Provider-specific API key |
| MCP injection | CLI flag (`--mcp-config`) | Backup/replace `.codex/config.toml` | Backup/replace `opencode.jsonc` |
| Session resume | `--resume <id>` | `codex exec resume <id>` | `--continue --session <id>` |
| Event granularity | Full JSON | JSONL stream | Start + completion |
| Model override | `--model` via agent config | `--model` via agent config | `--model` via agent config + `modelMap` |
| Maturity | Production-tested | V1 functional | V1 functional |
See also:
- Deployment Variants for the full topology guide
- VPS Deployment Runbook for end-to-end deployment walkthrough
- Release Channels for version management