OpenCode supports ACP natively via the acp subcommand — no adapter needed.
OpenCode supports 75+ LLM providers via the AI SDK, making it the most flexible backend for OpenAB. Users bring their own provider — no separate API keys per backend needed.
```
┌──────────┐  Discord   ┌────────┐  ACP stdio  ┌──────────┐    ┌───────────────────┐
│ Discord  │◄─────────► │ OpenAB │◄──────────► │ OpenCode │───►│   LLM Providers   │
│  Users   │  Gateway   │ (Rust) │  JSON-RPC   │  (ACP)   │    │ ┌───────────────┐ │
└──────────┘            └────────┘             └──────────┘    │ │ Ollama Cloud  │ │
                                                    │          │ │ OpenAI        │ │
                                      opencode.json │          │ │ Anthropic     │ │
                                      sets model    │          │ │ AWS Bedrock   │ │
                                                               │ │ GitHub Copilot│ │
                                                               │ │ Groq          │ │
                                                               │ │ OpenRouter    │ │
                                                               │ │ Ollama (local)│ │
                                                               │ │ 75+ more...   │ │
                                                               │ └───────────────┘ │
                                                               └───────────────────┘
```
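The ACP link in the diagram is newline-delimited JSON-RPC 2.0 over the agent process's stdin/stdout. As a rough sketch of the wire format (the `initialize` method name and params are assumptions based on the Agent Client Protocol, not taken from OpenAB's source):

```shell
# Print one ACP-style JSON-RPC request the way a client would write it to the
# agent's stdin: a single JSON object terminated by a newline.
# Method name and params are illustrative assumptions, not verified against ACP.
printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":1}}'
```

In a real deployment this line would be written to `opencode acp`'s stdin, and the response read back from its stdout.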
```bash
docker build -f Dockerfile.opencode -t openab-opencode:latest .
```

The image installs `opencode-ai` globally via npm on `node:22-bookworm-slim`.
```bash
helm install openab openab/openab \
  --set agents.kiro.enabled=false \
  --set agents.opencode.enabled=true \
  --set agents.opencode.command=opencode \
  --set 'agents.opencode.args={acp}' \
  --set agents.opencode.image=ghcr.io/openabdev/openab-opencode:latest \
  --set agents.opencode.discord.botToken="$DISCORD_BOT_TOKEN" \
  --set-string 'agents.opencode.discord.allowedChannels[0]=YOUR_CHANNEL_ID' \
  --set agents.opencode.workingDir=/home/node \
  --set agents.opencode.pool.maxSessions=3
```

Set `agents.kiro.enabled=false` to disable the default Kiro agent.
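For repeatable installs, the same overrides can live in a values file. This sketch simply mirrors the `--set` flags above; the key names are transcribed from those flags, not checked against the chart's full schema:

```yaml
# values.opencode.yaml - mirrors the --set flags above
agents:
  kiro:
    enabled: false
  opencode:
    enabled: true
    command: opencode
    args: ["acp"]
    image: ghcr.io/openabdev/openab-opencode:latest
    discord:
      botToken: "YOUR_DISCORD_BOT_TOKEN"
      allowedChannels:
        - "YOUR_CHANNEL_ID"
    workingDir: /home/node
    pool:
      maxSessions: 3
```

Install with `helm install openab openab/openab -f values.opencode.yaml`.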
```toml
[agent]
command = "opencode"
args = ["acp"]
working_dir = "/home/node"
```

Authenticate a provider inside the pod:

```bash
kubectl exec -it deployment/openab-opencode -- opencode auth login
```

Follow the browser OAuth flow, then restart the pod:
```bash
kubectl rollout restart deployment/openab-opencode
```

OpenCode supports multiple providers. Add any of them via `opencode auth login`:
- Ollama Cloud: free tier available, models like `gemini-3-flash-preview`, `qwen3-coder-next`, `deepseek-v3.2`
- OpenCode Zen / Go: tested and verified models provided by the OpenCode team (e.g. `opencode/big-pickle`, `opencode/gpt-5-nano`)
- OpenAI, Anthropic, AWS Bedrock, GitHub Copilot, Groq, OpenRouter, and 75+ more
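Providers that are not covered by `opencode auth login` (for example, a local Ollama instance reachable from the pod) can be declared in `opencode.json`. The shape below follows OpenCode's custom-provider convention as commonly documented; treat the exact keys and the endpoint URL as assumptions and check the upstream docs:

```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://ollama.internal:11434/v1" },
      "models": { "llama3.1": {} }
    }
  }
}
```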
To list all available models across configured providers:
```bash
kubectl exec deployment/openab-opencode -- opencode models
```

For example, to authenticate with Ollama Cloud:

```bash
kubectl exec -it deployment/openab-opencode -- opencode auth login -p "ollama cloud"
```

Create `opencode.json` in the working directory (`/home/node`). OpenCode reads it as project-level config:
```bash
kubectl exec deployment/openab-opencode -- sh -c \
  'echo "{\"model\": \"ollama-cloud/gemini-3-flash-preview\"}" > /home/node/opencode.json'
```

This file is on the PVC and persists across restarts.
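The nested quoting in the `echo` above is easy to get wrong. One defensive approach (illustrative, not part of OpenAB's docs) is to build the file locally with a heredoc and validate it as JSON before copying it into the pod:

```shell
# Write the config with a heredoc (no escaping needed), then check that it
# parses as JSON before it goes anywhere near the pod.
cat > /tmp/opencode.json <<'EOF'
{"model": "ollama-cloud/gemini-3-flash-preview"}
EOF
python3 -m json.tool /tmp/opencode.json > /dev/null && echo "valid JSON"
```

From there, `kubectl cp` can place the validated file at `/home/node/opencode.json` in the pod.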
```bash
kubectl rollout restart deployment/openab-opencode
kubectl logs deployment/openab-opencode --tail=5
# Should show: discord bot connected
```

@mention the bot in your Discord channel to start chatting.
- Tool authorization: OpenCode handles tool authorization internally and never emits `session/request_permission`; all tools run without user confirmation, equivalent to `--trust-all-tools` on other backends.
- Model selection: set the default model via `opencode.json` in the working directory using the `provider/model` format (e.g. `ollama-cloud/gemini-3-flash-preview`).
- Frequent releases: OpenCode releases very frequently (often daily). The pinned version in `Dockerfile.opencode` should be bumped via a dedicated PR when an update is needed.