Problem
OpenCode supports:
- Provider fallback only when the model ID is the same
- Static agent-level model overrides
There is no way to define fallback between different models, e.g.:
“If model A errors or rate-limits → automatically retry with model B”
This causes long-running agent workflows to fail on transient provider/model issues and forces users to rely on external routers or proxies (like litellm).
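Today the only way to get this behavior is to wrap the model calls externally (or route through a proxy such as litellm). A rough sketch of the kind of wrapper users end up writing, where callModel and the model IDs are purely illustrative and not part of any OpenCode API:

// Hypothetical external wrapper; callModel stands in for whatever actually
// issues the request (an SDK client, a proxy, etc.).
type ModelId = string;

async function completeWithFallback(
  callModel: (model: ModelId, prompt: string) => Promise<string>,
  models: ModelId[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await callModel(model, prompt); // first model that succeeds wins
    } catch (err) {
      lastError = err;                       // remember the failure, try the next model
    }
  }
  throw lastError;                           // the whole chain failed
}

This works, but it lives outside OpenCode's auth and agent plumbing, which is exactly the problem described above.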
Request
Add first-class model fallback support to OpenCode.
Example (global)
{
  "models": {
    "coder": {
      "fallback": [
        "anthropic/claude-3.5-sonnet",
        "openai/gpt-4o",
        "deepseek/deepseek-r1"
      ]
    }
  }
}
Example (agent-level)
{
  "agents": {
    "build": {
      "model": {
        "fallback": ["claude-sonnet", "gpt-4o-mini"]
      }
    }
  }
}
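For illustration only, the proposed config could be typed roughly like this; the names, and the assumption that an agent-level chain overrides the global one, are mine rather than an existing schema:

// Hypothetical types for the proposed config; not part of any existing OpenCode schema.
interface ModelProfile {
  fallback: string[];    // ordered model IDs, tried left to right
  maxRetries?: number;   // optional cap on retries before giving up (see Behavior below)
}

interface FallbackConfig {
  models?: Record<string, ModelProfile>;                        // global, reusable chains (e.g. "coder")
  agents?: Record<string, { model?: string | ModelProfile }>;   // per-agent override
}

Presumably an agent-level fallback list would take precedence over a global profile, in the same way static agent-level model overrides already take precedence.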
Behavior
- Switch models on: rate limits, provider/model unavailability, 5xx errors
- Do not retry on prompt or validation errors
- Optional retry limit (see the sketch below)
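A minimal sketch of the error classification this implies; the status codes and the model_not_found code are assumptions based on typical provider APIs, not OpenCode internals:

// Sketch only: should a failure trigger a switch to the next model in the chain?
function shouldFallback(err: { status?: number; code?: string }): boolean {
  if (err.status === 429) return true;                             // rate limited
  if (err.status !== undefined && err.status >= 500) return true;  // provider/model outage
  if (err.code === "model_not_found") return true;                 // model unavailable (assumed error code)
  return false;  // prompt/validation errors (4xx) surface to the user instead of retrying
}

The core loop would consult something like this before moving to the next entry in the fallback list, and stop once the optional retry limit is exhausted.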
Why in Core
- Model routing is an orchestration concern
- External routers break plugin auth flows and agent semantics
- Complements existing provider-order routing (e.g. Feature Request: Add Vercel AI Gateway Provider Routing Support (only and order filters) #2153)
Status
- No existing plugin or PR implements true model fallback
Thanks!
{ "models": { "coder": { "fallback": [ "anthropic/claude-3.5-sonnet", "openai/gpt-4o", "deepseek/deepseek-r1" ] } } }