
@Edison-A-N

Add excludeDefaultOptions config for custom API compatibility

Fixes #5421
Fixes #6473
Related to #2661

Problem

Custom API providers fail when OpenCode automatically injects parameters they don't support:

From #5421:

Unsupported parameter: 'max_tokens' is not supported with this model.
Use 'max_completion_tokens' instead.

From #6473:

litellm.BadRequestError: invalid beta flag

Both errors stem from OpenCode injecting default parameters that the custom API doesn't support:

  • max_tokens / max_completion_tokens
  • temperature, topP, topK
  • thinkingConfig, reasoningEffort, promptCacheKey
  • cache_control in messages
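To make the failure concrete, the sketch below shows the kind of request body that results. The field names follow the list above; the values and exact shape are illustrative, not the actual payload OpenCode builds.

// Hypothetical OpenAI-compatible request with OpenCode's injected defaults.
const request = {
  // Core parameters the endpoint actually needs:
  model: "custom-model",
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant.",
      // Injected cache hint that strict gateways such as LiteLLM reject (#6473)
      cache_control: { type: "ephemeral" },
    },
  ],
  // Injected defaults; some models only accept max_completion_tokens (#5421)
  max_tokens: 4096,
  temperature: 0.7,
  top_p: 1,
}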

Why This Solution is Better

vs PR #5541

  • #5541: Hardcoded fix that swaps max_tokens for max_completion_tokens, and only for Azure reasoning models
  • This PR: Config-driven, works for any custom API

vs #6473 Workarounds

  • Workarounds: Require plugins or LiteLLM hacks
  • This PR: Built-in config option, no external dependencies

Solution

Add excludeDefaultOptions configuration at provider and model levels to disable all default parameter injection.

Config priority: model > provider > default (false)
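
As a sketch of that precedence (hypothetical function and field names, not the actual provider.ts code):

function resolveExcludeDefaultOptions(
  model: { excludeDefaultOptions?: boolean },
  provider: { excludeDefaultOptions?: boolean },
): boolean {
  // A model-level setting wins, then the provider-level one, then the default.
  return model.excludeDefaultOptions ?? provider.excludeDefaultOptions ?? false
}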

Usage

{
  "provider": {
    "custom-api": {
      "excludeDefaultOptions": true,
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "...", "apiKey": "..." },
      "models": {
        "model-1": { "name": "Model 1" },
        "model-2": { "excludeDefaultOptions": false }
      }
    }
  }
}
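
In this example, model-1 inherits excludeDefaultOptions: true from the provider, while model-2 overrides it back to false and keeps the default parameter injection.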

What Gets Filtered

When excludeDefaultOptions: true:

  • ❌ Core generation parameters: temperature, topP, topK, maxOutputTokens (unless explicitly set in the agent config)
  • ❌ Provider options: thinkingConfig, reasoningEffort, promptCacheKey, cache_control, etc.
  • ✅ Core parameters always sent: model, messages, tools, headers
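
A minimal sketch of the filtering step, assuming hypothetical names and shapes (the real logic is split across transform.ts and llm.ts, per the file list below):

// Drop injected defaults and provider options when exclusion is enabled,
// keeping core parameters and anything the agent config set explicitly.
const DEFAULTS = new Set(["temperature", "topP", "topK", "maxOutputTokens"])
const PROVIDER_OPTIONS = new Set(["thinkingConfig", "reasoningEffort", "promptCacheKey", "cache_control"])
const CORE = new Set(["model", "messages", "tools", "headers"])

function filterParams(
  params: Record<string, unknown>,
  exclude: boolean,
  explicitlySet: Set<string>, // keys the agent config set explicitly
): Record<string, unknown> {
  if (!exclude) return params
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(params)) {
    if (CORE.has(key)) { out[key] = value; continue }           // always sent
    if (DEFAULTS.has(key) && !explicitlySet.has(key)) continue  // injected default: drop
    if (PROVIDER_OPTIONS.has(key)) continue                     // provider option: drop
    out[key] = value
  }
  return out
}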

Files Changed

8 files changed: +377, -6

  • packages/opencode/src/config/config.ts - Config schema with JSDoc
  • packages/opencode/src/provider/provider.ts - Config loading with priority
  • packages/opencode/src/provider/transform.ts - Filter provider options
  • packages/opencode/src/session/llm.ts - Filter core parameters
  • packages/opencode/test/provider/provider.test.ts - 3 new tests
  • packages/opencode/test/provider/transform.test.ts - 11 new tests
  • packages/sdk/js/src/gen/types.gen.ts - SDK types
  • packages/sdk/js/src/v2/gen/types.gen.ts - SDK v2 types

@github-actions
Contributor

Hey! Your PR title "Feat/exclude default options" doesn't follow conventional commit format.

Please update it to start with one of:

  • feat: or feat(scope): new feature
  • fix: or fix(scope): bug fix
  • docs: or docs(scope): documentation changes
  • chore: or chore(scope): maintenance tasks
  • refactor: or refactor(scope): code refactoring
  • test: or test(scope): adding or updating tests

Where scope is the package name (e.g., app, desktop, opencode).

See CONTRIBUTING.md for details.

@github-actions
Contributor

The following comment was made by an LLM, it may be inaccurate:

Found Related PR

PR #5541 - "fix: use max_completion_tokens for Azure reasoning models"

Why it's related: #5541 addresses a similar problem with a hardcoded fix specific to Azure reasoning models (max_tokens vs max_completion_tokens). The current PR (#7760) provides a more generalized, config-driven solution that covers that case plus many others. The PR description itself compares against #5541, noting that #5541 only fixes Azure reasoning models while #7760 works for any custom API through configuration.

No duplicate PRs found addressing the same excludeDefaultOptions feature or the broader custom API parameter filtering approach.

@Edison-A-N Edison-A-N changed the title Feat/exclude default options feat: add excludeDefaultOptions for custom API compatibility Jan 11, 2026

Development

Successfully merging this pull request may close these issues.

  • applyCaching called too freely, causes LiteLLM to fail
  • @ai-sdk/openai-compatible max_tokens error for GPT 5.x
