Clarify model routing precedence for Codex, OpenAI, and custom Kimi endpoints#79
Rawdream-Xu wants to merge 1 commit into aristoteleo:main
Conversation
Nanguage
left a comment
Thanks for the PR! The intent to make model routing more predictable is good. However, there are several issues that need to be addressed before merging:
1. Hardcoded model versions
EXPLICIT_MODEL_ALIASES maps to specific model versions like gpt-5.4-mini and gpt-5.4. These will become outdated as models evolve. Aliases should map to provider-level routes or be dynamically resolved, not pinned to specific versions.
2. kimi-for-coding alias is too specific
This is a user-specific custom endpoint name baked into core routing logic. Custom models should be managed through the existing custom_models.json configuration system, not hardcoded in model_selector.py.
3. Redundant normalization calls
normalize_model_choice() is called in both agent.py (_normalize_model_spec) and room.py (set_agent_model). If the agent already normalizes on construction, doing it again in set_agent_model adds unnecessary complexity. Pick one canonical normalization point.
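One way to realize "pick one canonical normalization point" is to funnel every write to the agent's model through a single setter, so the room-side helper delegates instead of normalizing a second time. A minimal sketch, assuming simplified versions of the PR's names (normalize_model_choice, Agent, set_agent_model) with illustrative bodies:

```python
# Hypothetical alias table; the real one lives in model_selector.py.
EXPLICIT_MODEL_ALIASES = {"codex oauth": "codex/gpt-5.4-mini"}

def normalize_model_choice(spec: str) -> str:
    # Stand-in for the PR's real alias/route resolution.
    return EXPLICIT_MODEL_ALIASES.get(spec.strip().lower(), spec)

class Agent:
    def __init__(self, model_spec: str):
        self.set_model(model_spec)

    def set_model(self, model_spec: str) -> None:
        # The single place where normalization happens.
        self.model = normalize_model_choice(model_spec)

def set_agent_model(agent: Agent, model_spec: str) -> None:
    # room.py-side helper: delegate rather than re-normalize.
    agent.set_model(model_spec)
```

With this shape, construction and later model switches cannot diverge, because both paths run through the same setter.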
4. Test mocks the function under test
test_agent_normalizes_explicit_provider_aliases monkeypatches normalize_model_choice itself, so it doesn't actually test the real normalization logic — it only tests that _normalize_model_spec calls the function. Consider testing with real alias resolution instead.
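A sketch of what the reviewer is asking for: exercise the real normalize_model_choice end to end rather than monkeypatching it. The alias table below is illustrative, mirroring the mappings the PR describes:

```python
# Illustrative alias table; the PR's actual table is in model_selector.py.
EXPLICIT_MODEL_ALIASES = {
    "codex oauth": "codex/gpt-5.4-mini",
    "openai chatgpt": "openai/gpt-5.4",
}

def normalize_model_choice(spec: str) -> str:
    return EXPLICIT_MODEL_ALIASES.get(spec.strip().lower(), spec)

def test_agent_normalizes_explicit_provider_aliases():
    # No monkeypatching: the real resolution logic runs.
    assert normalize_model_choice("Codex OAuth") == "codex/gpt-5.4-mini"
    assert normalize_model_choice("openai chatgpt") == "openai/gpt-5.4"
    # Unknown specs pass through unchanged.
    assert normalize_model_choice("mistral/large") == "mistral/large"
```

A test written this way fails if the alias table or the normalization logic regresses, which the monkeypatched version cannot detect.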
Suggestions
- If you want an alias system, make it configurable (e.g., in settings or a config file) rather than hardcoded.
- Remove the kimi-specific alias — let custom endpoints be handled by the existing custom models system.
- Avoid pinning aliases to specific model version strings.
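The "make it configurable" suggestion could look like the following sketch: load the alias table from a settings file at startup instead of hardcoding it in model_selector.py. The file name (model_aliases.json) and schema are assumptions, not part of the PR:

```python
import json
from pathlib import Path

def load_model_aliases(path: str = "model_aliases.json") -> dict[str, str]:
    """Read {"alias": "provider/route"} pairs; empty table if no file exists."""
    p = Path(path)
    if not p.exists():
        return {}
    return json.loads(p.read_text())

def normalize_model_choice(spec: str, aliases: dict[str, str]) -> str:
    # Resolution logic stays generic; the data lives in configuration.
    return aliases.get(spec.strip().lower(), spec)
```

This keeps version-specific strings out of the code entirely: when a model is renamed or retired, only the config file changes.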
Summary
This PR makes model routing more predictable when switching between Codex OAuth, OpenAI ChatGPT/API models, and configured custom Kimi coding endpoints.
It introduces a single routing rule:
Explicit provider/model routes win
Known user-facing aliases are normalized into explicit routes
Quality/capability tags like high or normal,tools continue to use ModelSelector
Automatic provider selection stays as the final fallback
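The four-step precedence above can be sketched as a single resolution function. Helper names (is_explicit_route, EXPLICIT_MODEL_ALIASES, the "selector:" stand-in for ModelSelector dispatch) are illustrative, not the PR's exact API:

```python
EXPLICIT_MODEL_ALIASES = {"codex oauth": "codex/gpt-5.4-mini"}
QUALITY_TAGS = {"high", "normal", "tools"}

def is_explicit_route(spec: str) -> bool:
    # "provider/model" strings are treated as fully resolved.
    return "/" in spec

def resolve_model(spec: str) -> str:
    spec = spec.strip().lower()
    if is_explicit_route(spec):               # 1. explicit routes win
        return spec
    if spec in EXPLICIT_MODEL_ALIASES:        # 2. known aliases normalize
        return EXPLICIT_MODEL_ALIASES[spec]
    if set(spec.split(",")) & QUALITY_TAGS:   # 3. tags go to ModelSelector
        return f"selector:{spec}"             # stand-in for ModelSelector
    return "auto"                             # 4. automatic fallback
```

Because each rule short-circuits, a given spec can only ever take one path, which is the determinism the PR is after.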
What changed
Added alias normalization in model_selector.py
codex oauth -> codex/gpt-5.4-mini
openai chatgpt -> openai/gpt-5.4
kimi-for-coding -> custom_anthropic/ when available
Canonicalized model specs during agent initialization in agent.py
Persisted normalized model routes in set_agent_model() so chat/team state stays consistent after switching models
Added focused tests for alias normalization and agent initialization behavior
Added a short routing reference doc:
docs/model-routing-precedence.md
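The conditional kimi-for-coding mapping ("when available") could be sketched as below: the alias resolves only if custom_models.json actually defines an endpoint, and otherwise passes through untouched. The config schema and route value here are assumptions for illustration:

```python
import json
from pathlib import Path

def resolve_kimi_alias(spec: str, config_path: str = "custom_models.json") -> str:
    if spec != "kimi-for-coding":
        return spec
    p = Path(config_path)
    if not p.exists():
        return spec  # no custom endpoint configured; leave unresolved
    models = json.loads(p.read_text())
    # Look up the configured route, e.g. a custom_anthropic/... endpoint.
    route = models.get("kimi-for-coding")
    return route if route else spec
```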
Why
Before this change, friendly but ambiguous model names could be treated as raw model ids and slip into the wrong request path. In practice this made routing behavior hard to predict, especially when custom endpoints and standard providers were both configured.
This PR keeps the current tag-based auto-selection behavior, while making explicit routes and common aliases deterministic.
Verification
Passed focused tests:
pytest -q
tests/test_model_selector.py::TestModelResolution::test_explicit_aliases_normalize_to_routed_models
tests/test_model_selector.py::TestModelResolution::test_kimi_alias_normalizes_to_configured_custom_endpoint
tests/test_agent.py::test_agent_normalizes_explicit_provider_aliases
tests/test_agent.py::test_blank_model_is_treated_as_implicit_default
Notes
This PR is based on the newer official PantheonOS version.