feat: add MiniMax as first-class LLM provider #190
octo-patch wants to merge 1 commit into plexe-ai:main
Conversation
Add first-class support for MiniMax models via the `minimax/` prefix. Users can now configure agents to use MiniMax models (M2.7, M2.5, etc.) with zero-config auto-routing: just set `MINIMAX_API_KEY` and use model IDs like `"minimax/MiniMax-M2.7"`.

Changes:
- `config.py`: add `MINIMAX_API_BASE` and `MINIMAX_MODELS` constants and `minimax/` prefix auto-routing in `get_routing_for_model()`
- `litellm_wrapper.py`: rewrite the `minimax/` prefix to `openai/` for LiteLLM compatibility; clamp temperature to MiniMax's [0, 1.0] range
- `config.yaml.template`: add MiniMax configuration examples
- `README.md`: document MiniMax as a supported provider with a setup guide

Tests: 10 unit + 2 config integration + 1 API integration test
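A minimal sketch of the auto-routing rule described above (the helper's return shape is an assumption; only the `minimax/` prefix, the base URL, and the env-var name come from the PR):

```python
import os

MINIMAX_API_BASE = "https://api.minimax.io/v1"  # constant named in the PR


def get_routing_for_model(model_id: str) -> dict:
    # Sketch only: the real function also consults explicit per-model
    # mappings and a configured default around this step.
    if model_id.startswith("minimax/"):
        return {
            "api_base": MINIMAX_API_BASE,
            "api_key": os.environ.get("MINIMAX_API_KEY"),
        }
    return {}  # assumed fall-through to default routing
```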
Greptile Summary

This PR adds MiniMax as a first-class LLM provider by extending the routing logic in `plexe/config.py` and the model-ID handling in `plexe/utils/litellm_wrapper.py`.
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| plexe/config.py | Adds MiniMax routing constants and auto-routing logic in get_routing_for_model; one misleading warning message and one unused MINIMAX_MODELS constant found. |
| plexe/utils/litellm_wrapper.py | Adds minimax/ → openai/ model ID rewriting and temperature clamping in PlexeLiteLLMModel.__init__; logic is correct and integrates cleanly with the existing routing pipeline. |
| tests/unit/utils/test_litellm_wrapper.py | Good coverage of model ID rewriting and temperature clamping; violates the project style guide by importing LiteLLMModel inside 10 test method bodies instead of at module top level. |
| tests/unit/test_config.py | Thorough tests for auto-routing, explicit mapping override, no-key edge case, and YAML/env loading; no issues found. |
| tests/unit/test_minimax_integration.py | Integration tests for YAML roundtrip, mixed-provider config, and live API completion (correctly guarded by pytest.skip when MINIMAX_API_KEY is absent); no issues found. |
| README.md | Adds MiniMax setup guide and API-key export snippet; documentation is accurate and well-placed. |
| config.yaml.template | Adds commented MiniMax config example with all four model variants and a note about litellm_drop_params; no issues found. |
Reviews (1): Last reviewed commit: "feat: add MiniMax as first-class LLM pro..."
`tests/unit/utils/test_litellm_wrapper.py`, lines 22-25:

```python
model = PlexeLiteLLMModel(model_id="minimax/MiniMax-M2.7", temperature=0.2)
# The super().__init__ was patched, so check the call args
from plexe.utils.litellm_wrapper import LiteLLMModel
```

**Imports inside test functions violate style guide**

The project's style guide (`CLAUDE.md`) states: "Imports: ALWAYS at top level in order: stdlib, third-party, local; **NEVER inside functions**." The `from plexe.utils.litellm_wrapper import LiteLLMModel` import appears inside 10 separate test methods instead of at the module top level.

Move the import to the top of the file:

```suggestion
from unittest.mock import patch, MagicMock
import pytest
from plexe.utils.litellm_wrapper import LiteLLMModel, PlexeLiteLLMModel
```

Then remove the repeated `from plexe.utils.litellm_wrapper import LiteLLMModel` lines inside each test method body.

**Context Used:** CLAUDE.md ([source](https://app.greptile.com/review/custom-context?memory=72154d9b-42a4-4734-9db8-1821c15d8b84))
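For illustration, here is a self-contained version of one such test with the imports at module top level. `LiteLLMModel` and `PlexeLiteLLMModel` are stand-ins sketched from the PR description (the real test would import them from `plexe.utils.litellm_wrapper` at the top of the file, per `CLAUDE.md`):

```python
from unittest.mock import patch


# Stand-ins for the real classes; their behavior is sketched from the PR
# description (prefix rewrite + temperature clamp), not copied from the repo.
class LiteLLMModel:
    def __init__(self, model_id: str, temperature: float = 0.0):
        self.model_id = model_id
        self.temperature = temperature


class PlexeLiteLLMModel(LiteLLMModel):
    def __init__(self, model_id: str, temperature: float = 0.0):
        if model_id.startswith("minimax/"):
            # Rewrite to the OpenAI-compatible route and clamp to [0, 1.0].
            model_id = "openai/" + model_id[len("minimax/"):]
            temperature = max(0.0, min(temperature, 1.0))
        super().__init__(model_id=model_id, temperature=temperature)


def test_minimax_prefix_rewritten_to_openai():
    # Patch the parent __init__ and inspect what the wrapper forwards to it.
    with patch.object(LiteLLMModel, "__init__", return_value=None) as init:
        PlexeLiteLLMModel(model_id="minimax/MiniMax-M2.7", temperature=0.2)
        assert init.call_args.kwargs["model_id"] == "openai/MiniMax-M2.7"
```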
`plexe/config.py`, lines 178-184:

```python
MINIMAX_MODELS = {
    "MiniMax-M2.7": {"context_window": 1_000_000},
    "MiniMax-M2.7-highspeed": {"context_window": 1_000_000},
    "MiniMax-M2.5": {"context_window": 204_000},
    "MiniMax-M2.5-highspeed": {"context_window": 204_000},
}
```

**`MINIMAX_MODELS` is dead code — never consumed by production code**

`MINIMAX_MODELS` is defined and exported but is only referenced in the test `test_minimax_models_constant`, which just asserts the constant has certain keys. No production code path (routing, validation, context-window capping, etc.) reads from it. Either wire it into `get_routing_for_model` / `PlexeLiteLLMModel` to serve a real purpose (e.g. rejecting unknown model IDs or surfacing context-window metadata), or remove it to keep the codebase lean per the project's "prefer deleting code over adding code" principle.
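If the "wire it in" option is taken, one possible sketch is a validator that makes `MINIMAX_MODELS` the source of truth for known IDs and context windows (the helper name and error type here are invented for illustration):

```python
MINIMAX_MODELS = {
    "MiniMax-M2.7": {"context_window": 1_000_000},
    "MiniMax-M2.7-highspeed": {"context_window": 1_000_000},
    "MiniMax-M2.5": {"context_window": 204_000},
    "MiniMax-M2.5-highspeed": {"context_window": 204_000},
}


def validate_minimax_model_id(model_id: str) -> int:
    """Reject unknown MiniMax model IDs and return the context window,
    giving MINIMAX_MODELS a real consumer (hypothetical helper)."""
    name = model_id.removeprefix("minimax/")
    if name not in MINIMAX_MODELS:
        raise ValueError(
            f"Unknown MiniMax model '{name}'; expected one of {sorted(MINIMAX_MODELS)}"
        )
    return MINIMAX_MODELS[name]["context_window"]
```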
`plexe/config.py`, lines 551-554:

```python
if provider_name not in config.providers:
    # This should have been caught by validation, but handle gracefully
    logging.getLogger(__name__).warning(
        f"Model '{model_id}' references non-existent provider '{provider_name}'. Using default routing."
    )
```

**Misleading warning message when an explicit mapping references a non-existent provider**

When `model_id` is a `minimax/` model AND its explicit provider mapping resolves to a non-existent provider, the warning says **"Using default routing"** but execution then falls through to the MiniMax auto-routing branch — not the `config.default` path. The log message will mislead anyone debugging the routing decision.

```suggestion
logging.getLogger(__name__).warning(
    f"Model '{model_id}' references non-existent provider '{provider_name}'. "
    "Falling through to next routing step."
)
```
Summary

This PR adds first-class support for MiniMax as an LLM provider in plexe, enabling users to route any agent to MiniMax models via the `minimax/` prefix with zero-config auto-routing.

What's Changed

- `plexe/config.py`: Added `MINIMAX_API_BASE` and `MINIMAX_MODELS` constants and auto-routing logic in `get_routing_for_model()` — when a model ID starts with `minimax/`, it auto-routes to `https://api.minimax.io/v1` with the `MINIMAX_API_KEY` env var
- `plexe/utils/litellm_wrapper.py`: Rewrites the `minimax/` prefix to `openai/` for LiteLLM compatibility and clamps temperature to MiniMax's [0, 1.0] range
- `config.yaml.template`: Added MiniMax configuration examples with available models
- `README.md`: Documented MiniMax as a supported provider with a setup guide in the Multi-Provider LLM Support section

Usage

Available models: `MiniMax-M2.7`, `MiniMax-M2.7-highspeed` (1M context), `MiniMax-M2.5`, `MiniMax-M2.5-highspeed` (204K context).

Design Decisions

- Explicit mappings take precedence: if a user maps `minimax/MiniMax-M2.7` to a custom provider in `routing_config.models`, that mapping is used instead of auto-routing
- The feature integrates with the existing `get_routing_for_model()` → `PlexeLiteLLMModel` pipeline

Test Plan

- Unit tests for model-ID rewriting and temperature clamping (`tests/unit/utils/test_litellm_wrapper.py`)
- Config tests for auto-routing and YAML/env loading (`tests/unit/test_config.py`)
- Integration tests, including a live API completion test guarded by `MINIMAX_API_KEY` (`tests/unit/test_minimax_integration.py`)

7 files changed, 413 additions
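The guarded live-API test in the plan uses `pytest.skip` when `MINIMAX_API_KEY` is absent; the same guard pattern, sketched here with stdlib `unittest` so it stays self-contained (the test body is a placeholder, not the project's actual test):

```python
import os
import unittest


class TestMiniMaxLiveAPI(unittest.TestCase):
    """Guard pattern for live-API tests: skip unless MINIMAX_API_KEY is set."""

    def setUp(self):
        if not os.environ.get("MINIMAX_API_KEY"):
            self.skipTest("MINIMAX_API_KEY not set")

    def test_completion_roundtrip(self):
        # A real test would call the MiniMax endpoint here; this body is a
        # placeholder so the sketch stays runnable without network access.
        self.assertTrue(os.environ["MINIMAX_API_KEY"])
```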