Review Summary by Qodo

Add SambaNova provider integration and documentation
Walkthrough

Description

• Add SambaNova provider support with four model configurations
• Integrate SambaNova API key handling in the litellm AI handler
• Add SambaNova documentation and configuration examples
• Update context window limits for SambaNova models

Diagram

```mermaid
flowchart LR
    A["SambaNova Provider"] --> B["Model Context Windows"]
    A --> C["API Key Configuration"]
    A --> D["Documentation & Setup"]
    B --> E["4 Models Added"]
    C --> F["litellm Handler"]
    D --> G["User Guide"]
```
File Changes

1. pr_agent/algo/__init__.py
Code Review by Qodo
```python
if get_settings().get("SAMBANOVA.KEY", None):
    litellm.api_key = get_settings().sambanova.key
```
3. Wrong provider key injected 🐞 Bug ☼ Reliability
LiteLLMAIHandler.__init__ now overwrites litellm.api_key with the SambaNova key, and chat_completion injects this single global key into every LiteLLM call, so mixed-provider fallback_models can be invoked with the wrong API key and fail authentication.
Agent Prompt
### Issue description
`LiteLLMAIHandler` uses `litellm.api_key` as a single global, but PR-Agent can call multiple providers via `fallback_models`. With the new SambaNova assignment, whichever provider runs last in `__init__` wins, and `chat_completion` forwards that key for *all* models.
### Issue Context
- `__init__` sets `litellm.api_key` for multiple providers (Groq, SambaNova, xAI, Ollama, OpenRouter, Azure AD, etc.).
- `chat_completion` injects `kwargs["api_key"] = litellm.api_key` for every call.
- `retry_with_fallback_models()` may invoke different providers in a single run.
### Fix Focus Areas
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[68-90]
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[412-416]
- pr_agent/algo/pr_processing.py[320-354]
### Suggested approach
1. Stop relying on a single global `litellm.api_key` for multiple providers.
2. In `chat_completion`, derive provider from `model` (e.g., prefix before `/`) and set `kwargs["api_key"]` from the corresponding settings (e.g., `get_settings().groq.key`, `get_settings().sambanova.key`, `get_settings().xai.key`, etc.).
3. Only inject `api_key` when you have a provider-specific match; otherwise let LiteLLM handle OpenAI/Azure/Anthropic keys via their dedicated fields/env vars.
4. Add/extend a unit test covering Groq + SambaNova configured simultaneously, then calling both a `groq/...` and `sambanova/...` model and verifying the forwarded `api_key` differs appropriately.
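The per-call key derivation suggested above can be sketched as follows. This is an illustrative outline, not PR-Agent's actual code: the `PROVIDER_KEYS` table, `resolve_api_key` helper, and the `SimpleNamespace` stand-in for `get_settings()` are all hypothetical names introduced here for demonstration.

```python
from types import SimpleNamespace

# Hypothetical per-provider key lookup, replacing the single global
# litellm.api_key. Provider names mirror the settings sections the
# review mentions (groq, sambanova, xai).
PROVIDER_KEYS = {
    "groq": lambda s: s.groq.key,
    "sambanova": lambda s: s.sambanova.key,
    "xai": lambda s: s.xai.key,
}

def resolve_api_key(model: str, settings):
    """Derive the provider from the model prefix (text before '/')
    and return its key, or None so LiteLLM falls back to its own
    provider-specific fields/env vars (OpenAI, Azure, Anthropic)."""
    provider = model.split("/", 1)[0] if "/" in model else None
    getter = PROVIDER_KEYS.get(provider)
    return getter(settings) if getter else None

# Stand-in for get_settings() with Groq and SambaNova configured
# simultaneously, as in the suggested unit test:
settings = SimpleNamespace(
    groq=SimpleNamespace(key="groq-key"),
    sambanova=SimpleNamespace(key="sn-key"),
    xai=SimpleNamespace(key=None),
)
```

With this shape, a `groq/...` and a `sambanova/...` call in the same `retry_with_fallback_models()` run each receive their own key, and a bare OpenAI model name gets no injected `api_key` at all.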
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
Persistent review updated to latest commit 21901fc
Persistent review updated to latest commit ba4048f |
Add SambaNova provider. SambaNova provides high-performance infrastructure and cloud services for running large language models (LLMs), with inference APIs for model families such as DeepSeek, Llama, GPT, and MiniMax.
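Given the `SAMBANOVA.KEY` setting the handler reads, configuring the provider would look roughly like the sketch below. This assumes PR-Agent's usual split between a secrets file and the main configuration; the exact file names and the model identifier shown are illustrative, not confirmed by this PR.

```toml
# .secrets.toml — read via get_settings().sambanova.key
[sambanova]
key = "your-sambanova-api-key"

# configuration.toml — model name is an example; use one of the
# four SambaNova models this PR registers
[config]
model = "sambanova/Meta-Llama-3.1-8B-Instruct"
```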