
Add sambanova provider #2313

Open
luisfucros wants to merge 3 commits into The-PR-Agent:main from luisfucros:feature/add-sambanova

Conversation

@luisfucros

Add SambaNova provider. SambaNova provides high-performance infrastructure and cloud services for running large language models (LLMs), with inference APIs for model families such as DeepSeek, Llama, GPT, and MiniMax.

@qodo-free-for-open-source-projects
Contributor

Review Summary by Qodo

Add SambaNova provider integration and documentation

✨ Enhancement


Walkthroughs

Description
• Add SambaNova provider support with four model configurations
• Integrate SambaNova API key handling in litellm AI handler
• Add SambaNova documentation and configuration examples
• Update context window limits for SambaNova models
Diagram
flowchart LR
  A["SambaNova Provider"] --> B["Model Context Windows"]
  A --> C["API Key Configuration"]
  A --> D["Documentation & Setup"]
  B --> E["4 Models Added"]
  C --> F["litellm Handler"]
  D --> G["User Guide"]


File Changes

1. pr_agent/algo/__init__.py ⚙️ Configuration changes +4/-0

Add SambaNova models to context window configuration

• Added four SambaNova model entries to context window mapping
• Models include MiniMax-M2.5, Meta-Llama-3.3-70B-Instruct, gpt-oss-120b, and DeepSeek-V3.1
• All models configured with 128000 token context window limit
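Based on the summary above, the four entries likely resemble the following sketch (the mapping name `MAX_TOKENS` and the `sambanova/` model-path prefixes are assumptions inferred from how litellm-style providers are typically registered, not quotes from the diff):

```python
# Hypothetical sketch: SambaNova entries added to the context-window mapping
# in pr_agent/algo/__init__.py. All four models get a 128k-token window.
MAX_TOKENS = {
    # ... existing provider entries elided ...
    "sambanova/MiniMax-M2.5": 128000,
    "sambanova/Meta-Llama-3.3-70B-Instruct": 128000,
    "sambanova/gpt-oss-120b": 128000,
    "sambanova/DeepSeek-V3.1": 128000,
}
```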



2. pr_agent/algo/ai_handlers/litellm_ai_handler.py ✨ Enhancement +2/-0

Integrate SambaNova API key handling

• Added SambaNova API key initialization in the handler constructor
• Checks for SAMBANOVA.KEY setting and configures litellm.api_key
• Follows same pattern as existing providers like Groq and xAI



3. docs/docs/usage-guide/changing_a_model.md 📝 Documentation +21/-6

Add SambaNova setup guide and fix formatting

• Added new SambaNova section with configuration examples
• Provided instructions for setting model and API key in configuration files
• Included link to SambaNova API key acquisition page
• Fixed indentation and formatting issues in existing documentation
• Corrected trailing whitespace in multiple sections
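The documented setup probably follows the same shape as the other providers; a hedged sketch (the section names, key placeholder, and chosen models are illustrative assumptions, not quotes from the added documentation):

```toml
# .secrets.toml (sketch): SambaNova API key placeholder
[sambanova]
key = "..."  # obtained from the SambaNova API key page

# configuration.toml (sketch): select a SambaNova model
[config]
model = "sambanova/Meta-Llama-3.3-70B-Instruct"
fallback_models = ["sambanova/DeepSeek-V3.1"]
```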



4. pr_agent/settings/.secrets_template.toml ⚙️ Configuration changes +3/-0

Add SambaNova configuration to secrets template

• Added new [sambanova] section to secrets template
• Included placeholder for SambaNova API key with acquisition link
• Positioned between groq and xai sections for logical organization





@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Apr 13, 2026

Code Review by Qodo

🐞 Bugs (1): ☼ Reliability (1)
📘 Rule violations (1): ⚙ Maintainability (1), ⭐ New (1)
📎 Requirement gaps (0)



Action required

1. MkDocs admonition not indented 📘
Description
The !!! note "Model-specific environment variables" block content is not indented, which can break
MkDocs/MkDocs-Material admonition rendering and potentially fail the docs build.
Code

docs/docs/usage-guide/changing_a_model.md[R16-18]

!!! note "Model-specific environment variables"
-    See [litellm documentation](https://litellm.vercel.app/docs/proxy/quick_start#supported-llms) for the environment variables needed per model, as they may vary and change over time. Our documentation per-model may not always be up-to-date with the latest changes.
-    Failing to set the needed keys of a specific model will usually result in litellm not identifying the model type, and failing to utilize it.
+See [litellm documentation](https://litellm.vercel.app/docs/proxy/quick_start#supported-llms) for the environment variables needed per model, as they may vary and change over time. Our documentation per-model may not always be up-to-date with the latest changes.
+Failing to set the needed keys of a specific model will usually result in litellm not identifying the model type, and failing to utilize it.
Evidence
PR Compliance ID 11 requires docs Markdown under docs/ to follow MkDocs conventions. The added
lines immediately after the admonition marker are not indented as required for admonition content.

AGENTS.md
docs/docs/usage-guide/changing_a_model.md[16-18]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The Markdown admonition content is not indented under the `!!! note` marker, which can break MkDocs admonition parsing/rendering.
## Issue Context
MkDocs-style admonitions require the content block to be indented relative to the `!!!` line.
## Fix Focus Areas
- docs/docs/usage-guide/changing_a_model.md[16-18]
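For reference, a well-formed MkDocs admonition indents every content line by four spaces under the `!!!` marker (shortened illustrative content, not the exact docs text):

```markdown
!!! note "Model-specific environment variables"
    See the litellm documentation for the environment variables needed
    per model, as they may vary and change over time.
```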



2. Second admonition line unindented 📘
Description
The !!! note "Local models vs commercial models" block has its first content line unindented while
the rest is indented, which can break MkDocs admonition formatting.
Code

docs/docs/usage-guide/changing_a_model.md[R109-112]

!!! note "Local models vs commercial models"
-    PR-Agent is compatible with almost any AI model, but analyzing complex code repositories and pull requests requires a model specifically optimized for code analysis.
+PR-Agent is compatible with almost any AI model, but analyzing complex code repositories and pull requests requires a model specifically optimized for code analysis.
Commercial models such as GPT-5, Claude Sonnet, and Gemini have demonstrated robust capabilities in generating structured output for code analysis tasks with large input. In contrast, most open-source models currently available (as of January 2025) face challenges with these complex tasks.
Evidence
PR Compliance ID 11 requires MkDocs-compliant Markdown in docs/. The first line of admonition
content is not indented under the !!! note marker, making the block syntactically inconsistent.

AGENTS.md
docs/docs/usage-guide/changing_a_model.md[109-112]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The admonition content immediately following `!!! note "Local models vs commercial models"` is not consistently indented, which can break MkDocs admonition parsing.
## Issue Context
In MkDocs-style admonitions, all content lines belonging to the admonition must be indented under the `!!!` directive.
## Fix Focus Areas
- docs/docs/usage-guide/changing_a_model.md[109-112]



3. Wrong provider key injected 🐞
Description
LiteLLMAIHandler.__init__ now overwrites litellm.api_key with the SambaNova key, and
chat_completion injects this single global key into every LiteLLM call, so mixed-provider
fallback_models can be invoked with the wrong API key and fail authentication.
Code

pr_agent/algo/ai_handlers/litellm_ai_handler.py[R74-75]

+        if get_settings().get("SAMBANOVA.KEY", None):
+            litellm.api_key = get_settings().sambanova.key
Evidence
During initialization, multiple providers write to the same global litellm.api_key sequentially
(Groq first, then SambaNova, then others). Later, chat_completion unconditionally forwards
litellm.api_key as api_key for every acompletion call. Since PR-Agent can iterate across
config.model + config.fallback_models, a fallback from a different provider can be attempted
while still sending the last-initialized provider’s key, causing auth failures. The existing unit
test explicitly documents this overwrite behavior when multiple providers are configured.

pr_agent/algo/ai_handlers/litellm_ai_handler.py[68-90]
pr_agent/algo/ai_handlers/litellm_ai_handler.py[412-416]
pr_agent/algo/pr_processing.py[320-354]
tests/unittest/test_litellm_api_key_guard.py[276-333]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`LiteLLMAIHandler` uses `litellm.api_key` as a single global, but PR-Agent can call multiple providers via `fallback_models`. With the new SambaNova assignment, whichever provider runs last in `__init__` wins, and `chat_completion` forwards that key for *all* models.
### Issue Context
- `__init__` sets `litellm.api_key` for multiple providers (Groq, SambaNova, xAI, Ollama, OpenRouter, Azure AD, etc.).
- `chat_completion` injects `kwargs["api_key"] = litellm.api_key` for every call.
- `retry_with_fallback_models()` may invoke different providers in a single run.
### Fix Focus Areas
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[68-90]
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[412-416]
- pr_agent/algo/pr_processing.py[320-354]
### Suggested approach
1. Stop relying on a single global `litellm.api_key` for multiple providers.
2. In `chat_completion`, derive provider from `model` (e.g., prefix before `/`) and set `kwargs["api_key"]` from the corresponding settings (e.g., `get_settings().groq.key`, `get_settings().sambanova.key`, `get_settings().xai.key`, etc.).
3. Only inject `api_key` when you have a provider-specific match; otherwise let LiteLLM handle OpenAI/Azure/Anthropic keys via their dedicated fields/env vars.
4. Add/extend a unit test covering Groq + SambaNova configured simultaneously, then calling both a `groq/...` and `sambanova/...` model and verifying the forwarded `api_key` differs appropriately.




Remediation recommended

4. Mixed access for SAMBANOVA.KEY 📘
Description
The new SambaNova key handling checks get_settings().get("SAMBANOVA.KEY") but then reads
get_settings().sambanova.key, mixing configuration access patterns/casing. This violates the
requirement for consistent configuration access and increases the risk of runtime errors or config
drift across providers.
Code

pr_agent/algo/ai_handlers/litellm_ai_handler.py[R74-75]

+        if get_settings().get("SAMBANOVA.KEY", None):
+            litellm.api_key = get_settings().sambanova.key
Evidence
PR Compliance ID 16 requires a consistent configuration access pattern and stable key paths; the
added SambaNova logic mixes .get("SAMBANOVA.KEY") with attribute access .sambanova.key on the
same setting.

pr_agent/algo/ai_handlers/litellm_ai_handler.py[74-75]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The SambaNova API key retrieval mixes Dynaconf access patterns (`get_settings().get("SAMBANOVA.KEY")` vs `get_settings().sambanova.key`) for the same configuration value.

## Issue Context
PR compliance requires using a single consistent access pattern and stable key paths to avoid subtle configuration bugs.

## Fix Focus Areas
- pr_agent/algo/ai_handlers/litellm_ai_handler.py[74-75]




ⓘ The new review experience is currently in Beta.


Previous review results

Review updated until commit ba4048f

Results up to commit N/A



Comment thread docs/docs/usage-guide/changing_a_model.md Outdated
Comment thread docs/docs/usage-guide/changing_a_model.md
Comment on lines +74 to +75
if get_settings().get("SAMBANOVA.KEY", None):
litellm.api_key = get_settings().sambanova.key
Contributor


Action required

3. Wrong provider key injected 🐞 Bug ☼ Reliability

LiteLLMAIHandler.__init__ now overwrites litellm.api_key with the SambaNova key, and
chat_completion injects this single global key into every LiteLLM call, so mixed-provider
fallback_models can be invoked with the wrong API key and fail authentication.

@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Apr 14, 2026

Persistent review updated to latest commit 21901fc

@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Apr 14, 2026

Persistent review updated to latest commit ba4048f
