
fix: prevent Ollama key from polluting global litellm state #2325

Open

gvago wants to merge 1 commit into main from fix/gitlab-submodule-and-ollama-key

Conversation


gvago commented Apr 14, 2026

Summary

  • Root cause: LiteLLMAIHandler.__init__() sets litellm.api_key globally when OLLAMA.API_KEY is configured. Since litellm is a module-level singleton, this overwrites the API key for ALL providers (Groq, xAI, OpenAI, Anthropic, etc.). Any non-Ollama call then sends the Ollama key and fails authentication.
  • Fix: Store the Ollama key on the handler instance (self.ollama_api_key) instead of globally, and inject it per-request via litellm's api_key kwarg only when the model uses the ollama/ or ollama_chat/ prefix.
  • Tests updated: The test_ollama_and_groq_coexist test now verifies correct per-provider key isolation -- Ollama models get the Ollama key, non-Ollama models get the Groq key.

This builds on prior fixes #2288 and #2293, which addressed related symptoms (Gemini broken by the Ollama key, dummy key forwarding) but left the root cause (global litellm.api_key mutation) in place.
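The shape of the fix can be sketched as follows. This is a minimal, hypothetical reduction of the change: the real `LiteLLMAIHandler.__init__()` signature and helper names in `litellm_ai_handler.py` differ, but the key-selection logic is as described above.

```python
class LiteLLMAIHandler:
    """Reduced sketch of the handler; real signature differs."""

    def __init__(self, ollama_api_key=None):
        # Before the fix: litellm.api_key = ollama_api_key
        # (global mutation -- overwrote the key for Groq, xAI, OpenAI, etc.)
        # After the fix: keep the key on the instance only.
        self.ollama_api_key = ollama_api_key

    def _completion_kwargs(self, model: str) -> dict:
        # Hypothetical helper name: build per-request kwargs for litellm.
        kwargs = {}
        # Inject the Ollama key only for Ollama-prefixed models;
        # every other provider keeps its own (global or per-call) key.
        if self.ollama_api_key and model.startswith(("ollama/", "ollama_chat/")):
            kwargs["api_key"] = self.ollama_api_key
        return kwargs
```

With this shape, a Groq call built on the same handler carries no Ollama key at all, so litellm falls back to the Groq credentials.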

Note on GitLab submodule content fetching

There is a separate issue in gitlab_provider.py where _expand_submodule_changes() creates synthetic diff entries with paths like submodule_path/file.py, but get_diff_files() still calls get_pr_file_content() against the parent project ID. These paths don't exist in the parent repo -- the content lives in the submodule project. This is a real bug but requires more invasive changes (threading submodule project resolution through content fetching) and is left for a separate PR.
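The path mismatch described above can be illustrated with a toy lookup. Names and data here are illustrative only (`get_pr_file_content` is simplified to a dict lookup; the real function queries the GitLab API against a project ID):

```python
def get_pr_file_content(repo_files: dict, path: str):
    # Stand-in for fetching file content from one project's tree by path.
    return repo_files.get(path)

# Hypothetical file trees for the parent project and a submodule project.
parent_repo = {"src/app.py": "print('app')", ".gitmodules": "..."}
submodule_repo = {"lib/utils.py": "print('utils')"}

# _expand_submodule_changes() emits a parent-relative synthetic path...
synthetic_path = "vendored/lib/utils.py"

# ...but content is fetched from the parent project, where it doesn't exist.
missing = get_pr_file_content(parent_repo, synthetic_path)

# The content actually lives in the submodule project under its own path.
found = get_pr_file_content(submodule_repo, "lib/utils.py")
```

Fixing this would mean resolving the submodule's project ID and submodule-relative path before fetching, which is the "more invasive" change deferred here.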

Test plan

  • All 7 existing tests in test_litellm_api_key_guard.py pass
  • test_ollama_and_groq_coexist now correctly verifies Ollama key goes only to Ollama models and Groq key goes to non-Ollama models
  • Manual verification with Ollama + another provider configured simultaneously
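The isolation property the updated test checks can be stated as a small pure function. This is a sketch of the invariant, not the actual test code in `test_litellm_api_key_guard.py`:

```python
def select_api_key(model: str, ollama_key: str, global_key: str) -> str:
    """Which key a request should carry under the fixed behavior.

    Under the fix, litellm.api_key keeps the other provider's key
    (e.g. Groq), and the Ollama key is injected per-request only
    for Ollama-prefixed models.
    """
    if model.startswith(("ollama/", "ollama_chat/")):
        return ollama_key
    return global_key
```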

🤖 Generated with Claude Code

When OLLAMA.API_KEY is configured, __init__() sets litellm.api_key
globally. Since litellm is a module-level singleton, this overwrites
the API key for ALL providers (Groq, XAI, OpenAI, Anthropic, etc.).
If Ollama is configured alongside another provider, the Ollama key
replaces theirs and all non-Ollama calls fail authentication.

Fix: store the Ollama key on the handler instance (self.ollama_api_key)
and inject it per-request via litellm's api_key kwarg only when the
model is an Ollama model (ollama/ or ollama_chat/ prefix).

This builds on the prior fixes in #2288 and #2293 which addressed
related symptoms but left the root cause (global mutation) in place.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@gvago gvago added the Bug fix label Apr 14, 2026
@qodo-free-for-open-source-projects
Contributor

Review Summary by Qodo

Prevent Ollama key from polluting global litellm state

🐞 Bug fix


Walkthroughs

Description
• Store Ollama API key on handler instance instead of global litellm state
• Inject Ollama key per-request only for ollama/ prefixed models
• Prevent Ollama key from overwriting keys for other providers (Groq, XAI, OpenAI, etc.)
• Update test to verify correct per-provider key isolation
Diagram

```mermaid
flowchart LR
  A["OLLAMA.API_KEY configured"] -->|"Previously: global mutation"| B["litellm.api_key overwritten"]
  B -->|"Breaks other providers"| C["Non-Ollama calls fail"]
  A -->|"Now: instance storage"| D["self.ollama_api_key"]
  D -->|"Per-request injection"| E["Only for ollama/ models"]
  E -->|"Preserves other keys"| F["Groq/XAI/OpenAI work correctly"]
```


File Changes

1. pr_agent/algo/ai_handlers/litellm_ai_handler.py 🐞 Bug fix +7/-1

Store and inject Ollama key per-request

• Add self.ollama_api_key instance variable initialized to None
• Store Ollama API key on handler instance instead of setting litellm.api_key globally
• Inject Ollama key per-request via kwargs only when model starts with the ollama/ or ollama_chat/ prefix
• Preserve global litellm.api_key for other providers (Groq, XAI, etc.)

2. tests/unittest/test_litellm_api_key_guard.py 🧪 Tests +10/-10

Verify per-provider key isolation in coexistence test

• Update test_ollama_and_groq_coexist test documentation to reflect new per-request injection behavior
• Verify litellm.api_key remains as Groq key after initialization (not overwritten by Ollama)
• Verify handler.ollama_api_key stores the Ollama key on the instance
• Verify Ollama models receive Ollama key and non-Ollama models receive Groq key


@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Apr 14, 2026

Code Review by Qodo

🐞 Bugs (0) 📘 Rule violations (0) 📎 Requirement gaps (0)


Great, no issues found!

Qodo reviewed your code and found no material issues that require review.


ⓘ The new review experience is currently in Beta.

