
Add native Ollama provider support #18

Open
ianu82 wants to merge 2 commits into mindsdb:main from ianu82:codex/native-ollama-support

Conversation


ianu82 commented on Mar 19, 2026

Summary

  • add a native Ollama LLM provider using Ollama's Python client and /api/chat
  • add Ollama as a first-class setup/onboarding option with local model discovery
  • wire Ollama through scratchpad, streaming reasoning progress, and provider-scoped verifier handling

What changed

  • added anton/llm/ollama.py for native Ollama inference and model listing
  • extracted shared OpenAI-style message/tool translation into anton/llm/openai_compat.py
  • added shared LLM setup helpers in anton/llm/setup.py and reused them from CLI setup and /setup
  • added ANTON_OLLAMA_BASE_URL settings support and scratchpad Ollama propagation
  • kept Anthropic and OpenAI provider payload logic unchanged while widening the provider interface for Ollama-only request options
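The shared message translation extracted into `anton/llm/openai_compat.py` can be sketched roughly as below. This is an illustrative helper, not the PR's actual code: the function name and the exact shapes are assumptions. It shows the one translation Ollama's Python client genuinely needs on top of OpenAI-style messages: tool-call arguments arrive as a JSON string in the OpenAI format but are passed as a plain dict to Ollama's /api/chat.

```python
import json

def to_ollama_messages(openai_messages):
    """Hypothetical sketch: translate OpenAI-style chat messages into
    the shape Ollama's /api/chat endpoint accepts. The PR's real helper
    lives in anton/llm/openai_compat.py and may differ in detail."""
    out = []
    for msg in openai_messages:
        converted = {"role": msg["role"], "content": msg.get("content") or ""}
        for call in msg.get("tool_calls", []):
            args = call["function"]["arguments"]
            converted.setdefault("tool_calls", []).append({
                "function": {
                    "name": call["function"]["name"],
                    # OpenAI encodes arguments as a JSON string;
                    # Ollama's client takes a dict, so decode here.
                    "arguments": json.loads(args) if isinstance(args, str) else args,
                }
            })
        out.append(converted)
    return out
```

Keeping this translation in one shared module (rather than inside each provider) is what lets the Anthropic and OpenAI payload logic stay unchanged while Ollama reuses the OpenAI-style message shape.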

Verification

  • pytest -q (411 passed)
  • live native Ollama smoke checks against local models
    • qwen3:0.6b streamed reasoning progress through the new Ollama path
    • ministral-3:8b returned normal native Ollama responses
    • ChatSession.turn(...) completed successfully with Ollama configured as both planning and coding provider

@ianu82 ianu82 requested a review from torrmal March 19, 2026 10:28
ollama.ResponseError carries a status_code, so 4xx errors like "model
not found" (404) should surface as actionable errors rather than being
misreported as connectivity failures. Only 5xx errors are now mapped
to ConnectionError, matching the pattern used by the Anthropic and
OpenAI providers.
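The status-based classification described above can be sketched as a small pure function. This is an assumption-laden illustration, not the PR's code: in the real provider the mapping happens while catching `ollama.ResponseError` and reading its `status_code`; the helper name and return types here are invented for clarity.

```python
def map_ollama_error(status_code, message):
    """Illustrative sketch of the error mapping this PR describes:
    only 5xx responses become ConnectionError; 4xx responses (e.g.
    404 "model not found") surface as actionable errors instead of
    being misreported as connectivity failures."""
    if status_code is not None and status_code >= 500:
        # server-side failure: report as a connection problem
        return ConnectionError(f"Ollama server error {status_code}: {message}")
    # client error: keep the status code so the user can act on it
    return RuntimeError(f"Ollama request failed ({status_code}): {message}")
```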

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
