feat: per-inbox model config and ollama lifecycle#12

Merged
spignotti merged 1 commit into main from feat/feature-md
Apr 7, 2026

Conversation

@spignotti
Owner

Summary

Allow each inbox to use a different LLM model (e.g. local Ollama for private folders, cloud model for fast processing). Optionally auto-start and stop Ollama when a local model is configured.

What Changed

  • Per-inbox model override: InboxConfig now accepts optional model and api_base fields that override global [llm] settings
  • OllamaConfig: New configuration section with auto_start, stop_after_run, host, and startup_timeout settings
  • New ollama.py module: Lifecycle management with is_running(), start(), stop(), and ollama_lifecycle context manager
  • extract_metadata(): Updated to accept explicit model/api_base parameters instead of an LLMConfig object
  • renamer.run(): Detects when an Ollama model is in use and wraps execution in the lifecycle context manager
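The lifecycle pieces listed above could look roughly like the sketch below. Only the function names (is_running, start, stop, ollama_lifecycle) and the config fields come from this PR; the HTTP probe, the `ollama serve` subprocess launch, and the pkill-based stop are illustrative assumptions, not the actual implementation.

```python
import contextlib
import subprocess
import time
import urllib.error
import urllib.request

DEFAULT_HOST = "http://localhost:11434"


def is_running(host: str = DEFAULT_HOST) -> bool:
    """Probe the Ollama HTTP endpoint to see whether the server is up."""
    try:
        with urllib.request.urlopen(host, timeout=1):
            return True
    except (urllib.error.URLError, OSError):
        return False


def start(host: str = DEFAULT_HOST, startup_timeout: float = 30.0) -> None:
    """Launch `ollama serve` and poll until the server answers or we time out."""
    subprocess.Popen(
        ["ollama", "serve"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    deadline = time.monotonic() + startup_timeout
    while time.monotonic() < deadline:
        if is_running(host):
            return
        time.sleep(0.5)
    raise TimeoutError(f"Ollama did not start within {startup_timeout}s")


def stop() -> None:
    """Stop the Ollama server (sketched via pkill; platform-specific)."""
    subprocess.run(["pkill", "-f", "ollama serve"], check=False)


@contextlib.contextmanager
def ollama_lifecycle(auto_start: bool, stop_after_run: bool,
                     host: str = DEFAULT_HOST):
    """Start Ollama if needed; optionally stop it on exit.

    Only stops the server if this context manager started it, so a
    user-managed Ollama instance is left alone.
    """
    started_here = False
    if auto_start and not is_running(host):
        start(host)
        started_here = True
    try:
        yield
    finally:
        if stop_after_run and started_here:
            stop()
```

The "only stop what we started" guard keeps `stop_after_run = true` from killing an Ollama instance the user launched themselves.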

Why

Different inboxes may require different models — private documents might use a local Ollama instance while general documents use cloud models. The lifecycle manager ensures Ollama is running when needed and stops it afterward.

Validation

  • nox -s lint typecheck test — all pass
  • 59 tests including 15 new Ollama lifecycle tests and 6 new model resolution tests
  • Typecheck: 0 errors, 0 warnings

Config Example

```toml
[[inbox]]
path = "/path/to/private"
model = "ollama/llama3.2"
api_base = "http://localhost:11434"

[ollama]
auto_start = true
stop_after_run = true
```

Add support for per-inbox LLM model configuration with optional Ollama
auto-start/stop lifecycle management.

Changes:
- Add model and api_base fields to InboxConfig with global fallback
- Add OllamaConfig with auto_start, stop_after_run, host, startup_timeout
- Create ollama.py module with is_running(), start(), stop(), and
  ollama_lifecycle context manager
- Update extract_metadata() to accept explicit model/api_base params
- Update renamer.py to resolve per-inbox model and wrap execution in
  OllamaLifecycle when ollama/ models are detected
- Update config.toml.example with new per-inbox and [ollama] options
- Add comprehensive unit tests for model resolution and Ollama lifecycle
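The "wrap execution when ollama/ models are detected" step might look like this. The prefix check and the nullcontext fallback are assumptions; the real ollama_lifecycle lives in ollama.py and is stubbed here.

```python
import contextlib


@contextlib.contextmanager
def ollama_lifecycle(auto_start: bool, stop_after_run: bool):
    # Stand-in for the real context manager in ollama.py (assumption).
    yield


def is_ollama_model(model: str) -> bool:
    """LiteLLM-style Ollama models carry an 'ollama/' provider prefix."""
    return model.startswith("ollama/")


def run(model: str, auto_start: bool = True, stop_after_run: bool = True) -> str:
    # Only wrap execution in the lifecycle manager for local Ollama models;
    # cloud models run inside a no-op context.
    cm = (ollama_lifecycle(auto_start, stop_after_run)
          if is_ollama_model(model) else contextlib.nullcontext())
    with cm:
        return f"processed with {model}"
```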

Validation: 59 tests pass, lint and typecheck clean
@spignotti spignotti merged commit e51476e into main Apr 7, 2026
2 checks passed
@spignotti spignotti deleted the feat/feature-md branch April 7, 2026 08:19
