feat: per-inbox model config and ollama lifecycle (#12)
Merged
Add support for per-inbox LLM model configuration with optional Ollama auto-start/stop lifecycle management.

Changes:
- Add `model` and `api_base` fields to `InboxConfig` with global fallback
- Add `OllamaConfig` with `auto_start`, `stop_after_run`, `host`, `startup_timeout`
- Create `ollama.py` module with `is_running()`, `start()`, `stop()`, and an `ollama_lifecycle` context manager
- Update `extract_metadata()` to accept explicit `model`/`api_base` params
- Update `renamer.py` to resolve the per-inbox model and wrap execution in `OllamaLifecycle` when `ollama/` models are detected
- Update `config.toml.example` with the new per-inbox and `[ollama]` options
- Add comprehensive unit tests for model resolution and the Ollama lifecycle

Validation: 59 tests pass; lint and typecheck clean.
Summary
Allow each inbox to use a different LLM model (e.g. local Ollama for private folders, cloud model for fast processing). Optionally auto-start and stop Ollama when a local model is configured.
What Changed
- `InboxConfig` now accepts optional `model` and `api_base` fields that override global `[llm]` settings
- New `[ollama]` config with `auto_start`, `stop_after_run`, `host`, and `startup_timeout` settings
- `ollama.py` module: lifecycle management with `is_running()`, `start()`, `stop()`, and an `ollama_lifecycle` context manager
- `extract_metadata()`: updated to accept explicit `model`/`api_base` params instead of an `LLMConfig` object
- `renamer.run()`: detects Ollama model usage and wraps execution in the lifecycle context manager

Why
Different inboxes may require different models — private documents might use a local Ollama instance while general documents use cloud models. The lifecycle manager ensures Ollama is running when needed and stops it afterward.
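The lifecycle described above can be sketched roughly as below, assuming the `is_running()`/`start()`/`stop()` shape named in the PR. The health-check URL, polling interval, and "only stop what we started" rule are assumptions for the sketch, not the project's actual implementation.

```python
# Sketch of an Ollama lifecycle context manager (assumed behavior).
import subprocess
import time
import urllib.request
from contextlib import contextmanager

DEFAULT_HOST = "http://localhost:11434"  # Ollama's default listen address


def is_running(host: str = DEFAULT_HOST) -> bool:
    """Return True if an Ollama server answers on `host`."""
    try:
        with urllib.request.urlopen(host, timeout=2):
            return True
    except OSError:
        return False


def start(host: str = DEFAULT_HOST, startup_timeout: float = 30.0) -> subprocess.Popen:
    """Spawn `ollama serve` and poll until it answers or the timeout expires."""
    proc = subprocess.Popen(
        ["ollama", "serve"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    deadline = time.monotonic() + startup_timeout
    while time.monotonic() < deadline:
        if is_running(host):
            return proc
        time.sleep(0.5)
    proc.terminate()
    raise TimeoutError(f"Ollama did not start within {startup_timeout}s")


def stop(proc: subprocess.Popen) -> None:
    """Terminate a server process that we spawned ourselves."""
    proc.terminate()
    proc.wait(timeout=10)


@contextmanager
def ollama_lifecycle(auto_start: bool = True, stop_after_run: bool = True,
                     host: str = DEFAULT_HOST, startup_timeout: float = 30.0):
    """Start Ollama if needed; optionally stop it after the run.

    Only stops the server if this context manager started it, so an
    already-running user instance is left untouched (an assumed policy).
    """
    proc = None
    if auto_start and not is_running(host):
        proc = start(host, startup_timeout)
    try:
        yield
    finally:
        if proc is not None and stop_after_run:
            stop(proc)
```

With this shape, a run against a cloud model never touches Ollama, and a run against an `ollama/` model enters the context manager around extraction.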
Validation
`nox -s lint typecheck test` — all pass

Config Example
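A plausible shape for the new options, with key names taken from the description above. The `[[inboxes]]` table name and all values are illustrative assumptions, not the project's actual defaults.

```toml
[llm]
model = "gpt-4o-mini"          # global default model

[ollama]
auto_start = true              # start Ollama if not already running
stop_after_run = true          # stop it again when the run finishes
host = "http://localhost:11434"
startup_timeout = 30

[[inboxes]]
path = "~/inbox/private"
model = "ollama/llama3"        # local model; triggers the Ollama lifecycle

[[inboxes]]
path = "~/inbox/general"       # no model key: falls back to [llm].model
```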