
feat: concurrent multi-tool daemon support (#12)#28

Merged
don-petry merged 5 commits into main from feat/concurrent-multi-tool
Apr 10, 2026

Conversation

@don-petry
Collaborator

Why

Users who work across multiple AI tools (Gemini, Claude, Copilot) must currently run separate daemon instances for each one. This is cumbersome to manage and wastes resources — each instance runs its own MCP client connection and evaluator. A single daemon that monitors all tools concurrently with shared infrastructure is simpler to operate and more resource-efficient.

Summary

  • Adds --tools CLI option to run multiple providers concurrently in a single daemon (e.g. --tools gemini-cli,claude,copilot)
  • Each provider watches in its own asyncio.create_task, feeding interactions into a shared bounded asyncio.Queue(maxsize=1000)
  • Shared MCP client and evaluator across all providers
  • Dashboard header shows all active tools; status messages prefixed with [tool_name]
  • --tool (single) still works for backward compatibility
  • Handles empty --tools "" and tools=[] defensively (rejects instead of silent fallback)
  • Includes CLI tests for --tools deduplication, invalid tool, and empty input
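The fan-in pattern described above — one watcher task per provider feeding a shared bounded queue — can be sketched roughly as follows. `FakeProvider`, its `interactions()` async iterator, and the simplified consumer loop are illustrative stand-ins, not the PR's actual code (the real daemon wraps a blocking `watch()` generator in a thread):

```python
import asyncio

class FakeProvider:
    """Stand-in provider that emits a fixed list of interactions."""
    def __init__(self, items):
        self.items = items

    async def interactions(self):
        for item in self.items:
            yield item

async def _watch_provider(tool_name, provider, queue):
    # Fan-in: tag each interaction with its tool name and forward it
    # into the shared queue for unified processing.
    async for interaction in provider.interactions():
        await queue.put((tool_name, interaction))

async def run_multi(providers, expected):
    # Bounded queue caps memory growth if the consumer falls behind.
    queue = asyncio.Queue(maxsize=1000)
    tasks = [
        asyncio.create_task(_watch_provider(name, p, queue))
        for name, p in providers
    ]
    results = []
    for _ in range(expected):
        results.append(await queue.get())
    # Cancel watchers, then await them so no task is left pending.
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    return results

results = asyncio.run(run_multi(
    [("gemini-cli", FakeProvider(["a", "b"])), ("claude", FakeProvider(["c"]))],
    expected=3,
))
```

The cancel-then-gather shutdown at the end mirrors the cleanup the review below asks for: cancelling alone leaves tasks pending, so they are awaited before the function returns.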

Replaces #22 (fork archived, branch rebased on current main including PR #19).

Closes #12

Testing evidence

| Check | Result |
|---|---|
| Full test suite: 84/84 passed (includes PR #19 tests) | ✅ PASS |
| Merge conflict with PR #19 resolved cleanly | ✅ PASS |
| `--tools` CLI validation (dedup, empty, invalid) | ✅ PASS |
| `run_daemon(tools=[])` raises `ValueError` | ✅ PASS |

Review history

All 11 Copilot review comments from #22 were addressed and resolved. This PR carries the same code plus the merge resolution.

🤖 Generated with Claude Code

DJ and others added 5 commits April 4, 2026 20:40
Adds --tools option to run multiple providers concurrently in a single
daemon process. Each provider watches in its own async task, feeding
interactions into a shared asyncio.Queue for unified processing.

Key changes:
- TOOL_REGISTRY maps tool names to (provider_class, bootstrap_fn) pairs
- _create_providers() bootstraps and instantiates multiple providers
- run_daemon() creates watcher tasks per provider via asyncio.create_task
- Dashboard shows all active tools in header
- --tool still works for single-tool backward compat
- --tools validates tool names and rejects unknowns

Usage: context-scribe --tools gemini-cli,claude,copilot

Closes #12

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- _create_providers() now raises ValueError for unknown tools
- Queue bounded to maxsize=1000 to prevent memory growth
- _watch_provider() catches StopIteration and exceptions gracefully
- watcher_tasks initialized before try to prevent UnboundLocalError
- --tools deduplicates and rejects empty input

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- test_daemons.py: patch _create_providers instead of individual class
  names, since TOOL_REGISTRY captures class refs at import time
- _watch_provider: catch KeyboardInterrupt from mock generators
- Increase test wait timeout for queue-based async pipeline

Found via full test suite run with Python 3.12 + mcp installed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Reject empty/whitespace --tools values by checking `tools_csv is not None`
- Fail fast in run_daemon() when tools=[] instead of silent fallback
- Add CLI unit tests for --tools deduplication, invalid, and empty input
- Remove unused provider_class from test_daemons.py parametrize
- Fix misleading docstring in test_multi_tool.py fixture

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Resolve merge conflict in tests/test_daemons.py, keeping the
_create_providers mock approach from #22 while incorporating the
os._exit mock and reduced iterations from #19.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 10, 2026 15:42

Copilot AI left a comment


Pull request overview

Adds concurrent multi-tool monitoring to the daemon so a single context-scribe process can watch multiple providers (Gemini CLI, Claude, Copilot) at once while sharing the evaluator + MCP client.

Changes:

  • Introduces --tools CLI option (comma-separated, deduplicated) while keeping --tool for backward compatibility.
  • Refactors provider creation into a TOOL_REGISTRY + _create_providers() helper.
  • Runs one watcher task per provider and multiplexes interactions through a shared bounded asyncio.Queue; adds new tests for provider creation + CLI validation.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

| File | Description |
|---|---|
| `context_scribe/main.py` | Adds `--tools`, provider registry/factory, concurrent watcher tasks feeding a shared queue, and tool-prefixed status messages. |
| `tests/test_multi_tool.py` | New tests for provider registry/factory and `--tools` CLI parsing/validation (dedup, invalid, empty). |
| `tests/test_daemons.py` | Updates daemon tests to patch `_create_providers()` instead of patching individual provider classes. |


Comment thread: `context_scribe/main.py`

```python
async def _watch_provider(tool_name: str, provider):
    """Run a provider's watch() in a thread and feed interactions into the shared queue."""
    loop = asyncio.get_event_loop()
```

Copilot AI Apr 10, 2026


In async contexts, asyncio.get_event_loop() is deprecated on recent Python versions and can emit DeprecationWarnings; use asyncio.get_running_loop() here instead (and reuse that loop variable inside the coroutine).

Suggested change:

```diff
-    loop = asyncio.get_event_loop()
+    loop = asyncio.get_running_loop()
```

Comment thread: `context_scribe/main.py`

```python
        _status("🛑 Stopping...", db, live, debug)
    finally:
        for task in watcher_tasks:
            task.cancel()
```

Copilot AI Apr 10, 2026


Watcher tasks are cancelled in finally, but they are never awaited. This can leave pending tasks and produce "Task was destroyed but it is pending" / unhandled-exception warnings. After cancelling, await them (e.g., await asyncio.gather(*watcher_tasks, return_exceptions=True)) before closing the MCP client.

Suggested change:

```diff
             task.cancel()
+        if watcher_tasks:
+            await asyncio.gather(*watcher_tasks, return_exceptions=True)
```

Comment thread: `context_scribe/main.py`

```python
    except (StopIteration, asyncio.CancelledError, KeyboardInterrupt):
        pass
    except Exception as e:
        logger.error("Watcher for %s failed: %s", tool_name, e)
```

Copilot AI Apr 10, 2026


_watch_provider swallows CancelledError/StopIteration and exits without ensuring the underlying provider.watch() generator is closed. For BaseProvider implementations, the watchdog Observer is stopped/joined only in the generator's finally, so not closing the generator can leak observer threads on shutdown or if a watcher exits early. Ensure the generator is closed (or provide an explicit provider shutdown hook) when the watcher task ends.

Suggested change:

```diff
         logger.error("Watcher for %s failed: %s", tool_name, e)
+    finally:
+        close = getattr(watch_iter, "close", None)
+        if callable(close):
+            try:
+                close()
+            except Exception as e:
+                logger.warning("Failed to close watcher for %s: %s", tool_name, e)
```

Comment thread: `tests/test_multi_tool.py` (comment on lines +102 to +105)

```python
    with patch("context_scribe.main.run_daemon", side_effect=fake_run_daemon) as mock_rd:
        loop = asyncio.new_event_loop()
        with patch("asyncio.run", side_effect=lambda coro: loop.run_until_complete(coro)):
            result = runner.invoke(cli, ["--tools", "gemini-cli,gemini-cli,claude"])
```

Copilot AI Apr 10, 2026


This test creates a new event loop but never closes it. On some Python/pytest configurations this can raise ResourceWarnings or fail the suite. Close the loop in a finally block (or avoid creating a custom loop by using CliRunner(mix_stderr=False) with asyncio.run unpatched).

Suggested change:

```diff
-    with patch("context_scribe.main.run_daemon", side_effect=fake_run_daemon) as mock_rd:
-        loop = asyncio.new_event_loop()
-        with patch("asyncio.run", side_effect=lambda coro: loop.run_until_complete(coro)):
-            result = runner.invoke(cli, ["--tools", "gemini-cli,gemini-cli,claude"])
+    loop = asyncio.new_event_loop()
+    try:
+        with patch("context_scribe.main.run_daemon", side_effect=fake_run_daemon) as mock_rd:
+            with patch("asyncio.run", side_effect=lambda coro: loop.run_until_complete(coro)):
+                result = runner.invoke(cli, ["--tools", "gemini-cli,gemini-cli,claude"])
+    finally:
+        loop.close()
```

Collaborator Author

@don-petry left a comment


Automated review — APPROVED

Risk: MEDIUM
Reviewed commit: d5dc54ef3dd39d51551a36bb9bba1cd53269e013
Council vote: security=MEDIUM · correctness=MEDIUM · maintainability=MEDIUM

Summary

This PR correctly implements concurrent multi-tool daemon support (issue #12) using a clean TOOL_REGISTRY + _create_providers() factory pattern that is straightforward to extend. The async architecture is sound: watcher tasks are properly created and cancelled on shutdown, the bounded queue (maxsize=1000) prevents memory exhaustion, and pip-audit found no dependency vulnerabilities. No security-sensitive surfaces are touched and no functional bugs were identified under normal operation. Minor follow-up items exist around asyncio API modernization, README documentation, and test safety patterns, but none are blockers.

Linked issue analysis

Addresses issue #12 (concurrent multi-tool daemon support). The diff satisfies the core acceptance criteria: multiple tools can be passed via --tools, each tool gets its own watcher task and provider instance, the bounded async queue coordinates the producer/consumer, and backward compatibility with the existing --tool flag is preserved. The tools=[] guard raises ValueError as described. One indirect gap: the ValueError inside run_daemon() itself is not directly unit-tested (the CLI path that raises ClickException is tested, but the lower-level guard is only exercised indirectly).

Findings

Minor

| Lens(es) | File | Finding |
|---|---|---|
| correctness + maintainability | `context_scribe/main.py` | **`asyncio.get_event_loop()` deprecated usage** — Called on every iteration inside `_loop()` and also inside `_watch_provider()`. The event-loop reference is invariant during an async run; obtain it once with `asyncio.get_running_loop()` (recommended in Python 3.10+) to avoid `DeprecationWarning` and future breakage. Both lenses agreed on this finding. |
| correctness | `context_scribe/main.py` | **Ambiguous evaluator selection** — When `--tools copilot,gemini-cli` is given, the evaluator is auto-detected from `tool_names[0]` only, silently using the copilot evaluator for all interactions, including gemini-cli ones. This is undocumented and may produce suboptimal evaluations for secondary tools. |
| maintainability | `README.md` | **Documentation drift** — The README Usage section documents only `--tool`; the new `--tools` multi-tool option is entirely absent. Users will not discover multi-tool mode from the docs. |
| maintainability | `tests/test_multi_tool.py:72` | **Test fragility** — `test_create_providers_calls_bootstrap` directly mutates the module-level `TOOL_REGISTRY` dict and restores it in a try/finally. A concurrent test run or an unexpected exception before `finally` could corrupt shared state. Prefer `patch.dict(TOOL_REGISTRY, {...})` for safe, isolated mutation. |
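The `patch.dict` pattern recommended in the test-fragility finding can be illustrated with a stand-in registry (the entries below are illustrative, not the project's actual registry contents):

```python
from unittest.mock import patch

# Stand-in registry; per this PR, the real TOOL_REGISTRY maps tool
# names to (provider_class, bootstrap_fn) pairs.
TOOL_REGISTRY = {"gemini-cli": ("GeminiProvider", "bootstrap_gemini")}

def test_registry_mutation_is_isolated():
    # patch.dict restores the original mapping on exit, even if the
    # body raises, so shared module state cannot be corrupted.
    with patch.dict(TOOL_REGISTRY, {"fake-tool": ("FakeProvider", "noop")}):
        assert "fake-tool" in TOOL_REGISTRY
    assert "fake-tool" not in TOOL_REGISTRY

test_registry_mutation_is_isolated()
```

Unlike a manual try/finally, `patch.dict` also composes cleanly when used as a decorator across many tests.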

Info

| Lens(es) | File | Finding |
|---|---|---|
| security | `context_scribe/main.py` | **Resource hygiene** — `_watch_provider` does not close the `provider.watch()` generator on exit, which could leak watchdog `Observer` threads on shutdown. Not a security issue, but worth a follow-up. |
| security | `context_scribe/main.py` | **Input validation (informational)** — Tool name validation correctly uses the static `TOOL_REGISTRY` allowlist, rejecting unknown, empty, and whitespace-only inputs. No injection risk. |
| correctness | `tests/test_multi_tool.py` | **Missing direct unit test** — The `ValueError` guard inside `run_daemon(tools=[])` is exercised only indirectly via the CLI path; a direct call-level test would strengthen coverage. |
| correctness | `tests/test_daemons.py` | **Redundant mocks** — Individual bootstrap function patches (e.g. `bootstrap_global_config`) are still present even though `_create_providers` is fully mocked, making those patches redundant. |
| maintainability | `tests/test_multi_tool.py` | **Test style** — Several test functions nest 4–5 `with patch(...)` context managers; stacked `@patch` decorators or `patch.multiple` would reduce nesting and improve readability. |
| maintainability | `context_scribe/main.py:235` | **Dual-mode signature** — `run_daemon()` accepts both `tool: str` (old) and `tools: Optional[List[str]]` (new). The dual mode is functional but mildly surprising; a future cleanup could unify them. |
| maintainability | `context_scribe/main.py:177` | **Missing return type annotation** — `_create_providers()` lacks a return type annotation (`List[Tuple[str, BaseProvider]]`); adding it would aid IDE navigation and static analysis. |
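The missing call-level test noted above could look roughly like this. `run_daemon` here is a minimal stand-in mirroring only the guard described in this PR (reject `tools=[]` instead of silently falling back); the real coroutine's signature and behavior may differ:

```python
import asyncio

async def run_daemon(tools=None):
    """Stand-in: fail fast on an explicitly empty tools list."""
    if tools is not None and not tools:
        raise ValueError("tools must be a non-empty list")
    # ... the real daemon would create providers and watcher tasks here ...

def test_run_daemon_rejects_empty_tools():
    # Call the coroutine directly, bypassing the CLI layer entirely.
    try:
        asyncio.run(run_daemon(tools=[]))
    except ValueError:
        return True
    return False
```

Testing the guard at this level keeps coverage intact even if the CLI validation in front of it changes.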

CI status

pip-audit passed cleanly with no dependency vulnerabilities. No failing or pending checks reported by the council.


Reviewed automatically by the don-petry PR-review council (security: opus 4.6 · correctness: sonnet 4.6 · maintainability: sonnet 4.6 · synthesis: sonnet 4.6). The marker on line 1 lets the agent detect new commits and re-review. Reply with @don-petry if you need a human.

@don-petry don-petry merged commit f18cdc8 into main Apr 10, 2026
10 checks passed