feat: AnalysisRunner interface + ClaudeNativeRunner + ProviderRunner #246
Defines the abstraction layer between the insights command and LLM backends. Adding a future runner (e.g. CursorNativeRunner) requires only implementing this interface — no changes to the calling code. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
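The runner contract this commit introduces can be sketched roughly as follows. The `AnalysisRunner`, `RunAnalysisParams`, and `RunAnalysisResult` names come from the PR; the individual fields shown inside them are illustrative assumptions, not the actual definitions.

```typescript
// Sketch of the runner abstraction. Interface/type names match the PR;
// the fields inside them are illustrative assumptions.
export interface RunAnalysisParams {
  systemPrompt: string;    // analysis instructions for the model
  conversation: string;    // serialized session transcript
  jsonSchema?: object;     // optional structured-output schema
}

export interface RunAnalysisResult {
  text: string;            // raw model output
  inputTokens: number;     // 0 for the native runner (counted in the session)
  outputTokens: number;
}

export interface AnalysisRunner {
  runAnalysis(params: RunAnalysisParams): Promise<RunAnalysisResult>;
}

// Adding a new backend (e.g. a hypothetical CursorNativeRunner) means
// implementing only this interface -- shown here with a trivial stub.
export class StubRunner implements AnalysisRunner {
  async runAnalysis(params: RunAnalysisParams): Promise<RunAnalysisResult> {
    return {
      text: `analyzed:${params.conversation.length}`,
      inputTokens: 0,
      outputTokens: 0,
    };
  }
}
```

Because callers hold only an `AnalysisRunner`, swapping backends is a construction-time decision, not a call-site change.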
Executes analysis via `claude -p` non-interactive mode. Uses execFileSync (not exec) to prevent shell injection. Temp files cleaned up in finally block. Tokens are 0 — counted as part of the overall Claude Code session. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
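The safety and cleanup pattern this commit describes can be sketched as below. The helper name is hypothetical and the binary/flags are parameterized so the pattern can be exercised without the Claude CLI installed, but the mechanics mirror the commit: argument array via `execFileSync` (no shell, so no injection), prompt in a temp file, stdin for the conversation, cleanup in `finally`.

```typescript
import { execFileSync } from "node:child_process";
import { writeFileSync, rmSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Sketch of the native-runner execution pattern (helper name hypothetical).
export function runViaCli(
  bin: string,
  args: string[],
  systemPrompt: string,
  stdinText: string
): string {
  const dir = mkdtempSync(join(tmpdir(), "analysis-")); // unique dir per call
  const promptFile = join(dir, "system-prompt.txt");
  try {
    writeFileSync(promptFile, systemPrompt, "utf8");
    // execFileSync passes args directly to the binary -- nothing is
    // interpreted by a shell, so untrusted conversation text is safe.
    return execFileSync(bin, [...args, promptFile], {
      input: stdinText,      // conversation piped via stdin
      encoding: "utf8",
    });
  } finally {
    rmSync(dir, { recursive: true, force: true }); // always clean up temp files
  }
}
```

Using `mkdtempSync` gives each invocation its own directory, which also sidesteps concurrent temp-file collisions.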
Delegates analysis to the configured provider (OpenAI, Anthropic, Gemini, Ollama). Inlines provider dispatch in CLI to avoid a circular dependency with the server package (server -> cli). All providers use only Node.js built-in fetch. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
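The four-way dispatch might look roughly like the sketch below. The endpoint URLs are the providers' public HTTP API defaults and the request bodies are minimal; the actual request building in the PR may differ, and the `buildRequest` helper name is an assumption for illustration.

```typescript
// Hypothetical sketch of the inlined provider dispatch. Each branch builds
// a request for the provider's public HTTP API, sendable with built-in fetch.
type Provider = "openai" | "anthropic" | "gemini" | "ollama";

export interface ProviderRequest {
  url: string;
  body: Record<string, unknown>;
}

export function buildRequest(provider: Provider, model: string, prompt: string): ProviderRequest {
  switch (provider) {
    case "openai":
      return {
        url: "https://api.openai.com/v1/chat/completions",
        body: { model, messages: [{ role: "user", content: prompt }] },
      };
    case "anthropic":
      return {
        url: "https://api.anthropic.com/v1/messages",
        body: { model, max_tokens: 4096, messages: [{ role: "user", content: prompt }] },
      };
    case "gemini":
      return {
        url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent`,
        body: { contents: [{ parts: [{ text: prompt }] }] },
      };
    case "ollama":
      return {
        url: "http://localhost:11434/api/chat",
        body: { model, messages: [{ role: "user", content: prompt }], stream: false },
      };
    default: {
      const exhaustive: never = provider; // compile-time exhaustiveness check
      throw new Error(`unknown provider: ${exhaustive}`);
    }
  }
}
```

Because every branch is plain `fetch`-ready JSON, no provider SDK needs to enter the CLI's dependency graph.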
…prompt-quality Flat JSON schemas for claude -p --json-schema structured output. Derived from AnalysisResponse and PromptQualityResponse in prompt-types.ts. Schema sync test validates required properties match TypeScript types. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
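The core of such a sync test is a set comparison between the schema's `required` list and the type's keys. A minimal sketch of that check (function name and field names here are illustrative, not the actual `AnalysisResponse` fields):

```typescript
// Sketch of the schema-sync idea: the JSON schema's `required` list must
// stay in lockstep with the TypeScript interface's keys.
export function diffRequired(
  required: string[],
  typeKeys: string[]
): { missingFromSchema: string[]; extraInSchema: string[] } {
  const req = new Set(required);
  const keys = new Set(typeKeys);
  return {
    // fields declared on the type but not required by the schema
    missingFromSchema: typeKeys.filter((k) => !req.has(k)),
    // fields required by the schema but absent from the type
    extraInSchema: required.filter((k) => !keys.has(k)),
  };
}
```

A test then asserts both lists are empty, so CI fails the moment the schema and the TypeScript type diverge in either direction.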
Adds ./analysis/runner-types, ./analysis/native-runner, ./analysis/provider-runner, and schema exports. Updates build script to copy JSON schema files to dist/. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
TA Synthesis (Phase 2): Runner Interface — Round 1

Review of Domain Specialist Comments

Specialist FIX NOW: Verdict: AGREE — CONFIRMED VIA RUNTIME VERIFICATION. I ran `claude -p --output-format json` and the raw output is a JSON array of typed events:

```json
[
  {"type": "system", "subtype": "init", "session_id": "...", "tools": [...], ...},
  {"type": "assistant", "message": {"content": [{"type": "text", "text": "...actual LLM output..."}]}, ...},
  {"type": "result", "subtype": "success", "result": "...actual LLM output...", "is_error": false, ...}
]
```

The native runner currently returns this entire array as the analysis result. Recommended fix: parse the JSON envelope and extract the result text. Two options were proposed; I recommend Option B. The envelope carries the LLM text in the `result` event's `result` field. Concrete extraction path: find the `result` event and read its `result` field.

Consolidated Review (For Dev Agent)

FIX NOW:
NOT APPLICABLE:
SUGGESTIONS (non-blocking):
Final Verdict

CHANGES REQUIRED — Round 2 needed. The FIX NOW item is a confirmed runtime bug (verified against actual Claude CLI output). The native runner will silently produce the entire event envelope instead of the analysis JSON. Must be fixed before merge.
`claude -p --output-format json` returns a JSON array of typed events. The actual LLM text lives in the `result` event's `result` field, not in the raw output. Extract it via `extractResultFromEnvelope()`, check `is_error`, and throw with a clear message on claude-level failures. Also adds a random suffix to temp file names to prevent concurrent collisions. Tests updated to mock the envelope format throughout. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
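The extraction this commit describes can be sketched as follows; event shapes are reduced to the fields the runner needs (real events carry more), and the actual implementation may differ in detail.

```typescript
// Sketch of extractResultFromEnvelope as described in the fix commit.
// Only the fields the runner needs are modeled here.
interface ResultEvent {
  type: "result";
  subtype: string;
  result?: string;
  is_error?: boolean;
}

export function extractResultFromEnvelope(raw: string): string {
  const events: unknown = JSON.parse(raw);
  if (!Array.isArray(events)) {
    throw new Error("claude -p output was not a JSON event array");
  }
  const result = events.find((e): e is ResultEvent => e?.type === "result");
  if (!result) {
    throw new Error("no result event in claude -p envelope");
  }
  if (result.is_error || typeof result.result !== "string") {
    throw new Error(`claude -p reported a failure (subtype: ${result.subtype})`);
  }
  return result.result; // the actual LLM text, not the envelope
}
```

Checking `is_error` and the `subtype` here is what turns a silent garbage result into a clear claude-level failure message.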
Adds coverage for all four configured providers. Also adds a comment explaining why LLMMessage.content is intentionally narrower (string only) than the server type that allows ContentBlock[]. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Review Addressal

FIX NOW items addressed:
Non-blocking suggestions addressed:
Pre-PR gate: All review items addressed. Ready for re-review or merge.
PM Review Summary — PR #246 Ready for Merge

What Shipped

AnalysisRunner abstraction with two implementations, JSON schemas, and full test coverage:

Review Findings Addressed

Critical fix caught by review: the native runner returned the raw `claude -p` event envelope instead of the analysis JSON (fixed via `extractResultFromEnvelope`).

Non-blocking suggestions addressed: random temp file suffix, Gemini+Ollama dispatch tests (4 new tests), content type comment on `LLMMessage.content`.

Design Note on Record

Gate Status

This PR is ready for founder merge.
Triple-Layer Code Review — Round 2 of 2

Reviewers
Pre-Review Gates
Round 1 Issues & Resolution

🔴 FIX NOW (Round 1)
🟡 SUGGESTIONS (Round 1)
🔵 NOTES
Round 2 Verdict

PASS — Ready for merge. All Round 1 blocking items resolved. All suggestions addressed. No new issues found in Round 2. Envelope parsing is correct with comprehensive error handling. 🤖 Generated with Claude Code
… schema-sync

native-runner: JSON-object-not-array, empty array, error_max_turns subtype
provider-runner: Anthropic system message extraction, missing usage defaults to 0, unknown provider throws
schema-sync: friction_points item required fields, effective_patterns item required fields, findings item required fields

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Coverage Gap Addressal

9 new tests added across 3 files (975 → 984 passing):

native-runner.test.ts (+3):
provider-runner.test.ts (+3):
schema-sync.test.ts (+3):
Pre-PR gate:
What
Implements the runner abstraction layer for Issue #239 (Phase 12, v4.8.0).
Why
The upcoming `insights` CLI command needs to run analysis via two backends (`claude -p` native and a configured LLM provider) without the command code caring which is used. This PR delivers the interface and both implementations.

How

- `runner-types.ts` — `AnalysisRunner` interface with `RunAnalysisParams`/`RunAnalysisResult`. Adding a new runner (e.g. CursorNativeRunner) means implementing this interface only.
- `native-runner.ts` — `ClaudeNativeRunner` using `execFileSync` (not `exec` — shell injection prevention). Writes system prompt and JSON schema to temp files, pipes conversation via stdin, cleans up in `finally`. Tokens = 0 (counted in the Claude Code session).
- `provider-runner.ts` — `ProviderRunner` supporting all four configured providers (OpenAI, Anthropic, Gemini, Ollama). Inlines provider dispatch in CLI to avoid a circular dependency (`@code-insights/server` depends on `@code-insights/cli`). All providers use only Node.js built-in `fetch` — no external SDK dependencies.
- `schemas/session-analysis.json` — Flat JSON schema for `claude -p --json-schema`, derived from `AnalysisResponse` in `prompt-types.ts`.
- `schemas/prompt-quality.json` — Flat JSON schema for `claude -p --json-schema`, derived from `PromptQualityResponse` in `prompt-types.ts`.
- `schemas/__tests__/schema-sync.test.ts` — Validates that JSON schema required properties match TypeScript interface fields. Fails in CI if they diverge.

Schema Impact
Testing
New tests added:

- `native-runner.test.ts` — 10 tests (validate, execFileSync args, cleanup in finally, schema flag, result shape)
- `provider-runner.test.ts` — 9 tests (fromConfig validation, OpenAI/Anthropic dispatch, error handling, jsonSchema ignored)
- `schemas/__tests__/schema-sync.test.ts` — 8 tests (required field coverage for both schemas)

Closes #239
Note on ProviderRunner Design
The plan noted "if circular dep is an issue, document it in the PR." The circular dep was confirmed: `server` depends on `cli`, so `cli` cannot import `@code-insights/server`. Resolution: ProviderRunner inlines the provider dispatch (mirrors `server/src/llm/client.ts`). All providers use only `fetch` with no SDK dependencies, so the inline is 4 small functions totaling ~100 lines. The server LLM client (`server/src/llm/client.ts`) remains the source used by the dashboard API. Issue #240 can evaluate consolidating if this becomes a maintenance concern.