
feat: multi-provider LLM enrichment (GIT-110)#77

Merged
TonyCasey merged 6 commits into main from GIT-110 on Feb 15, 2026

Conversation

@TonyCasey (Owner) commented Feb 15, 2026

Summary

  • Extract BaseLLMClient abstract class with shared enrichment logic (message building, JSON parsing, fact validation)
  • Add ILLMCaller interface and refactor IntentExtractor to be provider-agnostic (removes direct @anthropic-ai/sdk dependency)
  • Add OpenAI (gpt-4o), Gemini (gemini-2.0-flash), and Ollama (llama3.2) provider implementations
  • Expand LLMClientFactory with auto-detection priority: GIT_MEM_LLM_PROVIDER env > Anthropic > OpenAI > Gemini > Ollama
  • Add ILLMConfig to hook config for YAML-based provider/model configuration
  • Add openai and @google/generative-ai as optional peer dependencies; Ollama uses native fetch
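The auto-detection priority above can be sketched as a standalone function. This is a hedged illustration based only on this summary; the actual logic lives in `LLMClientFactory` and may differ in detail:

```typescript
// Hypothetical sketch of the detection priority described in the summary:
// GIT_MEM_LLM_PROVIDER env > Anthropic > OpenAI > Gemini > Ollama.
type LLMProvider = 'anthropic' | 'openai' | 'gemini' | 'ollama';

function detectProvider(env: Record<string, string | undefined>): LLMProvider | null {
  // An explicit override via GIT_MEM_LLM_PROVIDER wins over key-based detection.
  const override = env.GIT_MEM_LLM_PROVIDER;
  if (override === 'anthropic' || override === 'openai' || override === 'gemini' || override === 'ollama') {
    return override;
  }
  if (env.ANTHROPIC_API_KEY) return 'anthropic';
  if (env.OPENAI_API_KEY) return 'openai';
  if (env.GOOGLE_API_KEY || env.GEMINI_API_KEY) return 'gemini';
  if (env.OLLAMA_HOST) return 'ollama';
  return null; // nothing configured: enrichment falls back to heuristics
}
```

Only one provider needs to be configured; when none is, the hook degrades to heuristic extraction with a warning.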

Test plan

  • TypeScript compiles (npm run type-check)
  • ESLint passes (npm run lint)
  • 527 unit tests pass (npm run test:unit)
  • 91 integration tests pass (npm run test:integration)
  • Build succeeds (npm run build)
  • Manual: set OPENAI_API_KEY and run commit with enrich: true
  • Manual: set OLLAMA_HOST and run commit with enrich: true

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Multi-provider LLM support (OpenAI, Gemini, Ollama, Anthropic) with auto-detection and explicit override; provider-agnostic LLM foundation and unified completion/enrichment interfaces.
  • Configuration

    • Added env keys (OPENAI_API_KEY, GOOGLE/GEMINI_API_KEY, OLLAMA_HOST) and example YAML; only one provider needed; falls back to heuristic extraction with a warning.
  • Documentation

    • Expanded env and README guidance, provider examples, and hosting notes.
  • Chores

    • Added optional peerDependencies for AI SDKs.
  • Tests

    • New/updated unit tests covering LLM config, clients, and extraction.

Extract BaseLLMClient abstract class with shared logic (message building,
response parsing, fact validation, enrichCommit orchestration). Add OpenAI,
Gemini, and Ollama provider implementations alongside existing Anthropic.

- Add ILLMCaller interface for provider-agnostic simple completions
- Refactor IntentExtractor to use ILLMCaller instead of Anthropic SDK directly
- Expand LLMClientFactory with auto-detection priority chain
- Add ILLMConfig to hook config for YAML-based provider configuration
- Wire config-derived LLM options through DI container
- Add openai and @google/generative-ai as optional peer dependencies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
Copilot AI review requested due to automatic review settings February 15, 2026 22:18


coderabbitai bot commented Feb 15, 2026

Important

Review skipped

This PR was authored by the user configured for CodeRabbit reviews. By default, CodeRabbit skips reviewing PRs authored by this user. It's recommended to use a dedicated user account to post CodeRabbit review feedback.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Adds multi-provider LLM support: new domain LLM types and caller interface, a BaseLLMClient with shared enrichment/complete logic, provider-specific clients (Anthropic, OpenAI, Gemini, Ollama), config parsing and DI wiring for optional llm config, and refactors IntentExtractor to use the provider-agnostic ILLMCaller.

Changes

| Cohort | File(s) | Summary |
|---|---|---|
| Configuration & Docs | `.env.example`, `CLAUDE.md` | Expanded env examples/docs to list multiple provider keys (OPENAI_API_KEY, GOOGLE_API_KEY/GEMINI_API_KEY, OLLAMA_HOST), note provider autodetection and a .git-mem LLM YAML block. |
| Package metadata | `package.json` | Added optional peerDependencies and peerDependenciesMeta for external AI SDKs. |
| Domain interfaces | `src/domain/interfaces/IHookConfig.ts`, `src/domain/interfaces/ILLMClient.ts`, `src/domain/interfaces/ILLMCaller.ts` | Added LLMProvider, ILLMConfig, ILLMCaller and options; ILLMClient now extends ILLMCaller. |
| Config parsing & tests | `src/hooks/utils/config.ts`, `tests/unit/hooks/utils/config.test.ts` | Parse optional llm section, validate provider via VALID_PROVIDERS/isValidProvider, attach llm only when valid; added tests for full/partial/invalid configs. |
| DI types & container | `src/infrastructure/di/types.ts`, `src/infrastructure/di/container.ts` | IContainerOptions gains llm?: ILLMConfig; container creates llmClient with options?.llm, registers the LLM client, and injects ILLMCaller into IntentExtractor (caller-based initialization). |
| Base LLM foundation & tests | `src/infrastructure/llm/BaseLLMClient.ts`, `tests/unit/infrastructure/llm/BaseLLMClient.test.ts` | New abstract BaseLLMClient with IAPICallResult, ENRICHMENT_SYSTEM_PROMPT, buildUserMessage, parseResponse/validateFact, enrichCommit and complete; comprehensive tests via a TestLLMClient subclass. |
| Provider clients & tests | `src/infrastructure/llm/AnthropicLLMClient.ts`, `src/infrastructure/llm/OpenAILLMClient.ts`, `src/infrastructure/llm/GeminiLLMClient.ts`, `src/infrastructure/llm/OllamaLLMClient.ts`, `tests/unit/infrastructure/llm/*` | Anthropic refactored to extend BaseLLMClient; new OpenAI, Gemini, Ollama clients added (dynamic SDK loading, API key/env handling, callAPI implementations) with constructor/behavior tests. |
| LLM factory & detection | `src/infrastructure/llm/LLMClientFactory.ts`, `tests/unit/infrastructure/llm/LLMClientFactory.test.ts` | Factory now supports multiple providers, baseUrl, dynamic SDK loading, and `detectProvider(): LLMProvider` auto-detection. |
| Intent extraction & tests | `src/infrastructure/llm/IntentExtractor.ts`, `tests/unit/infrastructure/llm/IntentExtractor.test.ts` | IntentExtractor now requires caller: ILLMCaller, uses caller.complete(...) with a timeout via Promise.race, and an updated system prompt; tests use mock callers for success, SKIP, errors, and timeouts. |
| Tests adjusted / removed | `tests/unit/infrastructure/llm/AnthropicLLMClient.test.ts`, other llm tests | Simplified Anthropic tests to assert inheritance/complete presence; many LLM-related tests reorganized/added to cover new BaseLLMClient and provider clients. |
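The Promise.race timeout described for IntentExtractor can be sketched roughly as follows. The `ILLMCaller` shape and the heuristic fallback are assumptions drawn from this walkthrough, not the project's exact code:

```typescript
// Minimal sketch of a caller interface plus a timeout guard via Promise.race.
interface ILLMCaller {
  complete(systemPrompt: string, userMessage: string, maxTokens?: number): Promise<string>;
}

async function extractWithTimeout(
  caller: ILLMCaller,
  message: string,
  timeoutMs: number,
): Promise<string | null> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('LLM call timed out')), timeoutMs);
  });
  try {
    // First settled promise wins, so a hung provider cannot block the git hook.
    return await Promise.race([caller.complete('Extract intent.', message, 100), timeout]);
  } catch {
    return null; // on timeout or provider error, fall back to heuristic extraction
  } finally {
    clearTimeout(timer); // avoid a stray timer keeping the process alive
  }
}
```

Because extraction runs inside a commit hook, bounding the wait and degrading to `null` keeps commits fast even when a provider stalls.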

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client/Hook
    participant Config as Config Parser
    participant Container as DI Container
    participant Factory as LLMClientFactory
    participant Provider as Provider Client (ILLMCaller)
    participant Extractor as IntentExtractor

    Client->>Config: Load hook config (optional llm)
    Config-->>Container: Pass ILLMConfig
    Container->>Factory: createLLMClient(options?.llm)
    Factory->>Factory: detectProvider() (env/override)
    Factory->>Provider: Load/init provider client (runtime)
    Provider-->>Factory: ILLMCaller instance or null
    Factory-->>Container: Return caller
    Container->>Extractor: Inject caller
    Client->>Extractor: extract(message)
    Extractor->>Provider: caller.complete(systemPrompt, userMessage, maxTokens)
    Provider-->>Extractor: text response
    Extractor-->>Client: extracted keywords / skip
```
```mermaid
sequenceDiagram
    participant Caller as ILLMCaller
    participant Base as BaseLLMClient
    participant SDK as Provider SDK

    Caller->>Base: complete({system, userMessage, maxTokens})
    Base->>SDK: callAPI(system, userMessage, maxTokens)
    SDK-->>Base: IAPICallResult {text, inputTokens, outputTokens}
    Base-->>Caller: Return text
    Note over Base,Caller: enrichCommit uses ENRICHMENT_SYSTEM_PROMPT → parseResponse → validateFact → return facts + usage
```
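The parseResponse → validateFact step noted in the diagram can be illustrated with a minimal sketch. The fence-stripping heuristic and the fact shape here are assumptions for illustration, not the project's exact code:

```typescript
// Hypothetical fact shape; the real project likely validates more fields.
interface IFact {
  type: string;
  summary: string;
}

function parseResponse(raw: string): unknown {
  // Providers sometimes wrap JSON output in markdown code fences; strip them first.
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, '')
    .replace(/```\s*$/, '')
    .trim();
  return JSON.parse(cleaned);
}

// Type guard rejecting malformed facts before they reach commit trailers.
function validateFact(candidate: unknown): candidate is IFact {
  return (
    typeof candidate === 'object' &&
    candidate !== null &&
    typeof (candidate as IFact).type === 'string' &&
    typeof (candidate as IFact).summary === 'string'
  );
}
```

Centralizing this in `BaseLLMClient` is what lets every provider share one parsing and validation path.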

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


🚥 Pre-merge checks | ✅ 2 | ❌ 2
❌ Failed checks (2 warnings)
- Docstring Coverage — ⚠️ Warning: docstring coverage is 20.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
- Merge Conflict Detection — ⚠️ Warning: ❌ Merge conflicts detected (21 files):

⚔️ .env.example (content)
⚔️ CLAUDE.md (content)
⚔️ package-lock.json (content)
⚔️ package.json (content)
⚔️ src/commands/init-mcp.ts (content)
⚔️ src/commands/init.ts (content)
⚔️ src/domain/interfaces/IHookConfig.ts (content)
⚔️ src/domain/interfaces/ILLMClient.ts (content)
⚔️ src/hooks/utils/config.ts (content)
⚔️ src/infrastructure/di/container.ts (content)
⚔️ src/infrastructure/di/types.ts (content)
⚔️ src/infrastructure/llm/AnthropicLLMClient.ts (content)
⚔️ src/infrastructure/llm/IntentExtractor.ts (content)
⚔️ src/infrastructure/llm/LLMClientFactory.ts (content)
⚔️ tests/integration/sync.test.ts (content)
⚔️ tests/unit/commands/init.test.ts (content)
⚔️ tests/unit/hooks/utils/config.test.ts (content)
⚔️ tests/unit/infrastructure/git/GitClient.test.ts (content)
⚔️ tests/unit/infrastructure/llm/AnthropicLLMClient.test.ts (content)
⚔️ tests/unit/infrastructure/llm/IntentExtractor.test.ts (content)
⚔️ tests/unit/infrastructure/llm/LLMClientFactory.test.ts (content)

These conflicts must be resolved before merging into main.
Resolve conflicts locally and push changes to this branch.
✅ Passed checks (2 passed)

- Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
- Title check — ✅ Passed: the PR title clearly and concisely summarizes the main feature being added: multi-provider LLM enrichment support.



Copilot AI left a comment


Pull request overview

This PR adds multi-provider LLM support to git-mem, expanding from Anthropic-only to support OpenAI (GPT), Google Gemini, and Ollama. The refactoring extracts common enrichment logic into a BaseLLMClient abstract class, introduces an ILLMCaller interface for provider-agnostic LLM calls, and updates IntentExtractor to use the new interface instead of coupling directly to Anthropic's SDK. The factory auto-detects providers from environment variables with configurable priority, and adds optional YAML-based configuration via .git-mem/.git-mem.yaml.

Changes:

  • Extract BaseLLMClient with shared message building, JSON parsing, and fact validation logic
  • Add ILLMCaller interface and refactor IntentExtractor to be provider-agnostic
  • Implement OpenAI, Gemini, and Ollama provider clients extending BaseLLMClient
  • Expand LLMClientFactory with auto-detection priority and add ILLMConfig for YAML-based provider configuration
  • Add openai and @google/generative-ai as optional peer dependencies; Ollama uses native fetch

Reviewed changes

Copilot reviewed 23 out of 24 changed files in this pull request and generated 3 comments.

Summary per file:

| File | Description |
|---|---|
| `src/infrastructure/llm/BaseLLMClient.ts` | New abstract base class providing shared enrichment logic for all LLM providers |
| `src/infrastructure/llm/AnthropicLLMClient.ts` | Refactored to extend BaseLLMClient, removing duplicate logic |
| `src/infrastructure/llm/OpenAILLMClient.ts` | New OpenAI (GPT) provider implementation |
| `src/infrastructure/llm/GeminiLLMClient.ts` | New Google Gemini provider implementation |
| `src/infrastructure/llm/OllamaLLMClient.ts` | New Ollama (local) provider implementation using native fetch |
| `src/infrastructure/llm/LLMClientFactory.ts` | Expanded to support multi-provider detection and creation with priority-based auto-detection |
| `src/infrastructure/llm/IntentExtractor.ts` | Refactored to use ILLMCaller interface instead of direct Anthropic SDK dependency |
| `src/domain/interfaces/ILLMCaller.ts` | New interface for simple LLM text completions |
| `src/domain/interfaces/IHookConfig.ts` | Added ILLMConfig and LLMProvider types for YAML configuration |
| `src/hooks/utils/config.ts` | Added parsing logic for the llm configuration section |
| `src/infrastructure/di/container.ts` | Updated to pass llm config to createLLMClient and wire IntentExtractor with ILLMCaller |
| `src/infrastructure/di/types.ts` | Added llm field to IContainerOptions |
| `tests/unit/infrastructure/llm/BaseLLMClient.test.ts` | Comprehensive tests for shared base class logic |
| `tests/unit/infrastructure/llm/AnthropicLLMClient.test.ts` | Reduced to test only Anthropic-specific behavior |
| `tests/unit/infrastructure/llm/OpenAILLMClient.test.ts` | Unit tests for OpenAI client |
| `tests/unit/infrastructure/llm/GeminiLLMClient.test.ts` | Unit tests for Gemini client |
| `tests/unit/infrastructure/llm/OllamaLLMClient.test.ts` | Unit tests for Ollama client |
| `tests/unit/infrastructure/llm/LLMClientFactory.test.ts` | Expanded tests for multi-provider detection and creation |
| `tests/unit/infrastructure/llm/IntentExtractor.test.ts` | Refactored to use mock ILLMCaller instead of real LLM calls |
| `tests/unit/hooks/utils/config.test.ts` | Added tests for llm configuration parsing |
| `package.json` | Added openai and @google/generative-ai as optional peer dependencies |
| `package-lock.json` | Updated version to 0.4.0 |
| `CLAUDE.md` | Updated documentation with multi-provider setup instructions |
| `.env.example` | Expanded with examples for all supported providers |


Comment thread src/infrastructure/llm/BaseLLMClient.ts Outdated
Comment thread src/domain/interfaces/IHookConfig.ts Outdated
Comment thread src/infrastructure/di/container.ts Outdated

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🤖 Fix all issues with AI agents
In `@CLAUDE.md`:
- Around line 76-83: Add a blank line immediately before the fenced code block
that starts with "```yaml" in CLAUDE.md so the code block is separated from the
preceding paragraph (to satisfy markdownlint MD031); locate the block containing
"llm:" and ensure there is an empty line above the opening ```yaml fence so the
fenced code block is properly surrounded by blank lines.

In `@src/hooks/utils/config.ts`:
- Around line 128-132: Export VALID_PROVIDERS (and optionally isValidProvider)
so other modules can reuse the canonical list and validator; update the
declaration of VALID_PROVIDERS to be exported (export const VALID_PROVIDERS) and
export the isValidProvider function (export function isValidProvider) or move
both to the domain layer alongside the LLMProvider type if preferred, ensuring
imports elsewhere (e.g., LLMClientFactory, CLI validation) reference the
exported symbols VALID_PROVIDERS and isValidProvider.
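One shape the suggested export could take, using the provider list from this PR (a sketch based on the comment, not the merged code):

```typescript
// Canonical provider list; `as const` keeps the literal types for LLMProvider.
export const VALID_PROVIDERS = ['anthropic', 'openai', 'gemini', 'ollama'] as const;
export type LLMProvider = (typeof VALID_PROVIDERS)[number];

// Type guard so config parsing, the factory, and CLI validation share one check.
export function isValidProvider(value: unknown): value is LLMProvider {
  return typeof value === 'string' && (VALID_PROVIDERS as readonly string[]).includes(value);
}
```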

In `@src/infrastructure/di/container.ts`:
- Around line 116-127: The DI registration for intentExtractor is using an
unsafe double-cast for caller (client as unknown as ILLMCaller); change the
types so no cast is required: update the llm client type exposed on the
container (container.cradle.llmClient) to satisfy ILLMCaller (either by having
the concrete ILLMClient implement/extend ILLMCaller or by typing llmClient as
ILLMCaller), then instantiate new IntentExtractor({ caller:
container.cradle.llmClient, logger: container.cradle.logger }) without casting;
ensure interface names referenced are ILLMCaller, any existing ILLMClient
implementation, and the IntentExtractor constructor signature remain consistent.

In `@src/infrastructure/llm/BaseLLMClient.ts`:
- Around line 4-9: Update the class header comment in BaseLLMClient to remove
the reference to the non-existent callComplete() and state that concrete
providers only need to implement callAPI(); reference the class name
BaseLLMClient and the method callAPI() so the documentation matches the actual
abstract API surface.
- Around line 70-105: The complete method in BaseLLMClient ignores
options.maxTokens so callers can't limit output; update the call flow to accept
and propagate maxTokens: change the abstract callAPI signature to include an
optional maxTokens parameter (e.g., callAPI(systemPrompt: string, userMessage:
string, maxTokens?: number): Promise<IAPICallResult>), update
BaseLLMClient.complete to pass options.maxTokens into callAPI, and update all
provider implementations (Anthropic/OpenAI/Gemini/Ollama) to accept and honor
the optional maxTokens when making their API requests; ensure any type/interface
definitions (ILLMCallerOptions, provider call signatures) are updated
accordingly.
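The suggested fix could look roughly like this; the option and result shapes are inferred from the comment, not copied from the repository:

```typescript
// Assumed shapes for the caller options and the provider call result.
interface ICompleteOptions {
  system: string;
  userMessage: string;
  maxTokens?: number;
}
interface IAPICallResult {
  text: string;
  inputTokens: number;
  outputTokens: number;
}

abstract class BaseLLMClient {
  protected abstract callAPI(
    systemPrompt: string,
    userMessage: string,
    maxTokens?: number,
  ): Promise<IAPICallResult>;

  async complete(options: ICompleteOptions): Promise<string> {
    // Forward the caller's limit instead of silently dropping it.
    const { text } = await this.callAPI(options.system, options.userMessage, options.maxTokens);
    return text;
  }
}

// Test double capturing what actually reached the provider layer.
class RecordingClient extends BaseLLMClient {
  lastMaxTokens?: number;
  protected async callAPI(_s: string, u: string, maxTokens?: number): Promise<IAPICallResult> {
    this.lastMaxTokens = maxTokens;
    return { text: u, inputTokens: 0, outputTokens: 0 };
  }
}
```

With this wiring, a caller such as IntentExtractor that asks for `maxTokens: 100` actually gets a bounded response.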

In `@src/infrastructure/llm/GeminiLLMClient.ts`:
- Around line 41-49: The code is triggering ESLint rules for an explicit any and
using require(); update the optional dependency loading in GeminiLLMClient by
adding the same inline ESLint directives used in OpenAILLMClient: annotate the
GoogleGenerativeAI variable to avoid the no-explicit-any rule and wrap the
require call with a no-require-imports eslint-disable-next-line comment (and
re-enable if needed), then keep the existing try/catch that throws LLMError if
the module is absent; reference the GoogleGenerativeAI variable and the
require('@google/generative-ai') usage to locate where to add these directives.

In `@src/infrastructure/llm/LLMClientFactory.ts`:
- Around line 38-83: ESLint flags the synchronous require() calls inside
LLMClientFactory (the switch cases for 'anthropic', 'openai', 'gemini',
'ollama') as forbidden; either convert the factory to an async function and
replace each require('./XLLMClient') with await import('./XLLMClient') and then
construct the client (e.g., import { OpenAILLMClient } via await import and
return new OpenAILLMClient(...)), or keep the synchronous API and add targeted
eslint-disable-next-line comments immediately above each require with a brief
justification that these are optional peer-dependency lazy-loads; update callers
if you choose the async approach so they await LLMClientFactory.create(...) (or
equivalent factory function).
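The async-import alternative suggested above might look like the following sketch; the module paths and export discovery are hypothetical, for illustration only:

```typescript
// Hypothetical async factory; the real project may keep the synchronous require().
type LLMProvider = 'openai' | 'ollama';

interface ILLMCaller {
  complete(systemPrompt: string, userMessage: string, maxTokens?: number): Promise<string>;
}

const MODULE_PATHS: Record<LLMProvider, string> = {
  openai: './OpenAILLMClient',
  ollama: './OllamaLLMClient',
};

async function createCaller(provider: LLMProvider): Promise<ILLMCaller | null> {
  try {
    // Dynamic import keeps each SDK an optional peer dependency.
    const mod = (await import(MODULE_PATHS[provider])) as Record<string, new () => ILLMCaller>;
    const Ctor = Object.values(mod)[0];
    return Ctor ? new Ctor() : null;
  } catch {
    return null; // SDK (or client module) not installed
  }
}
```

The trade-off: callers must `await` the factory, which ripples through DI wiring; the PR instead kept the synchronous API with targeted eslint-disable comments.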

In `@src/infrastructure/llm/OpenAILLMClient.ts`:
- Around line 44-52: The code in OpenAILLMClient.ts uses a require() and an
any-typed variable which trigger ESLint rules; update the optional dependency
loading block that sets OpenAI (and uses mod) to include inline ESLint disables
for `@typescript-eslint/no-explicit-any` and
`@typescript-eslint/no-require-imports`, and move the orphaned comment from lines
~13-14 into this usage site above the try/catch; ensure the thrown LLMError
remains unchanged and reference the OpenAI variable and mod/default resolution
(mod.default ?? mod) so the linter exceptions only apply to this optional-import
pattern.

In `@tests/unit/infrastructure/llm/OpenAILLMClient.test.ts`:
- Around line 8-26: The finally block in the OpenAILLMClient.test.ts case
doesn't correctly restore the environment: change the cleanup so that after the
test the OPENAI_API_KEY is restored to its original value when saved !==
undefined, and explicitly deleted (delete process.env.OPENAI_API_KEY) when saved
is undefined; update the finally block around the new OpenAILLMClient()
assertion to mirror the Gemini test pattern and use the saved variable to decide
between restoring the value or removing the env var.
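The save/restore discipline the comment asks for can be captured in a small helper; `withEnv` is a hypothetical name, not part of the codebase:

```typescript
// Run fn with an env var temporarily set (or unset), then restore the original state.
function withEnv<T>(key: string, value: string | undefined, fn: () => T): T {
  const saved = process.env[key];
  if (value === undefined) delete process.env[key];
  else process.env[key] = value;
  try {
    return fn();
  } finally {
    // Restore the original value, or delete the key entirely if it was unset before.
    if (saved !== undefined) process.env[key] = saved;
    else delete process.env[key];
  }
}
```

The key point the review makes is the `else delete` branch: assigning `undefined` would leave the key present with the string "undefined" in some setups, so an unset var must be deleted, not reassigned.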

Comment thread CLAUDE.md
Comment thread src/hooks/utils/config.ts
Comment thread src/infrastructure/di/container.ts
Comment thread src/infrastructure/llm/BaseLLMClient.ts
Comment thread src/infrastructure/llm/BaseLLMClient.ts
Comment thread src/infrastructure/llm/GeminiLLMClient.ts Outdated
Comment thread src/infrastructure/llm/LLMClientFactory.ts Outdated
Comment thread src/infrastructure/llm/OpenAILLMClient.ts Outdated
Comment thread tests/unit/infrastructure/llm/OpenAILLMClient.test.ts
TonyCasey and others added 2 commits February 15, 2026 22:26
- Fix BaseLLMClient JSDoc: callComplete() → callAPI() with correct description
- Improve ICommitMsgConfig.enrich JSDoc with all provider env var names
- Make ILLMClient extend ILLMCaller, removing the `as unknown as` cast in DI container

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
…up (GIT-110)

- Pass maxTokens through callAPI() to all providers so ILLMCaller.complete()
  respects the caller's token limit (e.g., IntentExtractor uses 100)
- Add eslint-disable comments for dynamic require() of optional peer deps
- Fix env var restoration in OpenAI test cleanup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
Copilot AI review requested due to automatic review settings February 15, 2026 22:31

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@src/infrastructure/llm/AnthropicLLMClient.ts`:
- Around line 38-60: The AnthropicLLMClient.callAPI implementation lacks
consistent SDK error wrapping; update the callAPI method in the
AnthropicLLMClient class to wrap the SDK call in a try/catch, catch any thrown
error from this.client.messages.create or subsequent processing, and rethrow a
new LLMError (including context like model, systemPrompt/userMessage summary,
and the original error) so that direct calls to complete() receive the same
LLMError shape as OpenAILLMClient and GeminiLLMClient.
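The requested normalization can be factored into a small wrapper; `callWrapped` is illustrative, and `LLMError` stands in for the project's error class:

```typescript
// Stand-in for the project's LLMError.
class LLMError extends Error {}

// Wrap a provider SDK call so failures always surface as LLMError.
async function callWrapped<T>(provider: string, call: () => Promise<T>): Promise<T> {
  try {
    return await call();
  } catch (error) {
    if (error instanceof LLMError) throw error; // already normalized
    const message = error instanceof Error ? error.message : String(error);
    // Rethrow in a consistent shape so complete() sees the same error type per provider.
    throw new LLMError(`${provider} API call failed: ${message}`);
  }
}
```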

In `@src/infrastructure/llm/LLMClientFactory.ts`:
- Around line 35-44: The Anthropic branch in LLMClientFactory uses a dynamic
require for AnthropicLLMClient but lacks the try/catch wrapper other providers
use; wrap the require and instantiation in a try/catch around the block that
loads './AnthropicLLMClient' and constructs new AnthropicLLMClient({ apiKey,
model: options?.model }), and on any thrown error return null (and optionally
log the error using the same logger/pattern used by other cases) so loading
failures behave consistently with the other providers.

In `@src/infrastructure/llm/OllamaLLMClient.ts`:
- Around line 91-106: The catch block in OllamaLLMClient currently wraps all
errors into a generic LLMError; modify it to detect request timeouts by checking
for an AbortError (e.g. error.name === 'AbortError' or error instanceof
DOMException) coming from AbortController.abort(), and throw a specific LLMError
like "Ollama API request timed out" (include timeout duration if available)
instead of the generic message; keep existing ECONNREFUSED handling and ensure
the new timeout branch still preserves the original error as the cause when
constructing the LLMError.
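A sketch of the branching the comment describes; the exact message strings and the `LLMError` class are assumptions standing in for the project's own:

```typescript
// Stand-in for the project's LLMError.
class LLMError extends Error {}

// Map a raw fetch failure to a specific LLMError.
function toOllamaError(error: unknown, timeoutMs: number): LLMError {
  // fetch aborted via AbortController.abort() surfaces as an error named AbortError.
  if (error instanceof Error && error.name === 'AbortError') {
    return new LLMError(`Ollama API request timed out after ${timeoutMs}ms`);
  }
  // Preserve the existing connection-refused handling.
  if (error instanceof Error && error.message.includes('ECONNREFUSED')) {
    return new LLMError('Ollama server not reachable. Is Ollama running?');
  }
  const message = error instanceof Error ? error.message : String(error);
  return new LLMError(`Ollama API call failed: ${message}`);
}
```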

Comment thread src/infrastructure/llm/AnthropicLLMClient.ts
Comment thread src/infrastructure/llm/LLMClientFactory.ts
Comment thread src/infrastructure/llm/OllamaLLMClient.ts

Copilot AI left a comment


Pull request overview

Copilot reviewed 24 out of 25 changed files in this pull request and generated 9 comments.



Comment thread src/infrastructure/di/types.ts
Comment thread src/infrastructure/llm/LLMClientFactory.ts
Comment thread src/infrastructure/llm/LLMClientFactory.ts
Comment thread src/infrastructure/di/container.ts
Comment thread tests/unit/infrastructure/llm/LLMClientFactory.test.ts Outdated
Comment thread src/infrastructure/llm/LLMClientFactory.ts
Comment thread src/infrastructure/llm/OpenAILLMClient.ts Outdated
Comment thread src/domain/interfaces/IHookConfig.ts Outdated
Comment thread src/hooks/utils/config.ts
… (GIT-110)

- Remove stale eslint-disable comment at top of OpenAILLMClient.ts
- Use top-level import type for ILLMConfig in types.ts
- Add try/catch to anthropic case in factory for consistency
- Mark intentModel as reserved (not yet wired)
- Fix CLAUDE.md markdown: blank line before fenced code block
- Clarify test comments about optional SDK loading

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@src/infrastructure/llm/OpenAILLMClient.ts`:
- Around line 42-47: The local variable OpenAI is typed as `any`; change this to
a minimal structural type and use `unknown` when loading the optional SDK:
declare a minimal module shape (e.g. a type alias for the OpenAI
constructor/namespace you need) and type the variable as that shape | undefined,
then cast the result of require('openai') from unknown to that type (use
mod.default ?? mod as the typed module) instead of using `any`. Update the
symbols OpenAI and the require('openai') loading block in OpenAILLMClient.ts to
use the new minimal type and unknown cast.

Comment thread src/infrastructure/llm/OpenAILLMClient.ts Outdated
- OllamaLLMClient: detect AbortError specifically for timeout messages
- LLMClientFactory.test.ts: clarify test names and comments for SDK behavior

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
Copilot AI review requested due to automatic review settings February 15, 2026 23:03
…110)

- OpenAILLMClient: typed OpenAIConstructor replaces any
- GeminiLLMClient: typed GenAIConstructor/GeminiModel replaces any
- Both use unknown + cast instead of eslint-disable for no-explicit-any

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
AI-Decision: When dynamically loading optional SDKs, use structural typing instead of 'any' to maintain type safety while allowing for optional dependencies.
AI-Confidence: verified
AI-Tags: typescript, type-safety, optional-dependencies, sdk-loading, eslint, type-casting, code-quality, api-design, dependency-management, structural-typing, dynamic-imports, require
AI-Lifecycle: project
AI-Memory-Id: ee99d9b6
AI-Source: llm-enrichment

Copilot AI left a comment


Pull request overview

Copilot reviewed 24 out of 25 changed files in this pull request and generated no new comments.



@TonyCasey TonyCasey merged commit 9ce6735 into main Feb 15, 2026
3 checks passed
@TonyCasey TonyCasey deleted the GIT-110 branch February 15, 2026 23:27

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@src/infrastructure/llm/GeminiLLMClient.ts`:
- Around line 67-92: Move the GoogleGenerativeAI instantiation and the
getGenerativeModel call into the existing try block so any errors during SDK
construction or model creation are caught and rethrown as LLMError; specifically
wrap creation of new GoogleGenerativeAI(this.apiKey) and
genAI.getGenerativeModel({ model: this.model, systemInstruction: systemPrompt })
inside the try before calling model.generateContent, and keep the existing catch
that converts non-LLMError exceptions into new LLMError(`Gemini API call failed:
${message}`) so callAPI()/complete() always receive an LLMError.

In `@src/infrastructure/llm/OpenAILLMClient.ts`:
- Around line 42-65: The OpenAI SDK resolution and try/catch should be moved out
of per-call execution and cached on the OpenAILLMClient instance: add a private
field (e.g., this.OpenAI?: OpenAIConstructor) and initialize it either in the
OpenAILLMClient constructor or via a lazy initializer used by callAPI; have the
initializer perform the require('openai') logic and throw LLMError if missing,
and then have callAPI use the cached this.OpenAI to construct the client,
avoiding repeated try/catch and repeated resolution on every call.

Comment on lines +67 to +92
```typescript
    const genAI = new GoogleGenerativeAI(this.apiKey);
    const model = genAI.getGenerativeModel({
      model: this.model,
      systemInstruction: systemPrompt,
    });

    try {
      const result = await model.generateContent({
        contents: [{ role: 'user', parts: [{ text: userMessage }] }],
        generationConfig: { maxOutputTokens: maxTokens ?? this.maxTokens },
      });

      const response = result.response;
      const text = response.text() ?? '';
      const usage = response.usageMetadata;

      return {
        text,
        inputTokens: usage?.promptTokenCount ?? 0,
        outputTokens: usage?.candidatesTokenCount ?? 0,
      };
    } catch (error) {
      if (error instanceof LLMError) throw error;
      const message = error instanceof Error ? error.message : String(error);
      throw new LLMError(`Gemini API call failed: ${message}`);
    }
```

⚠️ Potential issue | 🟠 Major



Move SDK instantiation into the try-catch block to ensure callAPI always throws LLMError.

Lines 67-71 create the GoogleGenerativeAI client and model outside the error handling block. If instantiation fails (e.g., invalid API key), the error propagates unwrapped to callAPI() callers like complete(), breaking the error-shape contract. These calls must be guarded by the try-catch on line 73.

🔧 Proposed fix

```diff
-    const genAI = new GoogleGenerativeAI(this.apiKey);
-    const model = genAI.getGenerativeModel({
-      model: this.model,
-      systemInstruction: systemPrompt,
-    });
-
-    try {
-      const result = await model.generateContent({
+    try {
+      const genAI = new GoogleGenerativeAI(this.apiKey);
+      const model = genAI.getGenerativeModel({
+        model: this.model,
+        systemInstruction: systemPrompt,
+      });
+      const result = await model.generateContent({
         contents: [{ role: 'user', parts: [{ text: userMessage }] }],
         generationConfig: { maxOutputTokens: maxTokens ?? this.maxTokens },
       });
```

Comment on lines +42 to +65
```typescript
    type OpenAIConstructor = new (opts: { apiKey: string }) => {
      chat: {
        completions: {
          create: (params: {
            model: string;
            max_tokens: number;
            messages: Array<{ role: 'system' | 'user'; content: string }>;
          }) => Promise<{
            choices: Array<{ message?: { content?: string } }>;
            usage?: { prompt_tokens?: number; completion_tokens?: number };
          }>;
        };
      };
    };
    let OpenAI: OpenAIConstructor;
    try {
      // eslint-disable-next-line @typescript-eslint/no-require-imports
      const mod: unknown = require('openai');
      OpenAI = ((mod as { default?: OpenAIConstructor }).default ?? mod) as OpenAIConstructor;
    } catch {
      throw new LLMError(
        'OpenAI SDK not installed. Run: npm install openai',
      );
    }
```

🧹 Nitpick | 🔵 Trivial

Consider caching the SDK constructor at instance level.

The SDK is loaded inside callAPI, causing the try/catch to execute on every call. While Node.js caches require() results, moving the SDK resolution to constructor or a lazy-init pattern would avoid the repeated overhead.

This is a minor optimization and not blocking.

♻️ Optional refactor to cache SDK at instance level

```diff
 export class OpenAILLMClient extends BaseLLMClient {
   private readonly apiKey: string;
+  private readonly OpenAI: OpenAIConstructor;

   constructor(options?: IOpenAILLMClientOptions) {
     const apiKey = options?.apiKey || process.env.OPENAI_API_KEY;
     if (!apiKey) {
       throw new LLMError('OpenAI API key required. Set OPENAI_API_KEY or pass apiKey option.');
     }

+    // Load SDK at construction time
+    type OpenAIConstructor = new (opts: { apiKey: string }) => { /* ... */ };
+    let OpenAI: OpenAIConstructor;
+    try {
+      // eslint-disable-next-line @typescript-eslint/no-require-imports
+      const mod: unknown = require('openai');
+      OpenAI = ((mod as { default?: OpenAIConstructor }).default ?? mod) as OpenAIConstructor;
+    } catch {
+      throw new LLMError('OpenAI SDK not installed. Run: npm install openai');
+    }

     super(options?.model || DEFAULT_MODEL, options?.maxTokens || DEFAULT_MAX_TOKENS);
     this.apiKey = apiKey;
+    this.OpenAI = OpenAI;
   }
```

@TonyCasey TonyCasey restored the GIT-110 branch February 16, 2026 00:15