feat: enhance PromptSubmitHandler with intent extraction (GIT-99) #67

Merged — TonyCasey merged 2 commits into main from git-99 on Feb 15, 2026
Conversation

TonyCasey (Owner) commented Feb 15, 2026

Summary

This is the main feature PR that enables smarter memory retrieval for the prompt-submit hook. Instead of dumping all memories, the handler now:

  1. Extracts searchable keywords from the prompt using Claude Haiku
  2. Queries memories (both git notes AND commit trailers) matching those keywords
  3. Falls back to recent memories when extraction is skipped (short prompts, confirmations)

How It Works

User submits prompt in plan mode
    ↓
prompt-submit hook fires
    ↓
Filter: Is this substantive?
  - Skip: < 5 words, confirmations ("yes", "1", "go ahead")
  - Proceed: Task descriptions, questions, feature requests
    ↓
LLM extracts keywords (Haiku, ~3s timeout)
    ↓
git mem recall --query "<keywords>" (searches notes + trailers)
    ↓
Format memories as markdown
    ↓
Output to stdout → injected into Claude context
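The "Is this substantive?" filter step above can be sketched as follows. This is a minimal illustration only — the names (isSubstantivePrompt, CONFIRMATIONS, MIN_WORDS) are assumptions, not taken from the actual PromptSubmitHandler source:

```typescript
// Illustrative sketch of the substantive-prompt gate described in the flow.
const MIN_WORDS = 5;
const CONFIRMATIONS = new Set([
  "yes", "no", "ok", "okay", "sure", "go ahead", "1", "2", "3",
]);

function isSubstantivePrompt(prompt: string): boolean {
  const normalized = prompt.trim().toLowerCase();
  // Skip bare confirmations ("yes", "1", "go ahead", ...)
  if (CONFIRMATIONS.has(normalized)) return false;
  // Skip prompts shorter than the word threshold
  const words = normalized.split(/\s+/).filter(Boolean);
  return words.length >= MIN_WORDS;
}
```

Task descriptions and questions pass this gate and proceed to keyword extraction; everything else falls through to the recent-memories path.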

Changes

  • src/application/handlers/PromptSubmitHandler.ts - Added intent extraction logic
  • src/infrastructure/di/container.ts - Pass hookConfigLoader and intentExtractor to handler

Dependencies

This PR depends on:

  • GIT-97: IIntentExtractor interface and config fields
  • GIT-98: IntentExtractor implementation
  • GIT-100: loadWithQuery in MemoryContextLoader

Test plan

  • Type check passes
  • Lint passes
  • Unit tests pass (438 tests)
  • Integration tests (GIT-102)

Closes GIT-99

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features

    • Implemented intent-aware memory retrieval that intelligently surfaces contextual memories based on extracted intent from user prompts, with automatic fallback to recent memories.
    • Added configuration controls for enabling/disabling memory surfacing behavior.
  • Refactor

    • Simplified timeout and error handling logic in intent extraction.

Copilot AI review requested due to automatic review settings February 15, 2026 00:35

coderabbitai bot commented Feb 15, 2026

Caution: review failed — the pull request is closed.

Walkthrough

Three files are modified to introduce config-driven, intent-aware memory retrieval in PromptSubmitHandler. The handler now loads hook configuration and conditionally extracts intent to determine memory-loading strategy. IntentExtractor simplifies error handling by removing explicit timeout cleanup. DI container wiring connects the new dependencies.

Changes

  • PromptSubmitHandler Enhancement — src/application/handlers/PromptSubmitHandler.ts: Added optional hookConfigLoader and intentExtractor dependencies; implemented config-driven memory retrieval with three distinct branches: early exit when surfaceContext is disabled, keyword-based memory querying when intent extraction succeeds, and fallback to recent memories otherwise.
  • Dependency Injection Wiring — src/infrastructure/di/container.ts: Wired hookConfigLoader and intentExtractor into the PromptSubmitHandler constructor via cradle dependencies in the "prompt:submit" event bus initialization.
  • IntentExtractor Simplification — src/infrastructure/llm/IntentExtractor.ts: Replaced LLMError with a generic Error for missing API key validation; removed explicit timeout ID tracking and the try/finally cleanup block, using Promise.race() directly without manual timeout clearing.
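The simplified timeout pattern described for IntentExtractor can be sketched generically. withTimeout here is an illustrative helper, not the actual class method:

```typescript
// Sketch of the Promise.race-based timeout described above: no timer ID
// tracking, no try/finally cleanup. The work promise stands in for the
// real LLM call.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("Intent extraction timed out")), ms),
  );
  // Whichever settles first wins; the loser is simply ignored.
  return Promise.race([work, timeout]);
}
```

Note the trade-off raised in review below: the timer is never cleared, so it can keep the Node event loop alive until it fires, and the in-flight request is not aborted on timeout.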

Sequence Diagram

sequenceDiagram
    participant Handler as PromptSubmitHandler
    participant ConfigLoader as hookConfigLoader
    participant IntentExt as intentExtractor
    participant MemLoader as MemoryContextLoader

    Handler->>ConfigLoader: Load hook config
    ConfigLoader-->>Handler: Config with surfaceContext, extractIntent, memoryLimit
    
    alt surfaceContext disabled
        Handler-->>Handler: Early exit, return empty output
    else surfaceContext enabled
        alt extractIntent enabled
            Handler->>IntentExt: Extract intent from prompt
            IntentExt-->>Handler: Intent with keywords (or SKIP)
            
            alt Intent extracted successfully
                Handler->>MemLoader: loadWithQuery(keywords, memoryLimit)
                MemLoader-->>Handler: Memories matching keywords
            else Intent extraction failed
                Handler->>MemLoader: Load recent memories(memoryLimit)
                MemLoader-->>Handler: Recent memories
            end
        else extractIntent disabled
            Handler->>MemLoader: Load recent memories(memoryLimit)
            MemLoader-->>Handler: Recent memories
        end
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • #68: Wires IIntentExtractor and hookConfigLoader dependencies into PromptSubmitHandler constructor via DI container.
  • #66: Implements the loadWithQuery method on MemoryContextLoader that is invoked when intent extraction succeeds in PromptSubmitHandler.
  • #65: Consumes the same IIntentExtractor interface and promptSubmit config fields (extractIntent, intentTimeout, minWords, memoryLimit) used in this change.
🚥 Pre-merge checks: 3 passed, 1 failed

❌ Failed (1 warning)
  • Docstring Coverage ⚠️ — Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed (3)
  • Description Check — Skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — The title 'feat: enhance PromptSubmitHandler with intent extraction (GIT-99)' directly and clearly describes the main change: adding intent extraction capability to the PromptSubmitHandler component.
  • Merge Conflict Detection — No merge conflicts detected when merging into main.


Copilot AI left a comment


Pull request overview

Adds intent-based memory recall for the prompt-submit hook by introducing an LLM-backed intent extractor and plumbing a query-based memory loading path into the handler and DI container.

Changes:

  • Add IIntentExtractor + Anthropic-based IntentExtractor implementation for keyword extraction.
  • Extend IMemoryContextLoader/MemoryContextLoader with loadWithQuery() and update PromptSubmitHandler to use it when intent extraction is enabled.
  • Update hook config defaults/schema (prompt-submit now enabled by default; adds extractIntent/intentTimeout/minWords/memoryLimit) and related unit tests.

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 5 comments.

Summary per file:
  • tests/unit/hooks/utils/config.test.ts — Updates expectations for new prompt-submit defaults and fields.
  • src/infrastructure/llm/IntentExtractor.ts — New Anthropic SDK-based intent extraction service with timeout logic.
  • src/infrastructure/di/types.ts — Adds intentExtractor to the DI cradle typing.
  • src/infrastructure/di/container.ts — Registers IntentExtractor and injects it plus the config loader into PromptSubmitHandler.
  • src/hooks/utils/config.ts — Updates default hook config (enables prompt-submit; adds intent extraction settings).
  • src/domain/interfaces/IMemoryContextLoader.ts — Adds loadWithQuery() to support query-based memory loading.
  • src/domain/interfaces/IIntentExtractor.ts — New domain interface and result contract for intent extraction.
  • src/domain/interfaces/IHookConfig.ts — Extends prompt-submit config schema with intent extraction controls.
  • src/application/services/MemoryContextLoader.ts — Implements query-based loading via MemoryService.recall() (notes + trailers).
  • src/application/handlers/PromptSubmitHandler.ts — Uses intent extraction + query recall when enabled; otherwise falls back to recent memories.


readonly surfaceContext: boolean;
/** Enable LLM-based intent extraction for smarter memory retrieval. */
readonly extractIntent: boolean;
/** Timeout in ms for intent extraction LLM call. Default: 3000. */
Copilot AI commented Feb 15, 2026

intentTimeout is governed by the same hard hook timeout (10s via setupShutdown(10_000) in the hook entrypoint) as other LLM timeouts. The doc comment should note that this value must be under the hook timeout, similar to enrichTimeout, so users don’t configure a value that can never be honored.

Suggested change
/** Timeout in ms for intent extraction LLM call. Default: 3000. */
/** Timeout in ms for intent extraction LLM call. Default: 3000. Must be under hook timeout (10s). */

Comment on lines +102 to +120
const timeoutPromise = new Promise<never>((_, reject) => {
setTimeout(() => reject(new Error('Intent extraction timed out')), this.timeout);
});

const extractPromise = this.callLLM(prompt);

return Promise.race([extractPromise, timeoutPromise]);
}

/**
* Make the actual LLM call.
*/
private async callLLM(prompt: string): Promise<string | null> {
const response = await this.client.messages.create({
model: HAIKU_MODEL,
max_tokens: MAX_TOKENS,
system: SYSTEM_PROMPT,
messages: [{ role: 'user', content: `User prompt: "${prompt}"` }],
});
Copilot AI commented Feb 15, 2026

extractWithTimeout starts a setTimeout but never clears (or unrefs) it when the LLM call completes first. This can keep the Node event loop alive until the timeout elapses, delaying hook completion; also the underlying Anthropic request is not aborted on timeout. Track the timeout ID and clearTimeout on success, and consider using AbortController/request signal support to cancel the in-flight call when timing out.

Suggested change — before:

const timeoutPromise = new Promise<never>((_, reject) => {
setTimeout(() => reject(new Error('Intent extraction timed out')), this.timeout);
});
const extractPromise = this.callLLM(prompt);
return Promise.race([extractPromise, timeoutPromise]);
}
/**
* Make the actual LLM call.
*/
private async callLLM(prompt: string): Promise<string | null> {
const response = await this.client.messages.create({
model: HAIKU_MODEL,
max_tokens: MAX_TOKENS,
system: SYSTEM_PROMPT,
messages: [{ role: 'user', content: `User prompt: "${prompt}"` }],
});

Suggested change — after:

const controller = new AbortController();
let timeoutId: ReturnType<typeof setTimeout> | null = null;
const timeoutPromise = new Promise<never>((_, reject) => {
timeoutId = setTimeout(() => {
controller.abort();
reject(new Error('Intent extraction timed out'));
}, this.timeout);
});
try {
const extractPromise = this.callLLM(prompt, controller.signal);
return await Promise.race([extractPromise, timeoutPromise]);
} finally {
if (timeoutId !== null) {
clearTimeout(timeoutId);
}
}
}
/**
* Make the actual LLM call.
*/
private async callLLM(prompt: string, signal?: AbortSignal): Promise<string | null> {
const response = await this.client.messages.create(
{
model: HAIKU_MODEL,
max_tokens: MAX_TOKENS,
system: SYSTEM_PROMPT,
messages: [{ role: 'user', content: `User prompt: "${prompt}"` }],
},
signal ? { signal } : undefined,
);

Comment on lines +45 to +46
minWords: 5,
intentTimeout: 3000,
Copilot AI commented Feb 15, 2026

The fallback promptConfig includes minWords and intentTimeout, but those values are never applied (they aren’t passed to IntentExtractor and aren’t used in the handler). As a result, the new config fields have no effect. Either wire these settings into the extraction flow (e.g., pass per-call options / enforce timeout & minWords in the handler) or remove them from the config shape to avoid misleading configuration.

Suggested change (remove these lines):
minWords: 5,
intentTimeout: 3000,

} catch {
return null;
}
}).singleton(),
Copilot AI commented Feb 15, 2026

intentExtractor is registered as a singleton with hardcoded defaults (timeout/minWords/model). Since prompt-submit config is repo- and cwd-dependent, this makes it difficult/impossible to honor hooks.promptSubmit.intentTimeout and minWords per repo/config, leaving those config fields effectively inert. Consider constructing the extractor with config-derived options at call time, or extending the extractor API to accept per-request options.

Suggested change
}).singleton(),
}),

Comment thread src/hooks/utils/config.ts
Comment on lines +45 to +53
promptSubmit: {
enabled: true,
recordPrompts: false,
surfaceContext: true,
extractIntent: true,
intentTimeout: 3000,
minWords: 5,
memoryLimit: 20,
},
Copilot AI commented Feb 15, 2026

Changing the default to promptSubmit.enabled: true is a behavior change and currently appears inconsistent with the config generated by init commands (e.g., buildGitMemConfig() in src/commands/init-hooks.ts and src/commands/init.ts still sets promptSubmit.enabled: false). This inconsistency can lead to confusing defaults depending on whether a config file exists. Align the defaults across the codebase (either keep prompt-submit disabled by default everywhere, or update the init-generated config and related expectations).

coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@src/application/handlers/PromptSubmitHandler.ts`:
- Around line 39-90: The handler currently ignores promptConfig.minWords and
promptConfig.intentTimeout; update PromptSubmitHandler so intent extraction only
runs when promptConfig.extractIntent is true AND
(event.prompt?.split(/\s+/).length ?? 0) >= promptConfig.minWords, and enforce
intentTimeout by passing the timeout into the extractor call (extend
IIntentExtractorInput to accept intentTimeout or wrap
intentExtractor.extract(...) in a Promise.race with a timeout) so extract
respects per-repo overrides; ensure branches that call
intentExtractor.extract(...) and the fallback
memoryContextLoader.load/loadWithQuery remain consistent and still set result:
IMemoryContextResult, and pass promptConfig.memoryLimit and event.cwd unchanged
to memoryContextLoader.loadWithQuery/load.

In `@src/domain/interfaces/IIntentExtractor.ts`:
- Line 32: Add a short JSDoc note explaining the intended usage of the 'no_llm'
reason (e.g., it's intended for upstream/handler-level fallbacks or absence of
an LLM client rather than being produced by the current IntentExtractor
implementation which throws on missing API keys) next to the definition of the
reasons enum/constant and/or on the IIntentExtractor interface; reference the
symbol name 'IIntentExtractor' and the literal 'no_llm' so readers can find it,
and indicate that current implementations may instead throw or return other
reasons (like 'llm_skip'), so handlers should perform null-checks or fallbacks
when they see 'no_llm'.

In `@src/infrastructure/llm/IntentExtractor.ts`:
- Around line 101-120: The current extractWithTimeout implementation races a
timeout against callLLM causing the Anthropic request to continue running;
update IntentExtractor to cancel in-flight requests by removing the Promise.race
pattern and passing a cancellation option to the SDK call in callLLM: call
this.client.messages.create(...) with the per-request timeout option ({ timeout:
this.timeout }) or create an AbortController and pass its signal as the
second/options argument so that extractWithTimeout triggers controller.abort()
on timeout and callLLM receives the cancellation; ensure you reference and
update extractWithTimeout, callLLM, this.client.messages.create and this.timeout
to implement the SDK-provided timeout or AbortController-based cancellation.

Comment on lines +39 to +90
// Load config
const config = this.hookConfigLoader?.loadConfig(event.cwd);
const promptConfig = config?.hooks.promptSubmit ?? {
surfaceContext: true,
extractIntent: false,
memoryLimit: 20,
minWords: 5,
intentTimeout: 3000,
};

// Early exit if context surfacing is disabled
if (!promptConfig.surfaceContext) {
this.logger?.debug('Context surfacing disabled');
return {
handler: 'PromptSubmitHandler',
success: true,
output: '',
};
}

// Try intent extraction if enabled
let result: IMemoryContextResult;

if (promptConfig.extractIntent && this.intentExtractor && event.prompt) {
const intentResult = await this.intentExtractor.extract({ prompt: event.prompt });

if (!intentResult.skipped && intentResult.intent) {
this.logger?.debug('Intent extracted, querying with keywords', {
intent: intentResult.intent,
});
result = this.memoryContextLoader.loadWithQuery(
intentResult.intent,
promptConfig.memoryLimit,
event.cwd,
);
} else {
this.logger?.debug('Intent extraction skipped', {
reason: intentResult.reason,
});
// Fall back to loading recent memories
result = this.memoryContextLoader.load({
cwd: event.cwd,
limit: promptConfig.memoryLimit,
});
}
} else {
// No intent extraction, load recent memories
result = this.memoryContextLoader.load({
cwd: event.cwd,
limit: promptConfig.memoryLimit,
});
}

⚠️ Potential issue | 🟠 Major

Configured minWords/intentTimeout are ignored.
These values are loaded but never applied, so user config has no effect on extraction gating or timeout behavior. Either gate extraction in the handler and/or pass per-repo overrides into the extractor.

🔧 Example fix to honor minWords in the handler
-      if (promptConfig.extractIntent && this.intentExtractor && event.prompt) {
-        const intentResult = await this.intentExtractor.extract({ prompt: event.prompt });
+      if (promptConfig.extractIntent && this.intentExtractor && event.prompt) {
+        const wordCount = event.prompt.trim().split(/\s+/).filter(Boolean).length;
+        if (wordCount < promptConfig.minWords) {
+          this.logger?.debug('Intent extraction skipped: too short', { wordCount });
+          result = this.memoryContextLoader.load({
+            cwd: event.cwd,
+            limit: promptConfig.memoryLimit,
+          });
+        } else {
+          const intentResult = await this.intentExtractor.extract({ prompt: event.prompt });
 
-        if (!intentResult.skipped && intentResult.intent) {
+          if (!intentResult.skipped && intentResult.intent) {
             this.logger?.debug('Intent extracted, querying with keywords', {
               intent: intentResult.intent,
             });
             result = this.memoryContextLoader.loadWithQuery(
               intentResult.intent,
               promptConfig.memoryLimit,
               event.cwd,
             );
-        } else {
+          } else {
             this.logger?.debug('Intent extraction skipped', {
               reason: intentResult.reason,
             });
             // Fall back to loading recent memories
             result = this.memoryContextLoader.load({
               cwd: event.cwd,
               limit: promptConfig.memoryLimit,
             });
-        }
+          }
+        }
       } else {

Note: intentTimeout still needs wiring (e.g., extend IIntentExtractorInput with overrides or inject a repo-scoped extractor instance).
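One possible shape for that wiring is to extend the extractor input with an optional per-call override. The IIntentExtractorInput shape and intentTimeout field below are assumptions for illustration; the real interface may differ:

```typescript
// Hypothetical extension of the extractor input: a per-repo intentTimeout
// override that falls back to the extractor's constructor default.
interface IIntentExtractorInput {
  prompt: string;
  /** Optional per-call timeout override (ms), e.g. from hooks.promptSubmit.intentTimeout. */
  intentTimeout?: number;
}

function resolveTimeout(input: IIntentExtractorInput, defaultMs: number): number {
  // Prefer the per-call value when supplied; otherwise use the injected default.
  return input.intentTimeout ?? defaultMs;
}
```

The extractor would then compute its race deadline from resolveTimeout(input, this.timeout) instead of reading this.timeout directly, letting repo-scoped config take effect without rebuilding the singleton.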


* - 'too_short': Prompt has fewer words than minWords threshold.
* - 'confirmation': Prompt is a simple confirmation (yes/no/ok/etc).
* - 'no_llm': No LLM client available.
* - 'llm_skip': LLM returned SKIP (no extractable keywords).

🧹 Nitpick | 🔵 Trivial

Consider documenting where no_llm is used.

The no_llm reason is defined but may not be returned by the current IntentExtractor implementation (which throws if no API key is provided). This reason appears to be for handler-level fallback scenarios or future implementations. Consider adding a brief note in the JSDoc clarifying this is for upstream null-check scenarios.

📝 Optional: Clarify no_llm usage
    * - 'confirmation': Prompt is a simple confirmation (yes/no/ok/etc).
-   * - 'no_llm': No LLM client available.
+   * - 'no_llm': No LLM client available (returned by handler when extractor is null).
    * - 'llm_skip': LLM returned SKIP (no extractable keywords).

Comment on lines +101 to +120
private async extractWithTimeout(prompt: string): Promise<string | null> {
const timeoutPromise = new Promise<never>((_, reject) => {
setTimeout(() => reject(new Error('Intent extraction timed out')), this.timeout);
});

const extractPromise = this.callLLM(prompt);

return Promise.race([extractPromise, timeoutPromise]);
}

/**
* Make the actual LLM call.
*/
private async callLLM(prompt: string): Promise<string | null> {
const response = await this.client.messages.create({
model: HAIKU_MODEL,
max_tokens: MAX_TOKENS,
system: SYSTEM_PROMPT,
messages: [{ role: 'user', content: `User prompt: "${prompt}"` }],
});

⚠️ Potential issue | 🟠 Major


Use the SDK's built-in timeout option to properly cancel the LLM request.

Promise.race returns when the timeout fires, but the Anthropic API call continues consuming tokens and resources. The @anthropic-ai/sdk@0.73.0 directly supports per-request timeouts via the options parameter to messages.create():

await this.client.messages.create(
  { model, max_tokens, system, messages },
  { timeout: this.timeout }
);

Alternatively, use an AbortController with the signal option to gain fine-grained cancellation control. Either approach properly terminates the in-flight request instead of orphaning it.
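The AbortController variant can be sketched generically, with op standing in for the SDK call (the real messages.create accepts the signal via its request-options argument, as noted above):

```typescript
// Generic abort-on-timeout wrapper: unlike a bare Promise.race, the timer is
// always cleaned up and the in-flight operation receives a cancellation signal.
async function callWithAbort<T>(
  op: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    // op is expected to observe the signal and reject when aborted.
    return await op(controller.signal);
  } finally {
    clearTimeout(timer); // no orphaned timer keeping the event loop alive
  }
}
```

With the Anthropic SDK this would look like wrapping a call that passes the signal through as request options; the simpler route is the SDK's own per-request { timeout } option.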


TonyCasey (Owner, Author) commented:

Addressed review feedback:

  1. intentTimeout doc constraint - Already addressed in GIT-97.

  2. extractWithTimeout cleanup - ✅ Fixed in GIT-98: Now clears timeout after Promise.race.

  3. Config fields not wired - The handler reads config from hookConfigLoader and the values are available. The IntentExtractor is constructed with defaults at container level. This is the current design trade-off for simplicity - can be enhanced if per-repo config variation becomes needed.

  4. promptSubmit.enabled:true inconsistency - The init-hooks template was updated in GIT-91 to match.

  5. no_llm reason documentation - Acknowledged as nitpick. The reason is used when handler detects no LLM client available before calling extractor.
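The handler-level no_llm check described in point 5 can be sketched as follows. The types here are illustrative stand-ins for the real IIntentExtractor contract:

```typescript
// Sketch: when no extractor is available (e.g. no API key at container
// build time), the handler reports 'no_llm' and falls back to recent
// memories instead of calling the LLM.
type SkipReason = "too_short" | "confirmation" | "no_llm" | "llm_skip";
interface IntentResult { skipped: boolean; reason?: SkipReason; intent?: string }
interface Extractor { extract(input: { prompt: string }): Promise<IntentResult> }

async function extractOrSkip(
  extractor: Extractor | null,
  prompt: string,
): Promise<IntentResult> {
  if (!extractor) return { skipped: true, reason: "no_llm" }; // handler-level check
  return extractor.extract({ prompt });
}
```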

TonyCasey and others added 2 commits February 15, 2026 01:11
- Add IntentExtractor class in infrastructure/llm that extracts
  searchable keywords from user prompts using Claude Haiku
- Features: skip short prompts, skip confirmations, 3s timeout,
  graceful error handling
- Register intentExtractor in DI container (nullable for graceful
  degradation when no API key)
- Add IIntentExtractor to ICradle types

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
AI-Model: claude-opus-4-5-20251101
AI-Decision: implement IntentExtractor service (GIT-98). - Add IntentExtractor class in infrastructure/llm that extracts
AI-Confidence: medium
AI-Tags: infrastructure, llm, typescript
AI-Lifecycle: project
AI-Memory-Id: 24793fe1
AI-Source: heuristic
- Add IHookConfigLoader and IIntentExtractor dependencies
- Extract keywords from substantive prompts using IntentExtractor
- Use loadWithQuery for intent-based memory search (notes + trailers)
- Fall back to recent memories when extraction is skipped
- Respect surfaceContext and extractIntent config options

This enables smarter memory retrieval: instead of dumping all memories,
the handler extracts keywords from the prompt (e.g., "GIT-95",
"authentication", "LoginHandler") and queries for relevant memories.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
AI-Model: claude-opus-4-5-20251101
AI-Decision: dumping all memories,
AI-Confidence: medium
AI-Tags: application, handlers, infrastructure, pattern:instead-of
AI-Lifecycle: project
AI-Memory-Id: bf416b0d
AI-Source: heuristic
@TonyCasey TonyCasey merged commit cc1a71b into main Feb 15, 2026
2 checks passed
@TonyCasey TonyCasey deleted the git-99 branch February 15, 2026 01:11
