## Problem
When `memoryReflection.agentId` is configured, reflection generation via `runEmbeddedPiAgent` still injects the full system prompt and skills catalog into the LLM context, even though:

- `disableTools: true` is set (tool schemas correctly skipped)
- `bootstrapContextMode: "lightweight"` is set (workspace bootstrap files correctly skipped)

The skills catalog and full system prompt (coding guidelines, tool usage rules, auto memory instructions, etc.) are irrelevant to reflection generation, which only needs to read a conversation and produce a summary. This wastes a significant number of tokens on every reflection run.
## Root Cause
In `index.ts:1201-1217`, the call to `runEmbeddedPiAgent`:

```ts
runEmbeddedPiAgent({
  sessionKey: "temp:memory-reflection",
  agentId: params.agentId,
  disableTools: true,                   // ✅ skips tool schemas
  bootstrapContextMode: "lightweight",  // ✅ skips workspace files
  // ❌ no mechanism to skip skills or request a minimal system prompt
});
```
In the gateway's `attempt.ts:708-710`, skills are stripped only when `toolsAllow` is set:

```ts
const effectivePromptMode = params.toolsAllow?.length ? ("minimal" as const) : promptMode;
const effectiveSkillsPrompt = params.toolsAllow?.length ? undefined : skillsPrompt;
```

Since `toolsAllow` is not passed, the full skills catalog and default system prompt are injected.
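Reduced to a pure function (the names are taken from the `attempt.ts` snippet above; the surrounding gateway code is assumed), the current gating looks like this. Note that `disableTools` never enters the decision:

```typescript
type PromptMode = "full" | "minimal";

// Sketch of the gateway's current gating, reconstructed from
// attempt.ts:708-710: only a non-empty toolsAllow forces minimal mode.
function effectivePrompt(
  toolsAllow: string[] | undefined,
  promptMode: PromptMode,
  skillsPrompt?: string,
): { promptMode: PromptMode; skillsPrompt?: string } {
  return {
    promptMode: toolsAllow?.length ? "minimal" : promptMode,
    skillsPrompt: toolsAllow?.length ? undefined : skillsPrompt,
  };
}

// The reflection call passes no toolsAllow, so the full prompt and the
// skills catalog survive even though tools are disabled.
const reflectionRun = effectivePrompt(undefined, "full", "skills catalog");
```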
## Evidence from Logs
```
13:40:37 memory-reflection: command:new reflection generation start for session ...
13:40:48 regex fallback found 0 capturable texts for agent memory-distiller
```
The reflection agent runs with the `memory-distiller` agent's model, but the full system prompt (including all skills, coding guidelines, and tool usage rules) is still injected — verified by inspecting the actual prompt sent to the LLM.
## Suggested Fix
**Option A (minimal change):** Pass `toolsAllow: ["__noop__"]` (a non-empty dummy array) to trigger the gateway's existing minimal prompt mode and strip the skills catalog.
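A sketch of the Option A change, with `runEmbeddedPiAgent` stubbed out so the effect is observable. The param type here is an assumption reconstructed from the call site above; the real signature lives in the gateway:

```typescript
// Assumed param shape, reconstructed from the call site; the real type
// is defined by the gateway.
type PiAgentParams = {
  sessionKey: string;
  agentId?: string;
  disableTools?: boolean;
  bootstrapContextMode?: "lightweight" | "full";
  toolsAllow?: string[];
};

// Stub that records its params; the real function dispatches an
// embedded agent run against the LLM.
let captured: PiAgentParams | undefined;
function runEmbeddedPiAgent(params: PiAgentParams): void {
  captured = params;
}

runEmbeddedPiAgent({
  sessionKey: "temp:memory-reflection",
  agentId: "memory-distiller", // params.agentId in the real runner
  disableTools: true,
  bootstrapContextMode: "lightweight",
  toolsAllow: ["__noop__"], // dummy entry: trips the `toolsAllow?.length` check
});
```

The downside is that `__noop__` is a magic value relying on a side effect of the allow-list check; if the gateway ever validates allow-list entries against real tools, this breaks silently.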
**Option B (cleaner):** Add a `promptMode: "minimal"` parameter to `runEmbeddedPiAgent` and pass it from the reflection runner. This would explicitly request a minimal system prompt without relying on the `toolsAllow` side effect.
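Option B could look like the following sketch. The `promptMode` field is the proposed addition; the rest of the param shape is assumed from the existing call site:

```typescript
// Assumed param shape; promptMode is the new, proposed field.
type PiAgentParams = {
  sessionKey: string;
  agentId?: string;
  disableTools?: boolean;
  bootstrapContextMode?: "lightweight" | "full";
  promptMode?: "full" | "minimal"; // new: explicit prompt-mode request
};

// Stub that records its params; the real function would forward
// promptMode through to the gateway's prompt assembly.
let captured: PiAgentParams | undefined;
function runEmbeddedPiAgent(params: PiAgentParams): void {
  captured = params;
}

// The reflection runner asks for a minimal prompt directly, with no
// dependence on the toolsAllow side effect.
runEmbeddedPiAgent({
  sessionKey: "temp:memory-reflection",
  agentId: "memory-distiller", // params.agentId in the real runner
  disableTools: true,
  bootstrapContextMode: "lightweight",
  promptMode: "minimal",
});
```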
**Option C (gateway-side):** When `disableTools: true`, the gateway should automatically switch to minimal prompt mode and skip skills injection — since tools are disabled, skills and tool usage instructions are meaningless.
## Additional Context
The `memoryReflection.agentId` config option is somewhat misleading — it only borrows the target agent's model configuration (`resolveAgentPrimaryModelRef`). The conversation context, storage attribution, and scope resolution all still use the `sourceAgentId`. This is fine by design, but the naming suggests a fully independent agent dispatch. A documentation note clarifying this would help.
## Environment
- memory-lancedb-pro: 1.1.0-beta.10
- openclaw gateway