DurableAgent should preserve reasoning content in conversation history across tool loop steps #1393
Description
I ran into this while researching differences between the AI SDK's `streamText()` and `DurableAgent`'s implementation. I came across `sanitizeProviderMetadataForToolCall()` and tried to find out why it exists in the first place. Is there a reason why reasoning is not tracked, or is that an oversight?
Problem
When DurableAgent's tool loop continues to the next step, the assistant message sent back to the model only contains tool-call parts — reasoning content is omitted:
```ts
// stream-text-iterator.ts, inside the finishReason === 'tool-calls' branch
conversationPrompt.push({
  role: 'assistant',
  content: toolCalls.map(toolCall => ({
    type: 'tool-call',
    toolCallId: toolCall.toolCallId,
    toolName: toolCall.toolName,
    input: JSON.parse(toolCall.input),
  })),
});
```

The AI SDK's own `streamText` includes reasoning parts in the assistant message via `toResponseMessages()`:
```ts
case 'reasoning':
  content.push({
    type: 'reasoning',
    text: part.text,
    providerOptions: part.providerMetadata,
  });
```

This means reasoning models (OpenAI o-series, Anthropic extended thinking, Gemini thinking) lose access to their own prior reasoning during multi-step tool loops in `DurableAgent`, but not in `streamText`.
Impact
- **OpenAI `itemId` workaround would become unnecessary.** PR fix(ai): strip openai itemID from tool call metadata #889 strips `itemId` from OpenAI metadata because it references reasoning items that aren't in the conversation. If reasoning items were preserved, `itemId` references would be valid and no sanitization would be needed.
- **Reasoning model quality during tool loops.** As raised in Bug: DurableAgent + OpenAI Responses API fails on tool calls due to missing required `reasoning` item #880 by @bhuvaneshprasad: "without (itemId + reasoningItem) or previousResponseId, the code will still function, but I'm unclear on whether the reasoning models would be able to access and leverage the prior reasoning text to improve performance." This was never answered; the workaround was merged instead.
- **Parity with AI SDK.** `streamText` preserves reasoning across steps. `DurableAgent` should too.
Proposed Fix
The reasoning data is already available: `chunksToStep()` in `do-stream-step.ts` collects reasoning chunks into the step result. The fix is to include reasoning content parts in the assistant message alongside tool-call parts in `stream-text-iterator.ts`, mirroring what `toResponseMessages()` does in the AI SDK.
This would also allow removing `sanitizeProviderMetadataForToolCall()` and the provider-specific `itemId` stripping from PR #889.
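A minimal sketch of what the fix could look like. The `StepResult` shape, the `buildAssistantMessage` helper, and the field names are assumptions modeled on the snippets above, not the actual `stream-text-iterator.ts` API:

```typescript
// Hypothetical helper: build the assistant message pushed onto
// conversationPrompt so it carries reasoning parts in addition to
// tool-call parts. Shapes below are assumed for illustration.

type ReasoningPart = {
  type: 'reasoning';
  text: string;
  providerOptions?: unknown;
};

type ToolCallPart = {
  type: 'tool-call';
  toolCallId: string;
  toolName: string;
  input: unknown;
};

interface StepResult {
  reasoning: { text: string; providerMetadata?: unknown }[];
  toolCalls: { toolCallId: string; toolName: string; input: string }[];
}

function buildAssistantMessage(step: StepResult) {
  const content: (ReasoningPart | ToolCallPart)[] = [
    // Reasoning parts first, mirroring toResponseMessages() ordering,
    // so provider itemId references resolve against prior reasoning.
    ...step.reasoning.map(r => ({
      type: 'reasoning' as const,
      text: r.text,
      providerOptions: r.providerMetadata,
    })),
    // Tool-call parts, unchanged from the current implementation.
    ...step.toolCalls.map(tc => ({
      type: 'tool-call' as const,
      toolCallId: tc.toolCallId,
      toolName: tc.toolName,
      input: JSON.parse(tc.input),
    })),
  ];
  return { role: 'assistant' as const, content };
}
```

With something like this, `conversationPrompt.push(buildAssistantMessage(step))` would replace the current tool-call-only push, and the provider metadata on each reasoning part would travel with the conversation instead of being sanitized away.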
Context
- Bug: DurableAgent + OpenAI Responses API fails on tool calls due to missing required `reasoning` item #880 — OpenAI Responses API fails on tool calls due to missing reasoning item
- fix(ai): strip openai itemID from tool call metadata #889 — Workaround: strip `itemId` from metadata
- fix(ai): remove itemId from providerOptions for openai models #886 — Alternative approach (closed): use `previousResponseId`
- Gemini tool-calls fail after first step because thought_signature is dropped #727 — Gemini `thoughtSignature` dropped (fixed in fix(ai): preserve providerMetadata as providerOptions in multi-turn tool calls #733, but reasoning content still not preserved)