Summary
Each provider node should track and propagate total input and output token counts up through the execution chain. Currently it's unclear whether token usage from LLM calls (agent loops, prompt nodes, chat-agent nodes) is being consistently accumulated and surfaced.
Requirements
- Each provider node tracks `inputTokens` and `outputTokens` per LLM call
- Token counts accumulate across multi-turn agent loops (not just the last turn)
- Totals propagate up to the workflow execution summary
- Token usage is visible in the execution trace / debug output
- Multiple provider nodes in one workflow each report their own totals
- Parent workflow aggregates all provider node totals into a grand total
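The accumulation and aggregation described above could be sketched as follows. This is a minimal illustration, not code from the repository; the `TokenUsage` shape and the `accumulate`/`aggregate` helpers are hypothetical names.

```typescript
// Hypothetical shape for one LLM call's token counts.
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

// Accumulate usage across every turn of an agent loop,
// not just the final turn.
function accumulate(total: TokenUsage, turn: TokenUsage): TokenUsage {
  return {
    inputTokens: total.inputTokens + turn.inputTokens,
    outputTokens: total.outputTokens + turn.outputTokens,
  };
}

// Aggregate per-provider-node totals into a workflow grand total.
function aggregate(perNode: Map<string, TokenUsage>): TokenUsage {
  let grand: TokenUsage = { inputTokens: 0, outputTokens: 0 };
  for (const usage of perNode.values()) {
    grand = accumulate(grand, usage);
  }
  return grand;
}

// Example: two turns of one agent loop fold into a node total,
// and node totals fold into the workflow grand total.
const turns: TokenUsage[] = [
  { inputTokens: 10, outputTokens: 5 },
  { inputTokens: 20, outputTokens: 8 },
];
const nodeTotal = turns.reduce(accumulate, { inputTokens: 0, outputTokens: 0 });
const grandTotal = aggregate(
  new Map([
    ["provider-node-1", nodeTotal],
    ["provider-node-2", { inputTokens: 1, outputTokens: 2 }],
  ]),
);
```

The key point is that the loop in `executeAgentNode` would fold each turn's usage into a running total rather than overwriting it.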
Areas to Check
- `workflowExecutor.ts` — `executeAgentNode` / `executeChatAgentNode` loop token accumulation
- `WorkflowExecutionPanel.tsx` — token display in execution results
- Provider node output — does it include `usage.input_tokens` / `usage.output_tokens`?
- `onPromptExecute` in `main.js` — does the LLM response include token usage, and is it passed back?
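Since usage field names differ across providers (some responses report `input_tokens`/`output_tokens`, others `prompt_tokens`/`completion_tokens`), one check worth making in `onPromptExecute` is whether the raw usage object is normalized before being passed back. A hedged sketch, with all names hypothetical:

```typescript
// Raw usage as it might arrive from a provider response; which
// pair of fields is present depends on the provider's API.
interface RawUsage {
  input_tokens?: number;
  output_tokens?: number;
  prompt_tokens?: number;
  completion_tokens?: number;
}

// Normalize to a single internal shape, defaulting to zero when
// the provider omits usage entirely.
function normalizeUsage(
  usage: RawUsage | undefined,
): { inputTokens: number; outputTokens: number } {
  return {
    inputTokens: usage?.input_tokens ?? usage?.prompt_tokens ?? 0,
    outputTokens: usage?.output_tokens ?? usage?.completion_tokens ?? 0,
  };
}
```

Normalizing at the boundary means the accumulation logic upstream never has to know which provider produced the response.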
Acceptance Criteria
After a workflow run with one or more LLM calls, the execution summary should show:
- Per-provider-node token breakdown (in/out)
- Total workflow token usage (sum of all providers)
- Token counts visible in both normal and debug execution modes
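One possible shape for the summary data behind that display — purely illustrative, since the actual types live in `WorkflowExecutionPanel.tsx` and are not shown here:

```typescript
// Hypothetical per-node entry in the execution summary.
interface NodeTokenUsage {
  nodeId: string;
  inputTokens: number;
  outputTokens: number;
}

// Hypothetical workflow-level summary: per-node breakdown plus
// totals computed as the sum over all provider nodes.
interface WorkflowTokenSummary {
  perNode: NodeTokenUsage[];
  totalInputTokens: number;
  totalOutputTokens: number;
}

function summarize(perNode: NodeTokenUsage[]): WorkflowTokenSummary {
  return {
    perNode,
    totalInputTokens: perNode.reduce((sum, n) => sum + n.inputTokens, 0),
    totalOutputTokens: perNode.reduce((sum, n) => sum + n.outputTokens, 0),
  };
}
```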