feat: real-time inference progress events for web channel #514
senamakel wants to merge 2 commits into tinyhumansai:main
Conversation
- Added new event listeners for inference start, iteration start, subagent spawning, and completion to track the live state of chat interactions.
- Introduced an `InferenceStatus` interface to manage the current phase and active tools/subagents for each thread.
- Updated the UI to display inference status indicators, enhancing user experience during chat interactions.
- Created a new `progress` module in the Rust backend to emit real-time progress events, allowing for better integration with the web channel.
- Refactored the `subscribeChatEvents` function to include new event handlers for managing inference and subagent events, improving clarity and maintainability of the event handling logic.
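The refactored `subscribeChatEvents` surface described above could be sketched roughly as follows. The callback names follow the PR text, but the payload shapes and the `dispatch` helper are illustrative assumptions, not the actual app code:

```typescript
// Hypothetical listener surface for the new progress events; payload
// fields (thread_id, skill_id, etc.) are assumed for illustration.
interface ChatEventListeners {
  onInferenceStart?: (p: { thread_id: string }) => void;
  onIterationStart?: (p: { thread_id: string; round: number }) => void;
  onToolCall?: (p: { thread_id: string; skill_id: string; args?: string }) => void;
  onToolResult?: (p: { thread_id: string; skill_id: string; success: boolean }) => void;
  onSubagentSpawned?: (p: { thread_id: string; subagent_id: string }) => void;
  onSubagentDone?: (p: { thread_id: string; subagent_id: string; success: boolean }) => void;
  onDone?: (p: { thread_id: string }) => void;
}

// Minimal socket-event-name -> listener dispatch, standing in for the
// real socket.io wiring. Returns true if a handler was registered.
function dispatch(listeners: ChatEventListeners, event: string, payload: any): boolean {
  const table: Record<string, ((p: any) => void) | undefined> = {
    inference_start: listeners.onInferenceStart,
    iteration_start: listeners.onIterationStart,
    tool_call: listeners.onToolCall,
    tool_result: listeners.onToolResult,
    subagent_spawned: listeners.onSubagentSpawned,
    subagent_completed: listeners.onSubagentDone,
    subagent_failed: listeners.onSubagentDone,
    chat_done: listeners.onDone,
  };
  const handler = table[event];
  if (handler) handler(payload);
  return Boolean(handler);
}
```

Routing both `subagent_completed` and `subagent_failed` to one callback keeps the success/failure distinction in the payload rather than in the handler table.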
📝 Walkthrough

The PR introduces a real-time progress event streaming system for agent execution. It adds new event types to the chat service, instruments the Rust agent turn loop to emit lifecycle events (inference start, iteration, tool execution, subagent lifecycle) via an mpsc channel, bridges these events to web socket emissions, and updates the frontend to handle and display these new events through a new inference status UI component.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent as Agent Turn Loop
    participant ProgressTx as Progress<br/>Channel (Tx)
    participant Bridge as Progress Bridge
    participant WebEvent as Web Event Publish
    participant Socket as WebSocket
    participant Service as Chat Service
    participant UI as Conversations UI

    rect rgba(100, 150, 200, 0.5)
        Note over Agent,UI: Real-Time Progress Streaming Flow
    end

    Agent->>ProgressTx: emit_progress(TurnStarted)
    Agent->>ProgressTx: emit_progress(IterationStarted)
    Agent->>ProgressTx: emit_progress(ToolCallStarted)
    ProgressTx->>Bridge: receive AgentProgress
    Bridge->>WebEvent: map to inference_start / iteration_start
    WebEvent->>Socket: emit WebChannelEvent
    Socket->>Service: subscribeChatEvents listener receives
    Service->>UI: onInferenceStart / onIterationStart callback
    UI->>UI: update inferenceStatusByThread<br/>set phase = "thinking"

    Agent->>ProgressTx: emit_progress(ToolCallCompleted)
    ProgressTx->>Bridge: receive AgentProgress
    Bridge->>WebEvent: map to tool_call event
    WebEvent->>Socket: emit WebChannelEvent
    Socket->>Service: subscribeChatEvents listener receives
    Service->>UI: onToolCall callback
    UI->>UI: update inferenceStatusByThread<br/>set phase = "tool_use"<br/>record activeTool

    Agent->>ProgressTx: emit_progress(SubagentSpawned)
    ProgressTx->>Bridge: receive AgentProgress
    Bridge->>WebEvent: map to subagent_spawned event
    WebEvent->>Socket: emit WebChannelEvent
    Socket->>Service: subscribeChatEvents listener receives
    Service->>UI: onSubagentSpawned callback
    UI->>UI: update phase = "subagent"<br/>add to tool timeline

    Agent->>ProgressTx: emit_progress(TurnCompleted)
    ProgressTx->>Bridge: receive AgentProgress
    Bridge->>WebEvent: map to turn_completed event
    WebEvent->>Socket: emit WebChannelEvent
    Socket->>Service: subscribeChatEvents listener receives
    Service->>UI: onDone callback
    UI->>UI: clear inferenceStatusByThread
```
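The UI side of the diagram amounts to a small per-thread state reducer. The sketch below is a hedged illustration: the `InferenceStatus` fields and event payloads are assumptions modeled on the diagram, not the code in `Conversations.tsx`:

```typescript
// Per-thread inference status, mirroring the phases in the diagram.
type InferencePhase = "thinking" | "tool_use" | "subagent";

interface InferenceStatus {
  phase: InferencePhase;
  activeTool?: string;
  activeSubagents: string[];
}

// Assumed event payloads for illustration; field names are hypothetical.
type ProgressEvent =
  | { event: "inference_start"; thread_id: string }
  | { event: "iteration_start"; thread_id: string; round: number }
  | { event: "tool_call"; thread_id: string; skill_id: string }
  | { event: "subagent_spawned"; thread_id: string; subagent_id: string }
  | { event: "chat_done"; thread_id: string };

// Pure reducer: returns a new map rather than mutating, as React state
// updates typically require.
function reduceStatus(
  byThread: Map<string, InferenceStatus>,
  e: ProgressEvent,
): Map<string, InferenceStatus> {
  const next = new Map(byThread);
  const fresh: InferenceStatus = { phase: "thinking", activeSubagents: [] };
  switch (e.event) {
    case "inference_start":
    case "iteration_start":
      next.set(e.thread_id, fresh);
      break;
    case "tool_call": {
      const cur = next.get(e.thread_id) ?? fresh;
      next.set(e.thread_id, { ...cur, phase: "tool_use", activeTool: e.skill_id });
      break;
    }
    case "subagent_spawned": {
      const cur = next.get(e.thread_id) ?? fresh;
      next.set(e.thread_id, {
        ...cur,
        phase: "subagent",
        activeSubagents: [...cur.activeSubagents, e.subagent_id],
      });
      break;
    }
    case "chat_done":
      next.delete(e.thread_id); // clear status when the turn completes
      break;
  }
  return next;
}
```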
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/openhuman/channels/providers/web.rs (1)
489-500: ⚠️ Potential issue | 🟡 Minor: Remove the unused `parse_tool_args` function and its test.

The function is only called in tests and has no production usage. With the removal of history-based tool event publishing, this function is now dead code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/channels/providers/web.rs` around lines 489 - 500: Remove the dead helper by deleting the parse_tool_args function (fn parse_tool_args(arguments: &str) -> Value) from src/openhuman/channels/providers/web.rs and also remove the associated unit test(s) that only reference this helper; search for references to parse_tool_args in the repo and ensure any test file (or #[cfg(test)] block) that solely exists to exercise this function is removed or refactored to no longer call it, then run cargo test to confirm no remaining references.

app/src/services/chatService.ts (1)
143-186: ⚠️ Potential issue | 🟠 Major: Add trace logs for the new progress-event handlers.
Lines 143–186 introduce the new real-time inference/subagent flow, but there are no namespaced debug checkpoints when payloads are received and forwarded. Please add debug logs (with a stable prefix plus correlation fields like `thread_id`, `request_id`, `round`, `event`, `success`) in these callbacks so sequencing/cancellation issues can be traced end-to-end.

As per coding guidelines: "Add substantial, development-oriented logs on new/changed flows in TypeScript/React app code; use namespaced debug logs and dev-only detail as needed" and "Use grep-friendly log prefixes ([feature], domain name, or JSON-RPC method) in app code for correlation with sidecar and Tauri output".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/services/chatService.ts` around lines 143 - 186, Add namespaced debug logs inside each progress-event callback (the callbacks registered for listeners.onInferenceStart, onIterationStart, onToolCall, onToolResult, onSubagentSpawned, and the onSubagentDone handlers) so every received payload is logged before forwarding; include grep-friendly prefix (e.g. "[realtime][chat]"), and correlation fields thread_id, request_id, round, event (use EVENTS.* symbol name) and success (boolean) in the log entry. Ensure logs are emitted only in dev/debug builds if needed (wrap with the app's dev-check) and keep the logging placement in the existing cb/onCompleted/onFailed functions so sequencing/cancellation can be traced end-to-end.
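One way to satisfy this logging guideline is a tiny helper with a fixed grep-friendly prefix. The `[realtime][chat]` prefix and the field names below come from the review comment; the helper itself is hypothetical, not part of `chatService.ts`:

```typescript
// Correlation fields the review asks to include in every progress log.
interface ProgressLogFields {
  thread_id: string;
  request_id?: string;
  round?: number;
  event: string;       // e.g. the EVENTS.* symbol name
  success?: boolean;
}

// Stand-in for the app's dev-build check (the real gate is assumed).
const DEBUG = true;

// Builds a single grep-friendly line and emits it in dev builds.
// Returning the line makes the helper easy to unit-test.
function logProgress(fields: ProgressLogFields): string {
  const line =
    `[realtime][chat] ${fields.event} thread=${fields.thread_id}` +
    (fields.request_id ? ` req=${fields.request_id}` : "") +
    (fields.round !== undefined ? ` round=${fields.round}` : "") +
    (fields.success !== undefined ? ` success=${fields.success}` : "");
  if (DEBUG) console.debug(line);
  return line;
}
```

Calling this at the top of each callback (`onToolCall`, `onSubagentSpawned`, and so on) keeps the log placement inside the existing handlers, as the review requests.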
🧹 Nitpick comments (1)
src/openhuman/channels/providers/web.rs (1)
396-406: Consider using `serde_json::json!` for the output payload.

The manual JSON string formatting is fragile and could break with special characters in future fields. Using the macro would be safer and more consistent with the rest of the codebase.
♻️ Suggested improvement
```diff
-            output: Some(format!(
-                "{{\"output_chars\":{output_chars},\"elapsed_ms\":{elapsed_ms}}}"
-            )),
+            output: Some(
+                serde_json::json!({
+                    "output_chars": output_chars,
+                    "elapsed_ms": elapsed_ms
+                })
+                .to_string(),
+            ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/channels/providers/web.rs` around lines 396 - 406, The output field currently builds JSON via string formatting (output: Some(format!(...))) which is fragile; replace that with serde_json::json! to construct a proper JSON value (e.g., json!({"output_chars": output_chars, "elapsed_ms": elapsed_ms})) and then serialize it to a String (to_string() or serde_json::to_string()) before assigning to the output field; update the import to use serde_json::json if not already present and ensure this change is applied where the struct is constructed (the block setting skill_id, args, output, success, round, etc.).
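The point of this suggestion generalizes beyond Rust: hand-formatted JSON breaks as soon as a value needs escaping, while a real serializer handles it. A TypeScript illustration of the same failure mode (the function names are invented for the example):

```typescript
// Fragile: string interpolation produces invalid JSON when the value
// contains quotes, backslashes, or newlines.
function handFormatted(output: string, elapsedMs: number): string {
  return `{"output":"${output}","elapsed_ms":${elapsedMs}}`;
}

// Safe: the serializer escapes special characters for us.
function serialized(output: string, elapsedMs: number): string {
  return JSON.stringify({ output, elapsed_ms: elapsedMs });
}

// A value with an embedded quote breaks the hand-formatted version but
// round-trips cleanly through the serializer.
const tricky = 'he said "hi"';
```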
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@app/src/services/chatService.ts`:
- Around line 143-186: Add namespaced debug logs inside each progress-event
callback (the callbacks registered for listeners.onInferenceStart,
onIterationStart, onToolCall, onToolResult, onSubagentSpawned, and the
onSubagentDone handlers) so every received payload is logged before forwarding;
include grep-friendly prefix (e.g. "[realtime][chat]"), and correlation fields
thread_id, request_id, round, event (use EVENTS.* symbol name) and success
(boolean) in the log entry. Ensure logs are emitted only in dev/debug builds if
needed (wrap with the app's dev-check) and keep the logging placement in the
existing cb/onCompleted/onFailed functions so sequencing/cancellation can be
traced end-to-end.
In `@src/openhuman/channels/providers/web.rs`:
- Around line 489-500: Remove the dead helper by deleting the parse_tool_args
function (fn parse_tool_args(arguments: &str) -> Value) from
src/openhuman/channels/providers/web.rs and also remove the associated unit
test(s) that only reference this helper; search for references to
parse_tool_args in the repo and ensure any test file (or #[cfg(test)] block)
that solely exists to exercise this function is removed or refactored to no
longer call it, then run cargo test to confirm no remaining references.
---
Nitpick comments:
In `@src/openhuman/channels/providers/web.rs`:
- Around line 396-406: The output field currently builds JSON via string
formatting (output: Some(format!(...))) which is fragile; replace that with
serde_json::json! to construct a proper JSON value (e.g., json!({"output_chars":
output_chars, "elapsed_ms": elapsed_ms})) and then serialize it to a String
(to_string() or serde_json::to_string()) before assigning to the output field;
update the import to use serde_json::json if not already present and ensure this
change is applied where the struct is constructed (the block setting skill_id,
args, output, success, round, etc.).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 139802f4-e56c-4359-ba25-6afadb8f12d9
📒 Files selected for processing (9)
- app/src/pages/Conversations.tsx
- app/src/services/chatService.ts
- src/openhuman/agent/harness/session/builder.rs
- src/openhuman/agent/harness/session/runtime.rs
- src/openhuman/agent/harness/session/turn.rs
- src/openhuman/agent/harness/session/types.rs
- src/openhuman/agent/mod.rs
- src/openhuman/agent/progress.rs
- src/openhuman/channels/providers/web.rs
Summary
- `AgentProgress` enum with events: `TurnStarted`, `IterationStarted`, `ToolCallStarted`, `ToolCallCompleted`, `SubagentSpawned`, `SubagentCompleted`, `SubagentFailed`, `TurnCompleted`
- Replaced the retroactive `publish_tool_events_from_history()` with a real-time `spawn_progress_bridge()` that maps progress events to `WebChannelEvent`s over socket.io

Changes
Rust core
- `src/openhuman/agent/progress.rs` — `AgentProgress` event enum
- `Agent` struct gains an `on_progress: Option<mpsc::Sender<AgentProgress>>` field + `set_on_progress()` setter
- `turn.rs` — emits progress events at turn start, each iteration, tool call start/complete, and turn end
- `web.rs` — creates a per-request progress channel, spawns a bridge task that maps `AgentProgress` → `WebChannelEvent`, removed retroactive `publish_tool_events_from_history()`

New socket events
- `inference_start`
- `iteration_start`
- `tool_call`
- `tool_result`
- `subagent_spawned`
- `subagent_completed` / `subagent_failed`

Frontend
- `chatService.ts` — new event types and `subscribeChatEvents` listeners for all new events
- `Conversations.tsx` — `InferenceStatus` state tracking, live pulsing indicator showing the current phase, sub-agents in the tool timeline

Test plan
- `inference_start` fires immediately
- `tool_call` events appear in real-time as tools execute (not batched after completion)
- `tool_result` events update the timeline status as each tool finishes
- `chat_done` and `chat_error`
- `cargo check` passes
- `tsc --noEmit` passes

Summary by CodeRabbit