
feat: persist agent stream on browser refresh #482

Draft
Bl3f wants to merge 2 commits into main from cursor/stream-persistence-on-refresh-f08d

Conversation

@Bl3f (Contributor) commented Mar 19, 2026

Problem

When a user refreshes the browser while an agent is running, the stream connection drops. The agent continues running on the backend, but the frontend loses its connection — the stream stops, the sidebar loader disappears, and the response appears frozen.

Solution

Leverage the Vercel AI SDK's built-in reconnectToStream / resumeStream support to resume the stream after refresh, and keep the sidebar showing a spinner for running agents.

Backend

  • StreamBuffer (apps/backend/src/lib/stream-buffer.ts) — A new utility that consumes a ReadableStream, buffers all chunks in memory, and allows creating multiple independent readers. Each reader replays buffered content from the start and then continues with live data. This enables stream resumption without losing any data.

  • AgentManager changes — The agent's stream() method now wraps the output through a StreamBuffer, and a new resumeStream() method returns a reader that filters out initial-setup events (data-newChat, data-newUserMessage) so they don't replay on reconnect.

  • GET /api/agent/:chatId/stream — New resume endpoint that returns a createUIMessageStreamResponse from the buffer, or 204 if no agent is running.

  • chat.activeChats tRPC query — Returns the list of chat IDs with actively running agents for the current user.
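The StreamBuffer idea above can be sketched roughly as follows; the class shape, chunk type, and internals are assumptions for illustration, not the actual apps/backend/src/lib/stream-buffer.ts:

```typescript
// Hypothetical sketch: buffer every chunk of a source stream in memory and
// let any number of readers replay from the start, then follow live data.
type Chunk = string;

class StreamBuffer {
  private chunks: Chunk[] = [];
  private done = false;
  private waiters: (() => void)[] = [];

  // Consume the source stream exactly once, buffering every chunk.
  constructor(source: ReadableStream<Chunk>) {
    (async () => {
      const reader = source.getReader();
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        this.chunks.push(value!);
        this.notify();
      }
      this.done = true;
      this.notify();
    })();
  }

  // Wake readers that are waiting for new chunks.
  private notify() {
    for (const wake of this.waiters.splice(0)) wake();
  }

  // Each reader replays buffered chunks from index 0, then continues live.
  createReader(): ReadableStream<Chunk> {
    let i = 0;
    const self = this;
    return new ReadableStream<Chunk>({
      async pull(controller) {
        while (i >= self.chunks.length && !self.done) {
          await new Promise<void>((resolve) => self.waiters.push(resolve));
        }
        if (i < self.chunks.length) {
          controller.enqueue(self.chunks[i++]);
        } else {
          controller.close();
        }
      },
    });
  }
}
```

Since every reader starts at index 0, a client that reconnects mid-run first receives the accumulated chunks and then shares live output with any other readers; buffering everything in memory trades memory for simplicity, which fits streams bounded by a single agent run.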

Frontend

  • useActiveAgents hook — Polls chat.activeChats on app load (and every 10s while agents are active). Sets chatActivityStore.running = true for active agents so the sidebar shows spinners immediately after refresh. Also clears stale running state when agents finish between polls.

  • useAgent hook — Configures prepareReconnectToStreamRequest on the transport so the AI SDK's reconnectToStream hits the correct URL. Computes a shouldResume flag based on: chat exists, DB messages loaded, activity store shows running, and no frontend stream is already active. Passes this as the resume option to useChat.
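The resume gate and the reconnect URL wiring reduce to a couple of pure pieces. A minimal sketch, where the interface and flag names are assumptions standing in for the hook's internal state (the real hook feeds these into the AI SDK's useChat and transport):

```typescript
// Inputs to the shouldResume decision; names are illustrative.
interface ResumeInputs {
  chatExists: boolean;           // the chat record exists
  dbMessagesLoaded: boolean;     // persisted messages have been fetched
  storeShowsRunning: boolean;    // chatActivityStore marks this chat running
  frontendStreamActive: boolean; // a live stream is already attached
}

// Resume only when the backend reports a running agent and no local
// stream is already active, so reconnects are never triggered twice.
function computeShouldResume(s: ResumeInputs): boolean {
  return (
    s.chatExists &&
    s.dbMessagesLoaded &&
    s.storeShowsRunning &&
    !s.frontendStreamActive
  );
}

// Points the AI SDK's reconnectToStream at this PR's resume endpoint.
const prepareReconnectToStreamRequest = ({ id }: { id: string }) => ({
  api: `/api/agent/${id}/stream`,
});
```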

How it works

  1. User refreshes the browser while an agent is running
  2. On load, useActiveAgents queries chat.activeChats → backend returns running chat IDs
  3. chatActivityStore is updated → sidebar shows spinners for running chats
  4. When the user is on (or navigates to) a running chat, useAgent detects shouldResume = true
  5. useChat({ resume: true }) triggers the AI SDK's resumeStream(), which sends GET /api/agent/:chatId/stream
  6. Backend returns the full buffered stream (replay + live) → frontend processes it seamlessly
  7. If the agent finishes before the user reconnects, the endpoint returns 204 and the sidebar clears
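Steps 5–7 can be sketched as a minimal handler. The runningAgents registry and the content type here are assumptions; the real endpoint wraps the reader in the AI SDK's createUIMessageStreamResponse:

```typescript
// Hypothetical registry of running agents; each exposes a buffered reader
// that replays accumulated chunks and then continues with live output.
const runningAgents = new Map<
  string,
  { resumeStream(): ReadableStream<Uint8Array> }
>();

// GET /api/agent/:chatId/stream, sketched with the web Response API.
async function handleResumeRequest(chatId: string): Promise<Response> {
  const agent = runningAgents.get(chatId);
  if (!agent) {
    // Agent already finished: 204 tells the frontend there is
    // nothing to resume, and the sidebar spinner clears (step 7).
    return new Response(null, { status: 204 });
  }
  // Replay the buffered chunks first, then stream live data (step 6).
  return new Response(agent.resumeStream(), {
    status: 200,
    headers: { "content-type": "text/event-stream" },
  });
}
```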

Backend:
- Add StreamBuffer class that buffers stream chunks and allows creating
  multiple readers for stream resumption
- Modify AgentManager to buffer the stream output so reconnecting clients
  can replay accumulated data
- Add GET /api/agent/:chatId/stream endpoint that returns a resume stream
  (204 if no active agent)
- Add chat.activeChats tRPC query returning running chat IDs for the user

Frontend:
- Add useActiveAgents hook that polls for running backend agents and
  syncs chatActivityStore so the sidebar shows spinners after refresh
- Configure prepareReconnectToStreamRequest on the transport so the
  AI SDK's built-in reconnectToStream sends GET to the correct URL
- Pass resume flag to useChat when a backend agent is detected as
  running for the current chat (after DB messages are loaded)
- Add checkIsAgentRunning guard to shouldResume so active frontend
  streams don't trigger an unwanted reconnect
- Track previously active chat IDs in useActiveAgents and clear running
  state for agents that have finished between polls
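The last two frontend bullets (syncing chatActivityStore and clearing finished agents) boil down to a set diff between consecutive polls. A pure sketch with hypothetical names:

```typescript
// Given the previous and current sets of running chat IDs, report which
// chats to mark as running and which stale entries to clear.
function diffActiveChats(prev: string[], current: string[]) {
  const currentSet = new Set(current);
  const prevSet = new Set(prev);
  return {
    // Newly running: show a sidebar spinner.
    markRunning: current.filter((id) => !prevSet.has(id)),
    // Finished between polls: clear stale running state.
    clearRunning: prev.filter((id) => !currentSet.has(id)),
  };
}
```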
@github-actions commented

🚀 Preview Deployment

URL https://pr-482-747f906.preview.getnao.io
Commit 747f906

⚠️ No LLM API keys configured - you'll see the API key setup flow when trying to chat.


Preview will be automatically removed when this PR is closed.

