fix(server): ensure final assistant answer renders immediately in existing threads#215

Open
thatdaveguy1 wants to merge 1 commit into getpaseo:main from thatdaveguy1:fix/final-answer-render-race-v2

Conversation

@thatdaveguy1
Contributor

Problem

When opening an existing thread and immediately asking a follow-up, the final assistant answer sometimes never appears — the turn completes but the last message is missing or shows partial/stale content.

Two coordinated bugs cause this:

Bug 1 — Server drops buffered messages on turn completion

In codex-app-server-agent.ts, item/agentMessage/delta events are collected into pendingAgentMessages and emitted lazily. But the turn_completed handler emits the turn_completed event without first flushing pending messages when status is "completed". The client sees the turn close and never receives the final assistant text.
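The server side of the race can be sketched as a minimal model. Names mirror the PR description, but `BuggyAgent` and its methods are illustrative stand-ins, not the real agent class:

```typescript
// Minimal model of Bug 1: deltas are buffered but never flushed before
// the turn closes. BuggyAgent is a stand-in, not the real agent class.
type BuggyAgentEvent =
  | { type: "timeline"; item: { type: "assistant_message"; text: string } }
  | { type: "turn_completed"; status: string };

class BuggyAgent {
  pendingAgentMessages = new Map<string, string>();
  events: BuggyAgentEvent[] = [];

  onAgentMessageDelta(itemId: string, chunk: string): void {
    // item/agentMessage/delta chunks accumulate here, to be promoted
    // to a timeline item later.
    const prev = this.pendingAgentMessages.get(itemId) ?? "";
    this.pendingAgentMessages.set(itemId, prev + chunk);
  }

  onTurnCompleted(status: string): void {
    // BUG: the turn is closed without flushing pendingAgentMessages,
    // so a buffered final answer never reaches the client.
    this.events.push({ type: "turn_completed", status });
  }
}

const buggyAgent = new BuggyAgent();
buggyAgent.onAgentMessageDelta("item-1", "The final answer.");
buggyAgent.onTurnCompleted("completed");
// buggyAgent.events contains only turn_completed; the text for "item-1"
// is stranded in the buffer.
```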

Bug 2 — Client replaces streamed content with a stale tail

In stream.ts, flushHeadToTail() builds an append-only set of tail IDs. When the finalized head arrives, items with matching IDs are not replaced — they're just skipped. The UI ends up showing the partially-streamed tail content even after the head delivers the full final text.
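The append-only behavior can be sketched as follows; the `Item` shape here is an assumption, and the real `stream.ts` types are richer:

```typescript
// Sketch of Bug 2: the head-to-tail flush only appends, never replaces.
interface BuggyItem {
  id: string;
  text: string;
}

function flushHeadToTailBuggy(head: BuggyItem[], tail: BuggyItem[]): BuggyItem[] {
  const tailIds = new Set(tail.map((t) => t.id));
  for (const item of head) {
    // BUG: a head item whose ID already exists in tail is skipped,
    // so the finalized text never overwrites the partial stream.
    if (!tailIds.has(item.id)) {
      tail.push(item);
      tailIds.add(item.id);
    }
  }
  return tail;
}

const staleTail = [{ id: "m1", text: "partial" }];
const finalizedHead = [{ id: "m1", text: "partial plus the full final answer" }];
const buggyResult = flushHeadToTailBuggy(finalizedHead, staleTail);
// buggyResult[0].text is still "partial": the finalized item was dropped.
```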

Fix

Server (codex-app-server-agent.ts):

  • Added emitBufferedAssistantMessages() private method that flushes pendingAgentMessages as timeline/assistant_message events
  • Called before turn_completed emission whenever status === "completed" and the map is non-empty
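A simplified sketch of the server fix described above; the class and event plumbing are reduced to essentials and this is not the actual codex-app-server-agent.ts code:

```typescript
type ServerEvent =
  | { type: "timeline"; item: { type: "assistant_message"; text: string } }
  | { type: "turn_completed"; status: string };

class FixedAgent {
  pendingAgentMessages = new Map<string, string>();
  emittedItemCompletedIds = new Set<string>();
  events: ServerEvent[] = [];

  private emitBufferedAssistantMessages(): void {
    for (const [itemId, text] of this.pendingAgentMessages.entries()) {
      if (!text) continue;
      this.events.push({ type: "timeline", item: { type: "assistant_message", text } });
      this.emittedItemCompletedIds.add(itemId);
    }
    this.pendingAgentMessages.clear();
  }

  onTurnCompleted(status: string): void {
    // Flush any buffered final answer before the turn closes.
    if (status === "completed" && this.pendingAgentMessages.size > 0) {
      this.emitBufferedAssistantMessages();
    }
    this.events.push({ type: "turn_completed", status });
  }
}

const fixedAgent = new FixedAgent();
fixedAgent.pendingAgentMessages.set("item-1", "The final answer.");
fixedAgent.onTurnCompleted("completed");
// fixedAgent.events: assistant_message first, then turn_completed.
```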

Client (stream.ts):

  • Replaced flushHeadToTail() internals with a Map<id, index> lookup
  • When a finalized head item ID matches an existing tail entry, the tail entry is replaced in-place rather than appended
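The client fix can be sketched as below, again with an assumed minimal item shape rather than the real `stream.ts` types:

```typescript
interface StreamItem {
  id: string;
  text: string;
}

function flushHeadToTail(head: StreamItem[], tail: StreamItem[]): StreamItem[] {
  // Map each existing tail ID to its index so matches can be replaced in place.
  const indexById = new Map<string, number>(
    tail.map((t, i) => [t.id, i] as [string, number]),
  );
  for (const item of head) {
    const at = indexById.get(item.id);
    if (at !== undefined) {
      // The finalized head item wins over the stale/partial tail entry.
      tail[at] = item;
    } else {
      indexById.set(item.id, tail.length);
      tail.push(item);
    }
  }
  return tail;
}

const tailItems = [{ id: "m1", text: "partial" }];
const headItems = [
  { id: "m1", text: "the full final answer" },
  { id: "m2", text: "a brand-new item" },
];
const flushed = flushHeadToTail(headItems, tailItems);
// flushed: m1 replaced in place, m2 appended.
```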

Tests

Two regression tests added — one per bug:

  • codex-app-server-agent.test.ts: "emits buffered assistant text before task_complete closes the turn"
  • stream-event.test.ts: "replaces stale tail content with finalized head content on turn completion"

All existing tests continue to pass (9/9 server, 7/7 stream).

fix(server): ensure final assistant answer renders immediately in existing threads

Two related races prevented the last assistant message from appearing in the
UI after a turn completed.

Server (codex-app-server-agent):
- Codex can emit codex/event/task_complete before the buffered assistant
  delta has been promoted to a timeline item.  Add
  emitBufferedAssistantMessages() and call it when turn_completed status
  is 'completed' and pendingAgentMessages is non-empty.

Client (stream.ts):
- The head→tail completion flush only appended items whose IDs were absent
  from tail.  If tail already held a stale/partial item with the same ID
  the finalized version was silently dropped.  Update flushHeadToTail() to
  replace existing tail entries when the finalized head item differs.

Tests:
- Regression test: emits buffered assistant text before task_complete closes
  the turn (server).
- Regression test: replaces stale tail content with finalized head content
  on turn completion (client).
Copilot AI review requested due to automatic review settings April 7, 2026 14:17

Copilot AI left a comment


Pull request overview

Fixes a regression where the final assistant message can be missing or stale when continuing an existing thread. On the server, buffered assistant deltas are now flushed before turn completion; on the client, stale tail items are replaced with finalized head items.

Changes:

  • Server: flushes buffered item/agentMessage/delta content as timeline/assistant_message events before emitting turn_completed.
  • Client: updates flushHeadToTail() to replace existing tail items in-place when a finalized head item shares the same ID.
  • Tests: adds one regression test for each bug.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| packages/server/src/server/agent/providers/codex-app-server-agent.ts | Adds a buffered-assistant flush helper and calls it before turn_completed on successful completion. |
| packages/server/src/server/agent/providers/codex-app-server-agent.test.ts | Adds regression test asserting buffered assistant text is emitted before turn_completed. |
| packages/app/src/types/stream.ts | Changes head→tail flush logic to replace stale tail entries when IDs match finalized head items. |
| packages/app/src/types/stream-event.test.ts | Adds regression test ensuring tail content is replaced by finalized head content on completion. |


Comment on lines +3177 to +3189
```typescript
private emitBufferedAssistantMessages(): void {
  for (const [itemId, text] of this.pendingAgentMessages.entries()) {
    if (!text) {
      continue;
    }
    this.emitEvent({
      type: "timeline",
      provider: CODEX_PROVIDER,
      item: { type: "assistant_message", text },
    });
    this.emittedItemCompletedIds.add(itemId);
  }
  this.pendingAgentMessages.clear();
```

Copilot AI Apr 7, 2026


emitBufferedAssistantMessages() calls this.emitEvent() for each buffered entry, but emitEvent() clears pendingAgentMessages whenever it emits a timeline assistant_message. That means the first emitted buffered message will clear the map during iteration and can prevent remaining buffered messages from being emitted. Consider snapshotting entries (e.g., const entries = [...pendingAgentMessages.entries()]), clearing the map before emitting, or emitting via notifySubscribers() without triggering the emitEvent() side effect so all buffered messages reliably flush.
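One way to apply this suggestion, as a sketch: snapshot the buffered entries and clear the map before emitting, so the side effect cannot truncate the iteration. The `emitEvent()` side effect below is simulated; names and behavior are assumptions based on the review comment, not the real implementation:

```typescript
class SnapshotAgent {
  pendingAgentMessages = new Map<string, string>([
    ["a", "first buffered message"],
    ["b", "second buffered message"],
  ]);
  emitted: string[] = [];

  private emitEvent(text: string): void {
    this.emitted.push(text);
    // Simulated side effect from the review: emitting an assistant
    // message clears the pending buffer.
    this.pendingAgentMessages.clear();
  }

  emitBufferedAssistantMessages(): void {
    // Snapshot first, then clear, so emitEvent()'s clear() cannot
    // cut the iteration short.
    const entries = [...this.pendingAgentMessages.entries()];
    this.pendingAgentMessages.clear();
    for (const [, text] of entries) {
      if (!text) continue;
      this.emitEvent(text);
    }
  }
}

const snapshotAgent = new SnapshotAgent();
snapshotAgent.emitBufferedAssistantMessages();
// Both buffered messages are emitted despite the side effect.
```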

