Fix: Restore Token Streaming for AI Responses #346
base: main
Conversation
This commit fixes a critical bug where the AI response would not stream to the UI. The previous refactoring of the `researcher` agent incorrectly awaited the full text response before streaming, causing the UI to remain blank until the entire generation was complete.

The agent has been refactored to use a parallel streaming and tool-execution pattern:

- The promises for `toolResults` and `toolCalls` are initiated to run in the background.
- The agent immediately begins iterating over the `textStream`, pushing tokens to the UI incrementally as they are generated.
- The tool promises are awaited only after the text stream is complete.

This restores the intended streaming behavior and improves performance by running tool execution concurrently with text generation.
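A minimal sketch of this pattern, assuming the Vercel AI SDK's `streamText` API (v4-style); the model choice, the function name, and the `pushToken` callback are placeholders for illustration, not this repository's actual code:

```ts
import { openai } from '@ai-sdk/openai'
import { streamText } from 'ai'

// `pushToken` stands in for whatever pushes tokens into the UI stream.
export async function streamResearcherReply(
  messages: { role: 'user' | 'assistant'; content: string }[],
  pushToken: (token: string) => void
) {
  const result = streamText({
    model: openai('gpt-4o'),
    messages
  })

  // Iterate the text stream immediately; do not await the full completion.
  let fullResponse = ''
  for await (const delta of result.textStream) {
    fullResponse += delta
    pushToken(delta)
  }

  // The tool promises have been resolving in the background; await them only
  // once the text stream is drained.
  const [toolCalls, toolResults] = await Promise.all([
    result.toolCalls,
    result.toolResults
  ])

  return { fullResponse, toolCalls, toolResults }
}
```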
PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR.

Compliance status legend:
- 🟢 Fully Compliant
- 🟡 Partially Compliant
- 🔴 Not Compliant
- ⚪ Requires Further Human Verification
- 🏷️ Compliance label

PR Code Suggestions ✨

Explore these optional code suggestions:
- Starting a new conversation without input crashes `submit` due to `userInput.toLowerCase()` on `undefined`.
- The inquiry flow records the question as a `user` message and doesn't add an assistant message, likely breaking UI semantics.
- The Web Worker is re-created on every render due to a non-memoized URL; this is a significant performance issue and risks dropped messages.
- `ensureConversations` and message handling frequently mutate state in place; prefer pure, immutable updates. Additionally, `onSetAIState` repeatedly appends `end` to saved chats.

Additional notes (3)

- Maintainability | app/actions.tsx:184-186
  Calling `toLowerCase()` on `userInput` will throw when submitting a "New Conversation" without any `input`/`related_query` fields. The `newChat` flow currently posts only `newChat=true`, which leaves `userInput` as `undefined` and crashes here. This breaks the "New Conversation" button. Guard the branch by short-circuiting when `newChat` is present but no text input exists, or default `userInput` to an empty string and skip the special-case check.

- Maintainability | app/actions.tsx:212-225
  `groupId` is created but never used in this block, while `groupeId` is declared above and not used either after the refactor. This is dead code and confusing. Consider removing the unused IDs or standardizing the grouping behavior if still needed.

- Performance | workers/turf.worker.ts:1-4
  Minor: In the worker, both named imports and `import * as turf` are used. Since only `turf.area` is needed beyond the named imports, import `area` directly to avoid bundling the whole turf namespace (see the sketch after these notes).
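A hedged sketch of that import change (the exact specifiers depend on the installed turf version; turf v7 exposes named exports from per-function packages):

```ts
// Replaces `import * as turf from '@turf/turf'` in workers/turf.worker.ts;
// keep whatever named imports the file already has.
import { area } from '@turf/area'

// ...then call `area(feature)` wherever `turf.area(feature)` was used.
```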
Summary of changes
- Migrated AI state from a single chat to a multi-conversation structure (`conversations` array) and updated all consumers accordingly (`submit`, `onGetUIState`, `onSetAIState`, Chat, pages); a sketch of the inferred state shape follows this list.
- Restored incremental token streaming in `researcher` by iterating `result.textStream`, while collecting tool calls/results concurrently.
- Added a new chat creation flow via the `newChat` form field and updated ChatPanel to support starting a new conversation.
- Introduced a Web Worker (`workers/turf.worker.ts`) for Turf.js geometry calculations and a `useWorker` hook to offload heavy computations from the main thread in the map component.
- Enhanced the MCP map tools hook: safer JSON parsing, tool mapping by name, and role validation in the search page. Minor UI tweaks (map toggle test id, separators between conversations).
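The multi-conversation state shape, as inferred from the hunks below (`ensureConversations`, `onSetAIState`); fields beyond those visible in the diff are not shown, and the `AIMessage` import path is an assumption:

```ts
import type { AIMessage } from '@/lib/types' // path assumed for illustration

interface Conversation {
  id: string
  chatId: string
  messages: AIMessage[]
}

interface AIState {
  conversations: Conversation[]
}
```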
```diff
  lastConversation.messages.push({
    id: nanoid(),
    role: 'user',
    content: inquiry?.question || '',
    type: 'inquiry'
  } as AIMessage)
  aiState.done({
-   ...aiState.get(),
-   messages: [
-     ...aiState.get().messages,
-     {
-       id: nanoid(),
-       role: 'assistant',
-       content: `inquiry: ${inquiry?.question}`
-     }
-   ]
+   ...currentAIState,
+   conversations: [...currentAIState.conversations]
  })
```
The inquiry branch attributes the generated question as a `user` message and doesn't produce an assistant message for the UI/history. Previously it recorded an assistant message (prefixed with `inquiry:`), which makes more sense for a system asking the user a question. As-is, the inquiry message may not render as intended and misattributes roles.
Suggestion
Record the inquiry as an assistant message and keep the UI consistent with the prior behavior:
```ts
lastConversation.messages.push({
  id: nanoid(),
  role: 'assistant',
  content: `inquiry: ${inquiry?.question ?? ''}`,
  type: 'inquiry',
} as AIMessage)
aiState.done({
  ...currentAIState,
  conversations: [...currentAIState.conversations]
})
```

If you prefer a richer UI element, append a dedicated UI section instead, but keep the assistant role in history for clarity.
```ts
// Ensure at least one conversation exists
if (aiState.conversations.length === 0) {
  aiState.conversations.push({
    id: nanoid(),
    chatId: nanoid(),
    messages: []
  })
}
```
`ensureConversations` mutates the provided state object (it pushes into `aiState.conversations`). This introduces hidden side effects, especially problematic when used in read paths like `onGetUIState`. State helpers should be pure and return new objects to reduce race conditions and accidental state drift.
Suggestion
Avoid in-place mutation and always return a new object that preserves any existing fields:
```ts
function ensureConversations(aiState: AIState): AIState {
  if (!aiState.conversations || !Array.isArray(aiState.conversations)) {
    return {
      ...aiState,
      conversations: [{ id: nanoid(), chatId: nanoid(), messages: [] }]
    }
  }
  if (aiState.conversations.length === 0) {
    return {
      ...aiState,
      conversations: [
        ...aiState.conversations,
        { id: nanoid(), chatId: nanoid(), messages: [] }
      ]
    }
  }
  return aiState
}
```

Then update call sites to work with the returned copy without mutating it afterwards.
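For example, a call site would then read (identifiers borrowed from the hunks in this PR; the exact surrounding context is assumed):

```ts
// Work with the returned copy rather than the original state object.
const currentAIState = ensureConversations(aiState.get())
const lastConversation =
  currentAIState.conversations[currentAIState.conversations.length - 1]
```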
```ts
const updatedMessages: AIMessage[] = [
  ...messages,
  {
    id: nanoid(),
    role: 'assistant',
    content: `end`,
    type: 'end'
  }
]
```
In `onSetAIState`, an `end` message is appended every time the function runs for any conversation containing a response. This can lead to repeatedly saving chats with extra `end` messages. Even if filtered in the UI, it pollutes stored history.
Suggestion
Only append `end` once, or skip appending if the last message is already `end`:

```ts
const needsEnd = messages[messages.length - 1]?.type !== 'end'
const updatedMessages = needsEnd
  ? [...messages, { id: nanoid(), role: 'assistant', content: 'end', type: 'end' }]
  : messages
```

Alternatively, avoid appending `end` here and let the producer add it explicitly when finishing.
```ts
// Find the conversation that was updated and save it.
for (const conversation of state.conversations) {
  if (conversation.messages.some(e => e.type === 'response')) {
    const { chatId, messages } = conversation
    const createdAt = new Date()
    const path = `/search/${chatId}`

    let title = 'Untitled Chat'
    if (messages.length > 0) {
      const firstMessageContent = messages[0].content
      if (typeof firstMessageContent === 'string') {
        try {
          const parsedContent = JSON.parse(firstMessageContent)
          title = parsedContent.input?.substring(0, 100) || 'Untitled Chat'
        } catch (e) {
          title = firstMessageContent.substring(0, 100)
        }
      } else if (Array.isArray(firstMessageContent)) {
        const textPart = (
          firstMessageContent as { type: string; text?: string }[]
        ).find(p => p.type === 'text')
        title =
          textPart && textPart.text
            ? textPart.text.substring(0, 100)
            : 'Image Message'
      }
    }

    const updatedMessages: AIMessage[] = [
      ...messages,
      {
        id: nanoid(),
        role: 'assistant',
        content: `end`,
        type: 'end'
      }
    ]

    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const actualUserId = await getCurrentUserIdOnServer()

    if (!actualUserId) {
      console.error(
        'onSetAIState: User not authenticated. Chat not saved.'
      )
      continue // Continue to the next conversation
    }

    const chat: Chat = {
      id: chatId,
      createdAt,
      userId: actualUserId,
      path,
      title,
      messages: updatedMessages
    }
    await saveChat(chat, actualUserId)
  }
}
```
`onSetAIState` saves every conversation that has any `response` message on each state change. This can perform redundant writes and potentially overwrite chat history repeatedly. Persist only the latest (active) conversation, or track the last saved message index/ID to avoid re-saving older conversations on each update.
Suggestion
Limit persistence to the latest conversation to avoid redundant writes:
```ts
onSetAIState: async ({ state }) => {
  'use server'
  const conversation = state.conversations[state.conversations.length - 1]
  if (!conversation || !conversation.messages.some(e => e.type === 'response'))
    return
  const { chatId, messages } = conversation
  // ... same save logic as before, using `conversation` only ...
}
```

Alternatively, keep a `lastSavedAt` or `lastSavedMessageCount` to skip saving unchanged conversations.
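A minimal sketch of the `lastSavedMessageCount` alternative (the map and helper below are illustrative, not part of this PR; `saveChat` and `Chat` are the project's existing exports):

```ts
// Tracks how many messages were last persisted per chat. A module-level map
// only survives for the server process lifetime; a field on the stored chat
// record would be more robust.
const lastSavedMessageCount = new Map<string, number>()

async function saveIfChanged(chat: Chat, userId: string) {
  if (lastSavedMessageCount.get(chat.id) === chat.messages.length) return // unchanged
  await saveChat(chat, userId)
  lastSavedMessageCount.set(chat.id, chat.messages.length)
}
```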
```tsx
const turfWorker = useWorker<any[]>(new URL('../../workers/turf.worker.ts', import.meta.url));

// const [isMapLoaded, setIsMapLoaded] = useState(false); // Removed local state
```
The Web Worker URL is created inline with `new URL(...)` and passed into `useWorker`. This creates a new `URL` instance every render, causing the `useEffect` inside `useWorker` to tear down and re-create the worker on each render. That's a big performance hit and may drop in-flight messages.
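The hook itself isn't shown in this diff, but a plausible shape (an assumption, for illustration) makes the failure mode concrete: any effect keyed on the `url` object identity re-runs its teardown/setup cycle whenever the caller passes a fresh `new URL(...)`.

```tsx
import { useEffect, useRef, useState } from 'react'

// Hypothetical sketch of a useWorker hook; the project's actual implementation
// may differ, but the dependency on `url` is what matters here.
export function useWorker<T>(url: URL) {
  const [data, setData] = useState<T | null>(null)
  const workerRef = useRef<Worker | null>(null)

  useEffect(() => {
    const worker = new Worker(url) // `{ type: 'module' }` may be needed for module workers
    workerRef.current = worker
    worker.onmessage = (e: MessageEvent<T>) => setData(e.data)
    // Cleanup runs whenever `url` changes identity, i.e. every render if the
    // caller constructs a fresh URL inline.
    return () => worker.terminate()
  }, [url])

  const postMessage = (message: unknown) => workerRef.current?.postMessage(message)
  return { data, postMessage }
}
```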
Suggestion
Memoize the worker URL so it remains stable across renders:
```tsx
const workerUrl = useMemo(
  () => new URL('../../workers/turf.worker.ts', import.meta.url),
  []
)
const turfWorker = useWorker<any[]>(workerUrl)
```
```tsx
useEffect(() => {
  if (turfWorker.data && map.current && drawRef.current) {
    const features = drawRef.current.getAll().features;
    const currentDrawnFeatures: Array<{
      id: string;
      type: 'Polygon' | 'LineString';
      measurement: string;
      geometry: any;
    }> = [];

    turfWorker.data.forEach(result => {
      const { id, calculation } = result;
      if (!calculation) return;

      const feature = features.find(f => f.id === id);
      if (!feature) return;

      let featureType: 'Polygon' | 'LineString' | null = null;
      let measurement = '';
      let coordinates: [number, number] | undefined;

      if (calculation.type === 'Polygon') {
        featureType = 'Polygon';
        measurement = formatMeasurement(calculation.area, true);
        coordinates = calculation.center;
      } else if (calculation.type === 'LineString') {
        featureType = 'LineString';
        measurement = formatMeasurement(calculation.length, false);
        coordinates = calculation.center;
      }

      if (featureType && measurement && coordinates && map.current) {
        const el = document.createElement('div');
        el.className = `${featureType.toLowerCase()}-label`;
        el.style.background = 'rgba(255, 255, 255, 0.8)';
        el.style.padding = '4px 8px';
        el.style.borderRadius = '4px';
        el.style.fontSize = '12px';
        el.style.fontWeight = 'bold';
        el.style.color = '#333333';
        el.style.boxShadow = '0 2px 4px rgba(0,0,0,0.2)';
        el.style.pointerEvents = 'none';
        el.textContent = measurement;

        const marker = new mapboxgl.Marker({ element: el })
          .setLngLat(coordinates)
          .addTo(map.current);

        if (featureType === 'Polygon') {
          polygonLabelsRef.current[id] = marker;
        } else {
          lineLabelsRef.current[id] = marker;
        }

        currentDrawnFeatures.push({
          id,
          type: featureType,
          measurement,
          geometry: feature.geometry,
        });
      }
    });

    setMapData(prevData => ({ ...prevData, drawnFeatures: currentDrawnFeatures }));
  }
}, [turfWorker.data, formatMeasurement, setMapData])
```
Race condition in label rendering: labels are removed in `updateMeasurementLabels` before posting to the worker, but label creation happens later in the `turfWorker.data` effect. If multiple worker responses return out of order, older responses can re-add stale labels, causing duplicates and leaks. Introduce a request ID to discard stale responses and/or clear existing markers at the start of the data effect.
Suggestion
Introduce a monotonically increasing request ID and validate responses before mutating the map:
In the Mapbox component:
```tsx
const requestIdRef = useRef(0)

const updateMeasurementLabels = useCallback(() => {
  if (!map.current || !drawRef.current) return
  // Remove existing labels immediately
  Object.values(polygonLabelsRef.current).forEach(m => m.remove())
  Object.values(lineLabelsRef.current).forEach(m => m.remove())
  polygonLabelsRef.current = {}
  lineLabelsRef.current = {}
  const features = drawRef.current.getAll().features
  requestIdRef.current += 1
  const requestId = requestIdRef.current
  turfWorker.postMessage({ features, requestId })
  // eslint-disable-next-line react-hooks/exhaustive-deps
}, [turfWorker.postMessage])

useEffect(() => {
  if (!turfWorker.data || !map.current || !drawRef.current) return
  if (turfWorker.data.requestId !== requestIdRef.current) return // stale response
  // ...then proceed to add markers from turfWorker.data.results...
}, [turfWorker.data, formatMeasurement, setMapData])
```

In the worker (echo back the `requestId`):

```ts
self.onmessage = (event) => {
  const { features, requestId } = event.data
  const results = features.map(/* existing calculations */)
  self.postMessage({ requestId, results })
}
```

Adjust the data loop to consume `results` instead of the bare array.
```tsx
const handleNewConversation = async () => {
  clearAttachment()
  await clearChat()
  const formData = new FormData()
  formData.append('newChat', 'true')
  const responseMessage = await submit(formData)
  setMessages(currentMessages => [...currentMessages, responseMessage as any])
}
```
Starting a new conversation currently appends the server response message to the UI message list, even though the server-side action does not produce a visible UI component for this case. This results in a blank/empty UI row. New conversation creation should not append a message to the UI list—just reset or leave it as-is until the user sends the first input.
Suggestion
Avoid appending the response message on new conversation creation:
```tsx
const handleNewConversation = async () => {
  clearAttachment()
  const formData = new FormData()
  formData.append('newChat', 'true')
  await submit(formData) // No UI message append here
  // Optionally, clear UI messages if you want a clean slate:
  // setMessages([])
}
```

Optionally, remove the unused `newChat` parameter from `handleSubmit` to simplify the API.
User description
This submission fixes a critical bug that prevented AI responses from streaming to the UI. The `researcher` agent was refactored to correctly handle the Vercel AI SDK's response stream, restoring incremental text generation while also running server-side tool calls in parallel for improved performance.

PR created automatically by Jules for task 10799411930901360787
PR Type
Bug fix, Enhancement
Description
- Restore token streaming for AI responses by refactoring the `researcher` agent
- Refactor AI state management to support multiple conversations
- Offload geospatial calculations to a Web Worker for performance
- Improve error handling and JSON parsing robustness
File Walkthrough
6 files
- New React hook for Web Worker management
- Improve MCP tool handling and JSON parsing
- New Web Worker for geospatial calculations
- Refactor AI state to support multiple conversations
- Replace clear chat with new conversation functionality
- Offload Turf calculations to Web Worker

2 files
- Update initial AI state structure for conversations
- Adapt search page to new conversation state structure

2 files
- Update chat component for nested conversation state
- Restore incremental token streaming with parallel tool execution

1 file
- Add test ID to drawing mode button