Implement System-Wide Concurrency and Performance Improvements #345
base: main
Conversation
This commit introduces several major architectural changes to improve the application's overall concurrency, performance, and scalability.
1. **Concurrent Chat Conversations:**
* Refactored the core Vercel AI SDK state (`AIState`) to support an array of independent `Conversation` objects.
* This allows users to initiate new chat queries while previous ones are still running, with all conversations rendered in a single, sequential view.
* Updated the `submit` action, UI components, and state management lifecycle to handle the new concurrent state structure.
2. **Client-Side Performance with Web Workers:**
* Offloaded expensive `turf.js` geometric calculations (area, length) in the map component to a client-side Web Worker.
* Created a generic and reusable `useWorker` React hook to abstract the complexity of managing Web Worker lifecycles and communication, establishing a scalable pattern for future client-side optimizations.
* This ensures the map UI remains smooth and responsive during complex drawing operations.
3. **Parallel Server-Side Tool Execution:**
* Refactored the `researcher` agent to correctly leverage the Vercel AI SDK's built-in support for parallel tool execution.
* By awaiting the final result of the `streamText` call, the agent now allows the SDK to run multiple AI tool calls concurrently, reducing latency for complex server-side queries.
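The multi-conversation state described in point 1 can be sketched as a pure, immutable update. The `Conversation` and `AIMessage` shapes below are assumptions based on the names used in this PR, and `appendMessage` is a hypothetical helper, not code from the branch:

```typescript
// Hypothetical shapes based on names used in this PR; exact fields are assumptions.
interface AIMessage {
  id: string
  role: 'user' | 'assistant'
  content: string
  type?: string
}

interface Conversation {
  id: string
  chatId: string
  messages: AIMessage[]
}

interface AIState {
  conversations: Conversation[]
}

// Append a message to one conversation by id, immutably, so that parallel
// requests writing to other conversations are never clobbered.
function appendMessage(
  state: AIState,
  conversationId: string,
  message: AIMessage
): AIState {
  return {
    ...state,
    conversations: state.conversations.map(c =>
      c.id === conversationId ? { ...c, messages: [...c.messages, message] } : c
    )
  }
}
```

Updating by id rather than always targeting the last conversation is what keeps concurrent submissions from overwriting each other, a point the review comments below return to.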
Walkthrough

The pull request restructures the AI state model from a flat messages array to a multi-conversation architecture, storing an array of Conversation objects. Complementary changes update state handlers, message routing, UI transformation logic, and introduce Web Worker infrastructure for offloading geographic computations. Chat panel UI handling is refined to support new conversation creation.
Sequence Diagram

```mermaid
sequenceDiagram
    participant UI as Chat UI
    participant Actions as app/actions.tsx
    participant AIState as AI State
    participant Conv as Conversation<br/>(last active)
    participant Conv2 as Conversation<br/>(prev)
    participant Worker as turf.worker
    participant MapComp as mapbox-map
    UI->>Actions: submit (user message)
    Actions->>AIState: update conversations array
    AIState->>Conv: append user message to last conversation
    Note over Conv: { id, chatId, messages: [..., userMsg] }
    UI->>Actions: newChat=true
    Actions->>AIState: create new Conversation & append
    Note over AIState: conversations: [...Conv2, ...newConv]
    AIState->>Conv2: (previous conversation persisted)
    AIState->>Conv: (new empty conversation)
    MapComp->>Worker: postMessage({ features })
    Worker->>Worker: compute area/centroid (Polygon)<br/>compute length/center (LineString)
    Worker-->>MapComp: postMessage(results)
    MapComp->>MapComp: update featureType, measurement,<br/>label coordinates
    MapComp->>MapComp: setMapData with drawnFeatures
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

The review demands careful attention to the interconnected state architecture change across multiple files, worker lifecycle management integration, and per-conversation message routing logic. While changes follow a consistent pattern, the refactoring spans diverse concerns (state shape, UI transformation, worker threading, streaming logic) requiring separate validation for each area.
Pre-merge checks and finishing touches

- ❌ Failed checks (1 warning, 1 inconclusive)
- ✅ Passed checks (1 passed)
PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Compliance status legend
- 🟢 Fully Compliant
- 🟡 Partially Compliant
- 🔴 Not Compliant
- ⚪ Requires Further Human Verification
- 🏷️ Compliance label
@coderabbitai review this pull request for improvements
PR Code Suggestions ✨

Explore these optional code suggestions:
@ngoiyaeric: I'll review this pull request for improvements. This looks like a substantial PR with architectural changes for concurrency and performance enhancements. I'll analyze the changes across all the modified files and provide feedback.

✅ Actions performed: Review triggered.
@coderabbitai review

This error message is a classic JavaScript runtime error that occurs when code tries to access a property of an `undefined` value.

🧠 What It Means

🔍 Likely Causes

Based on the stack trace and the fact that this is happening in a production Vercel deployment, here are some common culprits:

🛠️ How to Fix It

Here's a step-by-step approach tailored to your debugging style:

If you want, I can help you trace this in your actual codebase—just paste the relevant component or data-fetching logic. Want to dig into the source map or check the Vercel deployment logs next?
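The defensive pattern this advice points toward can be illustrated generically (the failing property in the actual deployment is collapsed in this thread; `ApiResponse` and `displayName` below are hypothetical):

```typescript
// Hypothetical response shape: every level may legitimately be missing
// before data has loaded or when a fetch fails.
interface ApiResponse {
  user?: { profile?: { displayName?: string } }
}

// Optional chaining plus a fallback turns the crash into a rendered default.
function displayName(res: ApiResponse | undefined): string {
  return res?.user?.profile?.displayName ?? 'Anonymous'
}
```

The same guard belongs at any component boundary that renders fetched data before it is guaranteed to exist.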
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed (length of output: 3615)
🏁 Script executed (length of output: 1984)
🏁 Script executed (length of output: 1580)
🏁 Script executed (length of output: 3406)

@ngoiyaeric: I've found the root cause of your error! 🎯 The issue is that your code is accessing …

🔍 Root Cause

In …
- Using `lastConversation` plus stale snapshots of `aiState` can overwrite concurrent updates and mis-route messages during parallel requests; target conversations by id and update from the latest state.
- The Web Worker URL is not memoized and uses a leading-slash path, likely causing worker re-creation on every render and potential bundler resolution issues.
- `useWorker`'s effect depends on the URL object identity, making it sensitive to equivalent `URL` instances and causing unnecessary worker churn.
- The researcher agent no longer streams text increments, likely hurting perceived responsiveness; consider restoring progressive streaming while keeping tools parallelized.
Additional notes (2)

- Readability | app/actions.tsx:158-158
  `groupeId` appears to be a misspelling and is used repeatedly to group related messages. Minor, but renaming improves readability and reduces future confusion.
- Performance | lib/agents/researcher.tsx:87-104
  By switching from streaming the model's output to awaiting the final result, you've likely removed incremental token streaming to the UI. This can noticeably degrade perceived latency and responsiveness, even if tool calls execute in parallel.
Summary of changes

- Refactored AI state to support multiple concurrent conversations by introducing a `conversations: Conversation[]` structure and updating all message mutations to operate on the last conversation.
- Adjusted server action `submit` (and related UI state handling) to append messages to the last conversation, added support for starting a new conversation via a `newChat` flag, and updated `onGetUIState` and `onSetAIState` accordingly.
- Reworked `ChatPanel` to replace "Clear Chat" with "New Conversation," posting `newChat` to the server and adjusting client-side UI updates.
- Introduced a generic `useWorker` React hook and a new `turf.worker.ts` Web Worker to offload turf.js measurements for polygons and lines; updated `mapbox-map.tsx` to use the worker and render labels from worker results.
- Updated the `researcher` agent to rely on the SDK's parallel tool execution by awaiting the aggregated result instead of consuming the stream manually.
- Minor tweaks: added a test id to the drawing mode button; fixed an import path in `mapbox_mcp/hooks.ts`; updated `app/page.tsx` and `app/search/[id]/page.tsx` to use the new AI state shape.
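`workers/turf.worker.ts` itself is not quoted in this thread. A minimal sketch of the message protocol it might implement is below; the naive planar shoelace/centroid math stands in for the turf.js calls (`turf.area`, `turf.length`, `turf.center`) the real worker presumably uses, and all type names here are illustrative:

```typescript
// Illustrative feature/result shapes; the real worker operates on GeoJSON.
type Feature = {
  id: string
  geometry: { type: 'Polygon' | 'LineString'; coordinates: any }
}

type Calculation =
  | { type: 'Polygon'; area: number; center: [number, number] }
  | { type: 'LineString'; length: number; center: [number, number] }

// Naive average-of-vertices centroid (stand-in for turf.center).
function centroid(coords: [number, number][]): [number, number] {
  let sx = 0
  let sy = 0
  for (const [x, y] of coords) {
    sx += x
    sy += y
  }
  return [sx / coords.length, sy / coords.length]
}

function measure(features: Feature[]): { id: string; calculation: Calculation | null }[] {
  return features.map(f => {
    if (f.geometry.type === 'Polygon') {
      const ring: [number, number][] = f.geometry.coordinates[0]
      // Shoelace formula (planar stand-in for turf.area).
      let area = 0
      for (let i = 0; i < ring.length - 1; i++) {
        area += ring[i][0] * ring[i + 1][1] - ring[i + 1][0] * ring[i][1]
      }
      return {
        id: f.id,
        calculation: { type: 'Polygon' as const, area: Math.abs(area) / 2, center: centroid(ring) }
      }
    }
    if (f.geometry.type === 'LineString') {
      const line: [number, number][] = f.geometry.coordinates
      let length = 0
      for (let i = 0; i < line.length - 1; i++) {
        length += Math.hypot(line[i + 1][0] - line[i][0], line[i + 1][1] - line[i][1])
      }
      return {
        id: f.id,
        calculation: { type: 'LineString' as const, length, center: centroid(line) }
      }
    }
    return { id: f.id, calculation: null }
  })
}

// Inside a real Web Worker, the protocol is wired up roughly like this:
const workerGlobal = globalThis as any
if (typeof workerGlobal.postMessage === 'function') {
  workerGlobal.onmessage = (e: any) => workerGlobal.postMessage(measure(e.data.features))
}
```

The point of the split is that `measure` is pure and testable off the worker, while the `onmessage`/`postMessage` pair is the only worker-specific glue.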
app/actions.tsx
Outdated
```tsx
const currentAIState = aiState.get()
const lastConversation =
  currentAIState.conversations[currentAIState.conversations.length - 1]
const messages: CoreMessage[] = [...(lastConversation.messages as any[])].filter(
```
Using the last conversation and a snapshot of aiState (currentAIState = aiState.get()) across the entire request risks clobbering concurrent updates. If another request adds a conversation while this one is running, your later aiState.update({...currentAIState, conversations: [...]}) calls can overwrite that newer conversation. Additionally, targeting the last conversation (conversations[conversations.length - 1]) makes updates nondeterministic in concurrent scenarios (e.g., a long-running request started earlier will keep writing into whatever is last now). This undermines the stated goal of safe concurrency.
Suggestion
Target a specific conversation by id and always read fresh state immediately before writing. Include a `conversationId` (or `chatId`) in the form, look up that conversation in the latest state, and replace only that element immutably. For example:

- In the form submit, append the conversation id: `formData.append('conversationId', activeConversationId)`.
- In `submit`:

```tsx
const state = aiState.get();
const targetId = (formData?.get('conversationId') as string) ?? state.conversations.at(-1)!.id;
const idx = state.conversations.findIndex(c => c.id === targetId);
if (idx !== -1) {
  const conversations = state.conversations.slice();
  const conv = conversations[idx];
  const userMessage: AIMessage = { id: nanoid(), role: 'user', content, type };
  conversations[idx] = { ...conv, messages: [...conv.messages, userMessage] };
  aiState.update({ ...state, conversations });
}
```
Apply the same pattern for every subsequent mutation (tool outputs, assistant responses, etc.). Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.
```tsx
// Find the conversation that was updated and save it.
for (const conversation of state.conversations) {
  if (conversation.messages.some(e => e.type === 'response')) {
    const { chatId, messages } = conversation
    const createdAt = new Date()
    const path = `/search/${chatId}`

    let title = 'Untitled Chat'
    if (messages.length > 0) {
      const firstMessageContent = messages[0].content
      if (typeof firstMessageContent === 'string') {
        try {
          const parsedContent = JSON.parse(firstMessageContent)
          title = parsedContent.input?.substring(0, 100) || 'Untitled Chat'
        } catch (e) {
          title = firstMessageContent.substring(0, 100)
        }
      } else if (Array.isArray(firstMessageContent)) {
        const textPart = (
          firstMessageContent as { type: string; text?: string }[]
        ).find(p => p.type === 'text')
        title =
          textPart && textPart.text
            ? textPart.text.substring(0, 100)
            : 'Image Message'
      }
    }

    const updatedMessages: AIMessage[] = [
      ...messages,
      {
        id: nanoid(),
        role: 'assistant',
        content: `end`,
        type: 'end'
      }
    ]

    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const actualUserId = await getCurrentUserIdOnServer()

    if (!actualUserId) {
      console.error(
        'onSetAIState: User not authenticated. Chat not saved.'
      )
      continue // Continue to the next conversation
    }

    const chat: Chat = {
      id: chatId,
      createdAt,
      userId: actualUserId,
      path,
      title,
      messages: updatedMessages
    }
    await saveChat(chat, actualUserId)
  }
}
```
Persisting chats inside onSetAIState now iterates all conversations and saves any that contain a response. Without a guard, this can re-save the same conversation multiple times on subsequent update/done calls, increasing DB churn and risking inconsistent createdAt updates.
Suggestion
Track a persistence marker per conversation (e.g., `persistedAt` or a sentinel message type) or only persist when you append the terminal `end` message. Example:

```tsx
if (conversation.messages.at(-1)?.type === 'end' && !conversation.persistedAt) {
  // save and mark persisted
}
```

If adding a field is undesirable, you can compare a stored hash/length to avoid re-saving unchanged conversations. Reply with "@CharlieHelps yes please" if you'd like me to add a minimal dedupe mechanism.
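The suggested guard can be folded into a small predicate. `persistedAt` is the hypothetical marker field this comment proposes, and the shapes below are illustrative, not types from the branch:

```typescript
// Illustrative shapes; `persistedAt` is the hypothetical dedupe marker.
interface PersistableMessage {
  id: string
  type?: string
}

interface PersistableConversation {
  id: string
  messages: PersistableMessage[]
  persistedAt?: Date
}

// Persist only finished conversations (terminal 'end' message) that have not
// been saved yet, avoiding a DB write on every onSetAIState invocation.
function shouldPersist(c: PersistableConversation): boolean {
  const last = c.messages[c.messages.length - 1]
  return last?.type === 'end' && c.persistedAt === undefined
}
```

The save loop would then call `shouldPersist` before `saveChat` and stamp `persistedAt` afterwards.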
components/map/mapbox-map.tsx
Outdated
```tsx
const turfWorker = useWorker<any[]>(new URL('/workers/turf.worker.ts', import.meta.url));
```
The Web Worker URL is created with a leading slash and without memoization. new URL('/workers/turf.worker.ts', import.meta.url) is resolved as an absolute URL, which is often incorrect in bundlers and Next.js; moreover, creating a new URL every render tears down and recreates the worker due to the useEffect dependency in useWorker, causing performance issues and flicker.
Suggestion
Memoize a correct relative URL and pass the memoized reference into `useWorker` to avoid worker churn and path resolution issues:

```tsx
import { useMemo } from 'react';
// ...
const workerUrl = useMemo(() => new URL('../../workers/turf.worker.ts', import.meta.url), []);
const turfWorker = useWorker<any[]>(workerUrl);
```

This ensures a stable identity and a bundler-friendly path. Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.
```tsx
if (turfWorker.data && map.current && drawRef.current) {
  const features = drawRef.current.getAll().features;
  const currentDrawnFeatures: Array<{ id: string; type: 'Polygon' | 'LineString'; measurement: string; geometry: any }> = [];

  turfWorker.data.forEach(result => {
    const { id, calculation } = result;
    if (!calculation) return;

    const feature = features.find(f => f.id === id);
    if (!feature) return;

    let featureType: 'Polygon' | 'LineString' | null = null;
    let measurement = '';
    let coordinates: [number, number] | undefined;

    if (calculation.type === 'Polygon') {
      featureType = 'Polygon';
      measurement = formatMeasurement(calculation.area, true);
      coordinates = calculation.center;
    } else if (calculation.type === 'LineString') {
      featureType = 'LineString';
      measurement = formatMeasurement(calculation.length, false);
      coordinates = calculation.center;
    }

    if (featureType && measurement && coordinates && map.current) {
      const el = document.createElement('div');
      el.className = `${featureType.toLowerCase()}-label`;
      el.style.background = 'rgba(255, 255, 255, 0.8)';
      el.style.padding = '4px 8px';
      el.style.borderRadius = '4px';
      el.style.fontSize = '12px';
      el.style.fontWeight = 'bold';
      el.style.color = '#333333';
      el.style.boxShadow = '0 2px 4px rgba(0,0,0,0.2)';
      el.style.pointerEvents = 'none';
      el.textContent = measurement;

      const marker = new mapboxgl.Marker({ element: el })
        .setLngLat(coordinates)
        .addTo(map.current);

      if (featureType === 'Polygon') {
        polygonLabelsRef.current[id] = marker;
      } else {
        lineLabelsRef.current[id] = marker;
      }
    }

    if (featureType && id && measurement && feature.geometry) {
      currentDrawnFeatures.push({
        id,
        type: featureType,
        measurement,
        geometry: feature.geometry,
      });
    }
  });

  setMapData(prevData => ({ ...prevData, drawnFeatures: currentDrawnFeatures }));
}
}, [turfWorker.data, formatMeasurement, setMapData])
```
Worker results are processed asynchronously without staleness protection. Rapid edits can cause out-of-order results to repaint labels for older geometries. Although you guard against deleted features, measurements can still be stale relative to the current geometry.
Suggestion
Introduce a monotonically increasing `requestId` and echo it through the worker so late results can be ignored:

- In the component:

```tsx
const calcRequestId = useRef(0)
const updateMeasurementLabels = useCallback(() => {
  // ... remove existing labels
  const requestId = ++calcRequestId.current
  const features = drawRef.current!.getAll().features
  turfWorker.postMessage({ requestId, features })
}, [turfWorker])

useEffect(() => {
  if (!turfWorker.data || !map.current || !drawRef.current) return
  const { requestId, results } = turfWorker.data
  if (requestId !== calcRequestId.current) return // ignore stale
  // process results as before...
}, [turfWorker.data])
```

- Update the worker to echo `requestId` back (see worker comment).
Reply with "@CharlieHelps yes please" if you'd like me to add these correlated request changes across the hook, worker, and map component.
```tsx
useEffect(() => {
  // Create a new worker instance
  const worker = new Worker(workerUrl, { type: 'module' });
  workerRef.current = worker;

  worker.onmessage = (event: MessageEvent<T>) => {
    setData(event.data);
    setIsLoading(false);
  };

  worker.onerror = (err: ErrorEvent) => {
    setError(err.message);
    setIsLoading(false);
  };

  // Cleanup worker on component unmount
  return () => {
    if (workerRef.current) {
      workerRef.current.terminate();
      workerRef.current = null;
    }
  };
}, [workerUrl]);
```
useWorker tears down and re-creates the worker whenever a new URL object instance is passed (even if it points to the same href), because the effect depends on the URL object identity. This is brittle and can cause unnecessary churn in any caller that constructs the URL inline.
Suggestion
Reduce sensitivity to object identity by depending on the string form of the URL (or change the hook signature to accept a string):

```tsx
useEffect(() => {
  const worker = new Worker(workerUrl, { type: 'module' });
  // ...
  return () => { workerRef.current?.terminate(); workerRef.current = null; };
  // Depend on the href string, not the URL object identity
}, [workerUrl.toString()]);
```

This prevents unnecessary terminations when callers re-create equivalent URL instances. Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.
useWorker provides a nice abstraction but doesn’t support correlating responses or ignoring out-of-order results, which is important for UI correctness when requests are fired quickly. Adding an optional requestId passthrough lets consumers discard stale results.
Suggestion
Augment the hook to allow structured messages with a `requestId`, and type `onmessage` accordingly:

```tsx
type WorkerEnvelope<T> = { requestId?: number; results: T }

// onmessage
worker.onmessage = (event: MessageEvent<WorkerEnvelope<T>>) => {
  setData(event.data as any)
  setIsLoading(false)
}
// postMessage stays generic; the caller provides requestId
```

I can wire this up and update the map/worker to use it. Reply with "@CharlieHelps yes please" to have me implement it.
```tsx
const handleNewConversation = async () => {
  setMessages([])
  clearAttachment()
  await clearChat()
  const formData = new FormData()
  formData.append('newChat', 'true')
  const responseMessage = await submit(formData)
  setMessages(currentMessages => [...currentMessages, responseMessage as any])
}
```
Starting a new conversation currently adds a server response to the local UI state even though the server returns no UI content for a pure newChat request. This yields a blank message item in the timeline after clearing, which is confusing and unnecessary.
Suggestion
Avoid appending the server response for a newChat-only submit. You can still notify the server to create the conversation without pushing a UI item:

```tsx
const handleNewConversation = async () => {
  setMessages([]);
  clearAttachment();
  const formData = new FormData();
  formData.append('newChat', 'true');
  await submit(formData); // no UI append here
};
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
lib/agents/researcher.tsx (1)

36-38: Fix typographical errors in system prompt.

The system prompt contains user-facing text quality issues:

- Line 36: "proconfigured" → "preconfigured", "queires" → "queries"
- Line 38: The example response text is missing a closing quote and has grammatical issues.

Apply this diff to fix the typos:

```diff
-There are also some proconfigured example queires.
+There are also some preconfigured example queries.
 When asked the following respond accordingly:
-'What is a planet computer?' answer with the following: '"A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet'
+'What is a planet computer?' answer with the following: 'A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet.'
```

components/map/mapbox-map.tsx (1)

533-541: Timer type is Node's, but this file runs in the browser.

Use a DOM-safe type to avoid TS friction across environments.

```diff
- const longPressTimerRef = useRef<NodeJS.Timeout | null>(null);
+ const longPressTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
```

app/actions.tsx (3)

670-673: Type mismatch: UIState.isCollapsed expects a StreamableValue, not a boolean.

Returning `isCollapsed: false` violates the declared UIState type and may break consumers. Apply this diff:

```diff
- return {
-   id,
-   component: <MapQueryHandler toolOutput={toolOutput} />,
-   isCollapsed: false
- }
+ const collapsed = createStreamableValue<boolean>()
+ collapsed.done(false)
+ return {
+   id,
+   component: <MapQueryHandler toolOutput={toolOutput} />,
+   isCollapsed: collapsed.value
+ }
```

723-724: Ensure undefined entries are filtered out of UIState.

Some branches return nothing (e.g., the previous 'assistant' inquiry bug). Filter null and undefined robustly to avoid runtime issues. Apply this diff:

```diff
- .filter(message => message !== null) as UIState
+ .filter((m): m is UIState[number] => m != null) as UIState
```

145-148: Remove or rename `groupeId` (typo) if grouping is not used after unique IDs.

After assigning unique message ids, `groupeId` becomes unnecessary. Remove it, or rename to `groupId` and carry it as optional metadata if you actually group messages downstream. Apply this diff to remove it:

```diff
- const groupeId = nanoid()
@@
- const groupeId = nanoid()
```

Also applies to: 182-184
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (10)
- `app/actions.tsx` (13 hunks)
- `app/page.tsx` (1 hunk)
- `app/search/[id]/page.tsx` (2 hunks)
- `components/chat-panel.tsx` (5 hunks)
- `components/map-toggle.tsx` (1 hunk)
- `components/map/mapbox-map.tsx` (3 hunks)
- `hooks/useWorker.ts` (1 hunk)
- `lib/agents/researcher.tsx` (1 hunk)
- `mapbox_mcp/hooks.ts` (1 hunk)
- `workers/turf.worker.ts` (1 hunk)
🔇 Additional comments (7)
components/map-toggle.tsx (1)
32-32: Good test instrumentation addition.

Adding the `data-testid` attribute is a standard and recommended practice for testing React components. The descriptive name `"drawing-mode-button"` clearly identifies the element and will facilitate selector targeting in test scenarios without brittle class/styling dependencies.

lib/agents/researcher.tsx (1)

14-21: LGTM: Function signature refactored for flexibility.

The addition of `dynamicSystemPrompt` and removal of the `mcp` parameter align with the PR's goal to simplify tool execution. The signature change is a breaking change, but that's expected in this architectural refactor.

components/map/mapbox-map.tsx (1)
81-141: Worker results mapping looks solid; a couple of polish nits.
- Labels: consider CSS classes over inline styles for themeability.
- Data: `setMapData` is called once; good. Ensure `id` types match worker output (string) to prevent `.find` mismatches.
- Optional: if polygons are concave/slender, switch the worker to `centerOfMass` for better label placement, and a distance-based midpoint for lines (PR to worker in this same branch). Based on learnings.

app/actions.tsx (1)

90-98: Confirm: resolution_search user message is intentionally hidden in UI.

The user message for resolution_search has no `type`, and `getUIStateFromAIState` drops messages without `type`. If you intend to show the user's map upload context, add a type and render branch. Otherwise, confirm the current hidden behavior.

Also applies to: 112-121
components/chat-panel.tsx (3)
72-75: newChat plumbing looks correct.

Wiring `newChat` through submit is consistent with the server-side handling. No issues. Confirm that you never send `newChat=true` alongside a non-empty `input` unless that's intended to seed the new conversation with a first message.

Also applies to: 106-109
26-26: Action import/use looks good.

Using only `submit` from `useActions` matches the server's API.
143-147: UI copy and handler change acknowledged.

The "New Conversation" label and linking to the dedicated handler improve clarity.
app/actions.tsx
Outdated
```tsx
const content = JSON.stringify(Object.fromEntries(formData!))
const type = 'input' as const

const userMessage: AIMessage = { id: nanoid(), role: 'user', content, type }
lastConversation.messages.push(userMessage)
aiState.update({
```
Avoid JSON.stringify(FormData); Files and non-text fields will break serialization
JSON.stringify(Object.fromEntries(formData)) will fail or produce meaningless output for File entries and non-text values. Serialize only the user input that you need.
Apply this diff:

```diff
- const content = JSON.stringify(Object.fromEntries(formData!))
+ const content = JSON.stringify({ input: userInput })
```

Alternatively, reuse the same `messageParts`/content construction used below for consistency.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In app/actions.tsx around lines 161 to 166, replace the unsafe
JSON.stringify(Object.fromEntries(formData)) usage because it will break on File
and non-text FormData entries; instead build the message content the same way
used later (reuse messageParts/content construction) or explicitly serialize
only text fields (e.g., iterate formData entries, skip or convert File objects
to filename/size/type metadata, and collect strings into a plain object or
concatenated message string), then set userMessage.content to that safe string;
ensure you do not attempt to JSON.stringify File objects and keep the resulting
content consistent with the rest of the messageParts format.
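As a concrete illustration of that guidance, here is a minimal sketch of text-only FormData serialization; `serializeFormData` is a hypothetical helper name, not code from this PR:

```typescript
// Hypothetical helper (not from the PR): keep string fields verbatim and
// reduce File/Blob entries to serializable metadata instead of calling
// JSON.stringify on the raw FormData entries.
function serializeFormData(formData: FormData): string {
  const out: Record<string, unknown> = {};
  for (const [key, value] of formData.entries()) {
    if (typeof value === 'string') {
      out[key] = value;
    } else {
      // A File entry: record metadata rather than an unserializable body.
      const file = value as unknown as { name: string; size: number; type: string };
      out[key] = { file: file.name, size: file.size, type: file.type };
    }
  }
  return JSON.stringify(out);
}
```

With this shape, a text-only form round-trips cleanly through JSON.parse, and file uploads degrade to inspectable metadata instead of broken output.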
lastConversation.messages.push({
  id: groupeId,
  role: 'assistant',
  content: definition,
  type: 'response'
} as AIMessage)
lastConversation.messages.push({
  id: groupeId,
  role: 'assistant',
  content: JSON.stringify(relatedQueries),
  type: 'related'
} as AIMessage)
lastConversation.messages.push({
  id: groupeId,
  role: 'assistant',
  content: 'followup',
  type: 'followup'
} as AIMessage)
aiState.done({
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Duplicate message IDs cause React key collisions; generate unique IDs per message
Multiple assistant/tool messages reuse the same id (groupeId). This will produce duplicate keys in UIState and unstable rendering. Assign unique ids for each message; if grouping is needed, carry a separate groupId field.
Apply this diff:
@@
- lastConversation.messages.push({
- id: groupeId,
+ lastConversation.messages.push({
+ id: nanoid(),
role: 'assistant',
content: definition,
type: 'response'
} as AIMessage)
- lastConversation.messages.push({
- id: groupeId,
+ lastConversation.messages.push({
+ id: nanoid(),
role: 'assistant',
content: JSON.stringify(relatedQueries),
type: 'related'
} as AIMessage)
- lastConversation.messages.push({
- id: groupeId,
+ lastConversation.messages.push({
+ id: nanoid(),
role: 'assistant',
content: 'followup',
type: 'followup'
} as AIMessage)
@@
- toolOutputs.forEach(output => {
- lastConversation.messages.push({
- id: groupeId,
+ toolOutputs.forEach(output => {
+ lastConversation.messages.push({
+ id: nanoid(),
role: 'tool',
content: JSON.stringify(output.result),
name: output.toolName,
type: 'tool'
} as AIMessage)
})
@@
- lastConversation.messages.push({
- id: groupeId,
+ lastConversation.messages.push({
+ id: nanoid(),
role: 'assistant',
content: answer,
type: 'response'
} as AIMessage)
- lastConversation.messages.push({
- id: groupeId,
+ lastConversation.messages.push({
+ id: nanoid(),
role: 'assistant',
content: JSON.stringify(relatedQueries),
type: 'related'
} as AIMessage)
- lastConversation.messages.push({
- id: groupeId,
+ lastConversation.messages.push({
+ id: nanoid(),
role: 'assistant',
content: 'followup',
type: 'followup'
} as AIMessage)Optionally, remove the now-unused groupeId declarations.
Also applies to: 349-356, 399-416
🤖 Prompt for AI Agents
In app/actions.tsx around lines 185-203 (and similarly at 349-356 and 399-416),
multiple messages are being pushed with the same id (groupeId) which causes
React key collisions; change each push to assign a unique id (e.g., generate a
UUID or call a uniqueId helper) per message, keep the original groupeId value in
a new groupId field on the message object if grouping is required, and remove or
stop using the old groupeId variable where it's no longer needed.
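The suggested pattern — a unique `id` per message plus a shared `groupId` for grouping — can be sketched roughly like this (the `makeMessage` factory and its types are illustrative, not the PR's code):

```typescript
import { randomUUID } from 'crypto';

// Illustrative shape: each message gets its own id (randomUUID standing in
// for nanoid), while the shared grouping key lives in a separate groupId.
type GroupedMessage = {
  id: string;
  groupId: string;
  role: 'user' | 'assistant' | 'tool';
  content: string;
  type: string;
};

function makeMessage(
  groupId: string,
  fields: Omit<GroupedMessage, 'id' | 'groupId'>
): GroupedMessage {
  return { id: randomUUID(), groupId, ...fields };
}
```

React then keys list items on the always-unique `id`, while any UI that needs to cluster related messages reads `groupId`.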
lastConversation.messages.push({
  id: nanoid(),
  role: 'assistant',
  content: `inquiry: ${inquiry?.question}`,
  type: 'inquiry'
} as AIMessage)
aiState.done({
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Wrong role for inquiry message; it won't render and may leak undefined into UI list
getUIStateFromAIState renders type 'inquiry' only for role 'user'. Emitting it as 'assistant' yields no element and leaves an undefined entry.
Apply this diff:
- lastConversation.messages.push({
- id: nanoid(),
- role: 'assistant',
- content: `inquiry: ${inquiry?.question}`,
- type: 'inquiry'
- } as AIMessage)
+ lastConversation.messages.push({
+ id: nanoid(),
+ role: 'user',
+ content: inquiry?.question ?? '',
+ type: 'inquiry'
+ } as AIMessage)Also consider removing the literal "inquiry:" prefix; the UI component already conveys context.
🤖 Prompt for AI Agents
In app/actions.tsx around lines 311 to 317, the inquiry message is being pushed
with role:'assistant' which prevents it from rendering (getUIStateFromAIState
only treats type 'inquiry' when role === 'user') and may insert an undefined UI
entry; change the message role to 'user', remove the literal "inquiry:" prefix
from the content so it is just the inquiry text (e.g., inquiry?.question), keep
type:'inquiry' and id generation as-is, and ensure the object is correctly typed
as AIMessage.
app/actions.tsx
Outdated
const currentAIState = aiState.get()
const lastConversation =
  currentAIState.conversations[currentAIState.conversations.length - 1]
const modifiedMessages = lastConversation.messages.map(msg =>
  msg.role === 'tool'
    ? {
        ...msg,
        role: 'assistant',
        content: JSON.stringify(msg.content),
        type: 'tool'
      }
    : msg
) as CoreMessage[]
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Writer fallback uses the wrong conversation and double-encodes content
- Re-fetching `lastConversation` from `aiState.get()` can target a different conversation if a new chat started meanwhile.
- `JSON.stringify(msg.content)` double-encodes strings from tool messages.
Use the captured `lastConversation` and avoid re-stringifying strings.
Apply this diff:
- const currentAIState = aiState.get()
- const lastConversation =
- currentAIState.conversations[currentAIState.conversations.length - 1]
- const modifiedMessages = lastConversation.messages.map(msg =>
- msg.role === 'tool'
- ? {
- ...msg,
- role: 'assistant',
- content: JSON.stringify(msg.content),
- type: 'tool'
- }
- : msg
- ) as CoreMessage[]
+ const modifiedMessages = lastConversation.messages.map(msg =>
+ msg.role === 'tool'
+ ? {
+ ...msg,
+ role: 'assistant',
+ content:
+ typeof msg.content === 'string'
+ ? msg.content
+ : JSON.stringify(msg.content),
+ type: 'tool'
+ }
+ : (msg as any)
+ ) as CoreMessage[]Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In app/actions.tsx around lines 365 to 377, the writer fallback is re-reading
aiState.get() and may pick a different conversation and is JSON.stringify-ing
tool message content even when it's already a string; change the code to use the
previously captured lastConversation (do not re-fetch from aiState.get()) and
when mapping tool messages set content to msg.content if it's a string,
otherwise JSON.stringify(msg.content), preserving role/type adjustments; this
avoids targeting the wrong conversation and prevents double-encoding of string
content.
// Find the conversation that was updated and save it.
for (const conversation of state.conversations) {
  if (conversation.messages.some(e => e.type === 'response')) {
    const { chatId, messages } = conversation
    const createdAt = new Date()
    const path = `/search/${chatId}`

    let title = 'Untitled Chat'
    if (messages.length > 0) {
      const firstMessageContent = messages[0].content
      if (typeof firstMessageContent === 'string') {
        try {
          const parsedContent = JSON.parse(firstMessageContent)
          title = parsedContent.input?.substring(0, 100) || 'Untitled Chat'
        } catch (e) {
          title = firstMessageContent.substring(0, 100)
        }
      } else if (Array.isArray(firstMessageContent)) {
        const textPart = (
          firstMessageContent as { type: string; text?: string }[]
        ).find(p => p.type === 'text')
        title =
          textPart && textPart.text
            ? textPart.text.substring(0, 100)
            : 'Image Message'
      }
    }
    const updatedMessages: AIMessage[] = [
      ...messages,
      {
        id: nanoid(),
        role: 'assistant',
        content: `end`,
        type: 'end'
      }
    ]
    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const actualUserId = await getCurrentUserIdOnServer()
    if (!actualUserId) {
      console.error(
        'onSetAIState: User not authenticated. Chat not saved.'
      )
      continue // Continue to the next conversation
    }
    const chat: Chat = {
      id: chatId,
      createdAt,
      userId: actualUserId,
      path,
      title,
      messages: updatedMessages
    }
    await saveChat(chat, actualUserId)
  }
}
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Chat persistence is not idempotent; 'end' is appended repeatedly and chats may be re-saved
onSetAIState re-saves any conversation containing a 'response' and appends an 'end' message each time. This can duplicate 'end' and cause repeated writes. Guard for already-finalized conversations.
Apply this diff:
- for (const conversation of state.conversations) {
- if (conversation.messages.some(e => e.type === 'response')) {
+ for (const conversation of state.conversations) {
+ const { messages } = conversation
+ if (!messages.some(e => e.type === 'response')) continue
+ if (messages[messages.length - 1]?.type === 'end') continue
if (conversation.messages.some(e => e.type === 'response')) {
const { chatId, messages } = conversation
@@
- const updatedMessages: AIMessage[] = [
- ...messages,
- {
- id: nanoid(),
- role: 'assistant',
- content: `end`,
- type: 'end'
- }
- ]
+ const updatedMessages: AIMessage[] = messages[messages.length - 1]?.type === 'end'
+ ? messages
+ : [
+ ...messages,
+ {
+ id: nanoid(),
+ role: 'assistant',
+ content: 'end',
+ type: 'end'
+ }
+ ]Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In app/actions.tsx around lines 495 to 555, the code appends an 'end' message
and re-saves any conversation containing a 'response' every time, causing
duplicate 'end' messages and repeated writes; change the logic to first skip
conversations that already contain a terminal/end message (e.g., messages.some(m
=> m.type === 'end') or the last message has type 'end'), only build
updatedMessages and call saveChat when no 'end' is present, and leave existing
chats untouched (i.e., continue to next conversation) to make persistence
idempotent while keeping the existing authentication and save flow.
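The idempotency guard described in the prompt can be sketched as a small pure function (names are illustrative; the real code mutates conversation state in place):

```typescript
type Msg = { id: string; role: string; content: string; type: string };

// Append the terminal 'end' marker only once: calling this again on an
// already-finalized message list returns the same array unchanged.
function finalizeMessages(messages: Msg[], makeId: () => string): Msg[] {
  if (messages[messages.length - 1]?.type === 'end') return messages;
  return [
    ...messages,
    { id: makeId(), role: 'assistant', content: 'end', type: 'end' }
  ];
}
```

Because the second call is a no-op, `onSetAIState` can run any number of times without duplicating the marker or triggering redundant writes.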
lib/agents/researcher.tsx
Outdated
if (toolResponses.some(tr => !tr.result)) {
  hasError = true;
}
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Refine error detection to avoid false positives.
The current error check !tr.result will incorrectly flag valid tool results that happen to be falsy (e.g., empty strings, 0, false). This could cause legitimate tool responses to be treated as errors.
Apply this diff to check specifically for missing results:
- if (toolResponses.some(tr => !tr.result)) {
+ if (toolResponses.some(tr => tr.result === undefined || tr.result === null)) {
hasError = true;
}Alternatively, check for property existence:
- if (toolResponses.some(tr => !tr.result)) {
+ if (toolResponses.some(tr => !('result' in tr))) {
hasError = true;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In lib/agents/researcher.tsx around lines 99 to 101, the current check uses
`!tr.result` which treats valid falsy values (empty string, 0, false) as errors;
replace it with an explicit missing-value check such as `tr.result === undefined
|| tr.result === null` (or use `!('result' in tr)` if you want to detect absence
of the property) so only truly absent results are flagged as errors; update the
conditional accordingly and keep hasError assignment the same.
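A two-line demonstration of the false positive (the tool names here are made up):

```typescript
// Both results below are legitimate tool outputs that happen to be falsy.
const toolResponses: { toolName: string; result: unknown }[] = [
  { toolName: 'search', result: '' },
  { toolName: 'geocode', result: 0 }
];

const naive = toolResponses.some(tr => !tr.result);         // flags both as errors
const strict = toolResponses.some(tr => tr.result == null); // flags neither
```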
mapbox_mcp/hooks.ts
Outdated
  // Add other properties as needed based on your usage
};
- import { getModel } from 'QCX/lib/utils';
+ import { getModel } from '@/lib/utils';
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
Path alias change looks fine; verify resolver config and ai tools shape.
- Ensure the `@` alias is configured everywhere (tsconfig.json/paths, next/jest/storybook) so this doesn't break local/dev/test builds.
- Confirm `generateText`'s `tools` option matches the SDK's expected shape; passing `mcp.tools` directly may not match the "name -> function" map required by some `ai` SDK versions. Add a quick type guard or adapter if needed.
- Optional: avoid parsing JSON via markdown fences from `mcp.callTool(...)` results; prefer a structured payload, or search for a `json` part in `result.content`, to reduce brittleness.
🏁 Script executed:
#!/bin/bash
# Verify alias and tool usage
echo "== tsconfig alias =="
fd -a 'tsconfig*.json' | xargs -I{} sh -c 'echo "--- {} ---"; cat {}' | rg -n '"paths"\s*:\s*\{[^}]*"@/\*":'
echo "== generateText tools shape =="
rg -n -C2 "generateText\(" -g "!**/node_modules/**"
echo "== MCP tool call parsing =="
rg -n -C2 "mcp\.callTool\(" -g "!**/node_modules/**"
Length of output: 1052
---
🏁 Script executed:
```shell
#!/bin/bash
# Deeper inspection of config and code
echo "== All tsconfig files =="
fd -a 'tsconfig' -e json
echo -e "\n== tsconfig.json content (if exists) =="
if [ -f tsconfig.json ]; then
cat tsconfig.json
else
echo "tsconfig.json not found"
fi
echo -e "\n== toolsRef initialization and usage =="
rg -n -B5 "toolsRef\s*=" -g "!**/node_modules/**" --type ts
echo -e "\n== mcp.tools construction =="
rg -n -B3 -A3 "mcp\.tools" -g "!**/node_modules/**" --type ts
Length of output: 2020
---
🏁 Script executed:
```shell
#!/bin/bash
# Get tsconfig and full hooks context
echo "== tsconfig.json (at repo root) =="
cat /home/jailuser/git/tsconfig.json
echo -e "\n== hooks.ts imports (first 30 lines) =="
head -30 /home/jailuser/git/mapbox_mcp/hooks.ts
echo -e "\n== generateText call context (lines 95-110) =="
sed -n '95,110p' /home/jailuser/git/mapbox_mcp/hooks.ts
echo -e "\n== Tool interface/type definition =="
rg -n "interface Tool|type Tool" -g "!**/node_modules/**" --type ts
Length of output: 2980
---
🏁 Script executed:
```shell
#!/bin/bash
# Check ai SDK version and result handling
echo "== package.json ai dependency =="
rg -n '"ai"' -g "package.json"
echo -e "\n== Result handling for mcp.callTool (geocode_location call lines 159-178) =="
sed -n '159,178p' /home/jailuser/git/mapbox_mcp/hooks.ts
echo -e "\n== Result handling for other tool calls (calculate_distance 177-196, search_nearby 199-218) =="
sed -n '177,196p' /home/jailuser/git/mapbox_mcp/hooks.ts
sed -n '199,218p' /home/jailuser/git/mapbox_mcp/hooks.ts
echo -e "\n== Any JSON/markdown parsing in response handling =="
rg -n "JSON\.parse|markdown|parse.*result|result.*parse" -g "!**/node_modules/**" --type ts
Length of output: 4235
---
🌐 Web query:
```web
ai SDK v4.3.19 generateText tools parameter format array or object
💡 Result:
Use an object (a ToolSet) — keys are tool names and values are tool definitions. See the AI SDK docs showing the tools parameter is an object mapping tool names to Tool definitions (example usage included). [1][2]
Sources:
[1] AI SDK — Tool Calling / example showing tools: { weather: tool({...}) }.
[2] AI SDK — Foundations: Tools (tools parameter described as an object).
Transform mcp.tools array to object map for generateText; fix brittle markdown parsing.
- The `@/` alias is properly configured in tsconfig.json—no issue there.
- Critical: ai SDK v4.3.19's `generateText()` expects `tools` as an object map `{ toolName: toolDef }`, not an array. Line 101 passes `toolsRef.current` (which stores the `mcp.tools` array directly). Transform it to a `{ [tool.name]: tool }` object before passing to `generateText`, or adapt at line 71 when assigning.
- Brittle parsing at lines 169, 188, 207: extracting JSON from markdown fences via the regex `match(/```json\n(.*?)\n```/s)` is fragile. If the MCP server changes its format or error handling, parsing silently falls back to `{}`. Check whether a structured payload is available in `result.content[0]`, or search `result.content` for a `json` part instead.
🤖 Prompt for AI Agents
In mapbox_mcp/hooks.ts around lines 11 and especially lines 71, 101, 169, 188
and 207: the current code passes mcp.tools as an array to generateText (line
~101) but ai SDK v4.3.19 requires tools as an object map { toolName: toolDef };
convert the array into a map (e.g., reduce mcp.tools to { [tool.name]: tool })
either at assignment time around line 71 or immediately before calling
generateText at line 101 so generateText receives the correct shape. Also
replace the brittle regex-based JSON extraction at ~169/188/207 with a robust
check: first inspect result.content or result.content[0] for a structured JSON
part (or a named/json field) and use that if present, otherwise scan the content
parts for a json section; add explicit fallback handling and logging if parsing
fails instead of silently returning {}.
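The array-to-map adaptation the prompt asks for can be sketched as follows (assuming each MCP tool object carries a `name` field; the `ToolDef` type is illustrative):

```typescript
type ToolDef = { name: string; description?: string };

// Convert an MCP tool array into the { toolName: toolDef } object map that
// generateText expects for its tools option.
function toToolMap<T extends ToolDef>(tools: T[]): Record<string, T> {
  return tools.reduce<Record<string, T>>((acc, tool) => {
    acc[tool.name] = tool;
    return acc;
  }, {});
}
```

Applying this at assignment time (around line 71) keeps the rest of the hook unchanged.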
self.onmessage = (event: MessageEvent<{ features: any[] }>) => {
  const { features } = event.data;

  const results = features.map(feature => {
    const id = feature.id as string;
    let calculation = null;

    if (feature.geometry.type === 'Polygon') {
      const area = turf.area(feature);
      const centroid = turf.centroid(feature);
      calculation = {
        type: 'Polygon',
        area,
        center: centroid.geometry.coordinates
      };
    } else if (feature.geometry.type === 'LineString') {
      const length = turf.length(feature, { units: 'kilometers' }) * 1000; // in meters
      const line = feature.geometry.coordinates;
      const midIndex = Math.floor(line.length / 2) - 1;
      const midpoint = midIndex >= 0 ? line[midIndex] : line[0];
      calculation = {
        type: 'LineString',
        length,
        center: midpoint
      };
    }

    return { id, calculation };
  });

  self.postMessage(results);
};
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Isolate worker errors per feature to avoid crashing the whole batch.
Wrap per-feature calc in try/catch and include an error field; still return other results.
- const results = features.map(feature => {
+ const results = features.map((feature, idx) => {
const id = feature.id as string;
- let calculation = null;
+ let calculation = null;
+ let error: string | undefined;
- if (feature.geometry.type === 'Polygon') {
- ...
- } else if (feature.geometry.type === 'LineString') {
- ...
- }
+ try {
+ if (feature.geometry?.type === 'Polygon') {
+ // polygon branch (see other diff)
+ } else if (feature.geometry?.type === 'LineString') {
+ // line branch (see other diff)
+ }
+ } catch (e: any) {
+ error = e?.message || 'calculation failed';
+ }
- return { id, calculation };
+ return { id: id ?? String(idx), calculation, error };
});Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In workers/turf.worker.ts around lines 4 to 35, the per-feature geometry
calculations are unprotected so a thrown error from turf will abort the whole
worker; wrap the mapping/calculation for each feature in a try/catch so each
feature is processed independently, on error set calculation to null (or partial
result) and add an error field (string) describing the failure (e.g.,
error.message), then continue returning { id, calculation, error } for that
feature so a single bad feature won't crash the batch and the caller can inspect
which items failed.
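The per-feature isolation can be sketched as a pure function, where `calculate` stands in for the turf work (all names are illustrative):

```typescript
type FeatureResult = { id: string; calculation: unknown; error?: string };

// Run each feature's calculation in its own try/catch so one bad geometry
// produces an error entry instead of aborting the whole batch.
function mapFeatures(
  features: { id: string }[],
  calculate: (f: { id: string }) => unknown
): FeatureResult[] {
  return features.map(feature => {
    try {
      return { id: feature.id, calculation: calculate(feature) };
    } catch (e) {
      return {
        id: feature.id,
        calculation: null,
        error: e instanceof Error ? e.message : 'calculation failed'
      };
    }
  });
}
```

The caller can then render the successful results and surface the failed ids separately.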
workers/turf.worker.ts
Outdated
if (feature.geometry.type === 'Polygon') {
  const area = turf.area(feature);
  const centroid = turf.centroid(feature);
  calculation = {
    type: 'Polygon',
    area,
    center: centroid.geometry.coordinates
  };
} else if (feature.geometry.type === 'LineString') {
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Prefer centerOfMass over centroid for label placement.
centroid can fall outside concave polygons; centerOfMass is typically better for placing labels.
- const area = turf.area(feature);
- const centroid = turf.centroid(feature);
+ const area = turf.area(feature);
+ const centroid = turf.centerOfMass(feature);
calculation = {
type: 'Polygon',
area,
- center: centroid.geometry.coordinates
+ center: centroid.geometry.coordinates
};Based on learnings.
🤖 Prompt for AI Agents
In workers/turf.worker.ts around lines 11 to 19, the code uses turf.centroid for
polygon label placement which can fall outside concave polygons; replace
turf.centroid with turf.centerOfMass (import it if not already) and use
centerOfMass(feature).geometry.coordinates for the center value so labels are
placed at the polygon’s mass center; ensure types/locals still match and run
tests/lint after updating the import.
workers/turf.worker.ts
Outdated
const length = turf.length(feature, { units: 'kilometers' }) * 1000; // in meters
const line = feature.geometry.coordinates;
const midIndex = Math.floor(line.length / 2) - 1;
const midpoint = midIndex >= 0 ? line[midIndex] : line[0];
calculation = {
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion | 🟠 Major
Midpoint for lines should be distance-based, not index-based.
Index midpoints are inaccurate for uneven vertex spacing. Use turf.along at half the length.
- const length = turf.length(feature, { units: 'kilometers' }) * 1000; // in meters
- const line = feature.geometry.coordinates;
- const midIndex = Math.floor(line.length / 2) - 1;
- const midpoint = midIndex >= 0 ? line[midIndex] : line[0];
+ const lengthKm = turf.length(feature, { units: 'kilometers' });
+ const length = lengthKm * 1000; // meters
+ const midpoint = turf.along(feature, lengthKm / 2, { units: 'kilometers' }).geometry.coordinates;
calculation = {
type: 'LineString',
length,
center: midpoint
};Based on learnings.
🤖 Prompt for AI Agents
In workers/turf.worker.ts around lines 20 to 24, the code computes a midpoint by
taking the middle vertex index which is incorrect for unevenly spaced vertices;
replace the index-based midpoint with a distance-based midpoint by using
turf.along: compute the total line length with turf.length (specifying units),
divide by two to get half the distance, construct a turf LineString from
feature.geometry.coordinates if needed, then call turf.along(lineString,
halfDistance, { units: 'kilometers' }) (or use consistent units you already
compute) and use the returned Point coordinates as the midpoint in the
calculation object.
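For intuition, here is a planar sketch of the distance-based midpoint that `turf.along` computes geodesically; this simplified version treats coordinates as flat x/y and is illustrative only:

```typescript
type Pt = [number, number];

// Walk the line until half the total length is consumed, then interpolate
// within the segment where the halfway point falls.
function distanceMidpoint(line: Pt[]): Pt {
  const seg = (a: Pt, b: Pt) => Math.hypot(b[0] - a[0], b[1] - a[1]);
  const total = line.slice(1).reduce((sum, p, i) => sum + seg(line[i], p), 0);
  let remaining = total / 2;
  for (let i = 1; i < line.length; i++) {
    const d = seg(line[i - 1], line[i]);
    if (remaining <= d) {
      const t = d === 0 ? 0 : remaining / d;
      return [
        line[i - 1][0] + t * (line[i][0] - line[i - 1][0]),
        line[i - 1][1] + t * (line[i][1] - line[i - 1][1])
      ];
    }
    remaining -= d;
  }
  return line[line.length - 1];
}
```

With vertices at [0,0], [0,1], [0,10], the PR's index-based formula picks the vertex [0,0], while the distance-based midpoint lands at [0,5] — the gap the review is pointing at.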
This commit addresses a critical runtime `TypeError` caused by incorrect state access in `components/chat.tsx` after the `AIState` was refactored for concurrency. - The `useEffect` hook in `chat.tsx` has been updated to safely access the messages array from the last conversation in `aiState.conversations`, preventing the "Cannot read properties of undefined (reading 'length')" error. - Added guard clauses in `app/actions.tsx` to prevent potential runtime errors if the `conversations` array is ever empty when the `submit` action is called. Additionally, this commit includes the substantial architectural changes from the previous submission to improve system-wide concurrency and performance: - **Concurrent Chat Conversations:** The core state is now an array of conversations. - **Client-Side Web Worker:** A reusable `useWorker` hook offloads expensive map calculations to a background thread. - **Parallel Server-Side Tool Execution:** The `researcher` agent now leverages the AI SDK's built-in parallel tool execution.
…improvements This commit resolves a critical production `TypeError` and introduces a wide range of architectural improvements to enhance the application's performance, concurrency, and robustness. **Critical Bug Fix:** - Implemented a state migration helper (`ensureConversations`) in `app/actions.tsx` to gracefully handle older state structures, preventing crashes for existing users. **Concurrency & Performance Enhancements:** - **Concurrent Chat:** Refactored the core AI state to support multiple, simultaneous chat conversations. - **Client-Side Worker:** Offloaded expensive `turf.js` map calculations to a Web Worker, ensuring a responsive UI during drawing operations. This is implemented via a new, reusable, and performant `useWorker` hook. - **Parallel Server-Side Tools:** Refactored the `researcher` agent to correctly leverage the Vercel AI SDK's built-in parallel tool execution, reducing latency for complex queries. **Code Quality & Robustness Improvements:** - **Type Safety:** Added explicit TypeScript types and validation for initial state in page components to prevent schema drift. - **Robustness:** Added guard clauses, improved error handling in the Web Worker, and fixed multiple smaller bugs related to unique message IDs, form serialization, and agent logic. - **UI Behavior:** Eliminated a UI flicker that occurred when starting a new conversation.
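The commit describes a reusable `useWorker` hook; its core — correlating postMessage requests with responses by id so each result resolves the right promise — can be sketched framework-free like this (all names are illustrative, not the PR's actual hook):

```typescript
type PortLike = {
  postMessage(msg: unknown): void;
  onmessage: ((ev: { data: { id: number; result: unknown } }) => void) | null;
};

// Each request gets a unique id; the matching response resolves the right
// pending promise even if replies arrive out of order.
function createWorkerClient(port: PortLike) {
  let nextId = 0;
  const pending = new Map<number, (result: unknown) => void>();
  port.onmessage = ev => {
    const resolve = pending.get(ev.data.id);
    pending.delete(ev.data.id);
    resolve?.(ev.data.result);
  };
  return (payload: unknown): Promise<unknown> =>
    new Promise(resolve => {
      const id = nextId++;
      pending.set(id, resolve);
      port.postMessage({ id, payload });
    });
}
```

A React hook would wrap this in useRef/useEffect to create the Worker on mount and terminate it on unmount.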
@coderabbitai the system is not generating any tokens
@ngoiyaeric: I found the issue! 🎯 The problem is in the researcher agent refactoring. The new code waits for the entire response before streaming any tokens, which means users see nothing during generation.
🔍 Root Cause
In `lib/agents/researcher.tsx`:
uiStream.update(answerSection); // ❌ Updates UI with empty streamText.value
// Then blocks waiting for the full response:
const [text, toolResults, toolCalls] = await Promise.all([
result.text, // ❌ Waits for ENTIRE response
result.toolResults,
result.toolCalls,
]);
// Only AFTER everything is done:
streamText.done(fullResponse); // ❌ Too late - no streaming happened
The UI gets updated with an empty `streamText.value` and no tokens ever reach it incrementally.
🛠️ Fix Required
Replace the blocking await with incremental streaming:
// In lib/agents/researcher.tsx, replace lines after result creation:
uiStream.update(null)
uiStream.update(answerSection);
// Stream tokens incrementally
for await (const delta of result.textStream) {
fullResponse += delta;
streamText.update(fullResponse);
}
streamText.done(fullResponse);
// Now await tool results (after text streaming is complete)
const [toolResults, toolCalls] = await Promise.all([
result.toolResults,
result.toolCalls,
]);
const toolResponses: ToolResultPart[] = (toolResults || []).map(toolResult => ({
...toolResult
}));
if (toolResponses.some(tr => tr.result === undefined || tr.result === null)) {
hasError = true;
}
messages.push({
role: 'assistant',
content: [{ type: 'text', text: fullResponse }, ...(toolCalls || [])]
})
if (toolResponses.length > 0) {
messages.push({ role: 'tool', content: toolResponses })
}
return { result, fullResponse, hasError, toolResponses }
🎯 What This Fixes
🔄 Alternative: If Tools Should Stream Concurrently
If you want tools to execute while text is streaming (for even better performance), you can run them in parallel:
uiStream.update(null)
uiStream.update(answerSection);
// Start streaming text and fetching tool results in parallel
const toolResultsPromise = Promise.all([
result.toolResults,
result.toolCalls,
]);
// Stream tokens while tools execute
for await (const delta of result.textStream) {
fullResponse += delta;
streamText.update(fullResponse);
}
streamText.done(fullResponse);
// Wait for tools to finish (they may already be done)
const [toolResults, toolCalls] = await toolResultsPromise;
// ... rest of the code
Would you like me to generate the complete updated file?
User description
This submission introduces a series of major architectural upgrades to improve the application's concurrency and performance. Key changes include refactoring the core state to support multiple simultaneous chat conversations, offloading expensive client-side map calculations to a Web Worker via a new reusable hook, and enabling parallel execution of server-side AI tools. These changes make the application faster, more responsive, and more scalable.
PR created automatically by Jules for task 10799411930901360787
PR Type
Enhancement
Description
Refactored core state to support multiple concurrent chat conversations
Offloaded expensive map calculations to Web Worker via reusable hook
`useWorker` hook for managing Web Worker lifecycle
Enabled parallel server-side AI tool execution in researcher agent
Updated state management and UI rendering for concurrent conversations
Updated `onGetUIState` to render all conversations sequentially
Diagram Walkthrough
File Walkthrough
9 files
Generic React hook for Web Worker management
Web Worker for geometric calculations
Refactor state to support concurrent conversations
Update initial state for conversation array
Adapt search page to new conversation structure
Replace clearChat with new conversation handler
Add test ID to drawing mode button
Integrate Web Worker for map calculations
Simplify tool execution for parallel processing
1 files
Fix import path to use aliasSummary by CodeRabbit
New Features
Performance