
Conversation


@google-labs-jules google-labs-jules bot commented Oct 20, 2025

User description

This submission introduces a series of major architectural upgrades to improve the application's concurrency and performance. Key changes include refactoring the core state to support multiple simultaneous chat conversations, offloading expensive client-side map calculations to a Web Worker via a new reusable hook, and enabling parallel execution of server-side AI tools. These changes make the application faster, more responsive, and more scalable.


PR created automatically by Jules for task 10799411930901360787


PR Type

Enhancement


Description

  • Refactored core state to support multiple concurrent chat conversations

    • Each conversation maintains independent message history
    • Users can initiate new chats while previous ones are running
  • Offloaded expensive map calculations to Web Worker via reusable hook

    • Created generic useWorker hook for managing Web Worker lifecycle
    • Turf.js geometric calculations now execute off main thread
  • Enabled parallel server-side AI tool execution in researcher agent

    • Simplified tool call handling by awaiting final result
    • Allows SDK to run multiple tool calls concurrently
  • Updated state management and UI rendering for concurrent conversations

    • Added conversation separators in UI
    • Modified onGetUIState to render all conversations sequentially
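
The refactored state shape can be sketched in TypeScript. This is inferred from the walkthrough's `{ id, chatId, messages }` note and is not copied from the PR; the field types and the helper name are assumptions.

```typescript
// Sketch of the multi-conversation state shape; names and types are
// inferred from this PR's description, not taken from the actual code.
type AIMessage = { id: string; role: 'user' | 'assistant'; content: string };

type Conversation = { id: string; chatId: string; messages: AIMessage[] };

type AIState = { conversations: Conversation[] };

// Immutably append a message to the last (active) conversation,
// which is how this PR routes new messages.
function appendToLastConversation(state: AIState, message: AIMessage): AIState {
  const conversations = state.conversations.slice();
  const last = conversations[conversations.length - 1];
  conversations[conversations.length - 1] = {
    ...last,
    messages: [...last.messages, message]
  };
  return { ...state, conversations };
}
```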

Diagram Walkthrough

flowchart LR
  A["Single Conversation State"] -->|Refactor| B["Multiple Concurrent Conversations"]
  C["Main Thread Calculations"] -->|Offload| D["Web Worker via useWorker Hook"]
  E["Sequential Tool Execution"] -->|Enable| F["Parallel Tool Execution"]
  B --> G["Enhanced Performance & Scalability"]
  D --> G
  F --> G

File Walkthrough

Relevant files

Enhancement (9 files)
  • useWorker.ts: Generic React hook for Web Worker management (+50/-0)
  • turf.worker.ts: Web Worker for geometric calculations (+35/-0)
  • actions.tsx: Refactor state to support concurrent conversations (+217/-203)
  • page.tsx: Update initial state for conversation array (+10/-1)
  • page.tsx: Adapt search page to new conversation structure (+12/-8)
  • chat-panel.tsx: Replace clearChat with new conversation handler (+24/-13)
  • map-toggle.tsx: Add test ID to drawing mode button (+1/-1)
  • mapbox-map.tsx: Integrate Web Worker for map calculations (+60/-85)
  • researcher.tsx: Simplify tool execution for parallel processing (+17/-35)

Bug fix (1 file)
  • hooks.ts: Fix import path to use alias (+1/-1)

Summary by CodeRabbit

  • New Features

    • Multi-conversation support enables managing multiple chat threads simultaneously within a single session
    • New "New Conversation" button for easily creating and switching between separate discussion threads
    • Enhanced conversation organization and improved state management for better workflow
  • Performance

    • Optimized geographic calculations for drawing and measurement features

This commit introduces several major architectural changes to improve the application's overall concurrency, performance, and scalability.

1.  **Concurrent Chat Conversations:**
    *   Refactored the core Vercel AI SDK state (`AIState`) to support an array of independent `Conversation` objects.
    *   This allows users to initiate new chat queries while previous ones are still running, with all conversations rendered in a single, sequential view.
    *   Updated the `submit` action, UI components, and state management lifecycle to handle the new concurrent state structure.

2.  **Client-Side Performance with Web Workers:**
    *   Offloaded expensive `turf.js` geometric calculations (area, length) in the map component to a client-side Web Worker.
    *   Created a generic and reusable `useWorker` React hook to abstract the complexity of managing Web Worker lifecycles and communication, establishing a scalable pattern for future client-side optimizations.
    *   This ensures the map UI remains smooth and responsive during complex drawing operations.

3.  **Parallel Server-Side Tool Execution:**
    *   Refactored the `researcher` agent to correctly leverage the Vercel AI SDK's built-in support for parallel tool execution.
    *   By awaiting the final result of the `streamText` call, the agent now allows the SDK to run multiple AI tool calls concurrently, reducing latency for complex server-side queries.
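
The `useWorker` hook from point 2 is not shown in this thread. A framework-free sketch of the lifecycle it reportedly manages (each request resets the loading/data/error state; a response or error settles it) might look like this. `WorkerClient` and `WorkerLike` are illustrative names, not the PR's actual API.

```typescript
// Minimal worker-client lifecycle sketch, assuming the hook's described
// return values (postMessage, data, error, isLoading). Illustrative only.
type WorkerLike = {
  postMessage(data: unknown): void;
  onmessage: ((ev: { data: unknown }) => void) | null;
  onerror: ((err: unknown) => void) | null;
  terminate(): void;
};

class WorkerClient<T> {
  isLoading = false;
  data: T | null = null;
  error: unknown = null;

  constructor(private worker: WorkerLike) {
    worker.onmessage = ev => {
      this.data = ev.data as T;
      this.isLoading = false;
    };
    worker.onerror = err => {
      this.error = err;
      this.isLoading = false;
    };
  }

  postMessage(message: unknown): void {
    // Reset state before each request, mirroring the described hook behavior.
    this.isLoading = true;
    this.data = null;
    this.error = null;
    this.worker.postMessage(message);
  }

  terminate(): void {
    // In a React hook this would run in the effect cleanup on unmount.
    this.worker.terminate();
  }
}
```

In the real hook the same state lives in `useState` and the worker is created and terminated inside a `useEffect`.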
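
The latency win in point 3 comes from a general property of independent async work: awaiting calls one at a time sums their latencies, while starting them together bounds latency by the slowest call. An SDK-free sketch of the difference (these helpers are illustrative and stand in for AI tool calls; they are not the Vercel AI SDK API):

```typescript
// Illustrative stand-in for an AI tool invocation.
type ToolCall<T> = () => Promise<T>;

// Sequential: each tool waits for the previous one; total latency is the sum.
async function runSequential<T>(calls: ToolCall<T>[]): Promise<T[]> {
  const results: T[] = [];
  for (const call of calls) {
    results.push(await call());
  }
  return results;
}

// Parallel: all tools start immediately; total latency is the slowest call.
async function runParallel<T>(calls: ToolCall<T>[]): Promise<T[]> {
  return Promise.all(calls.map(call => call()));
}
```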
vercel bot commented Oct 20, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: qcx | Deployment: Ready | Preview | Comment | Updated (UTC): Oct 20, 2025 2:35pm


@google-labs-jules
Contributor Author

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!


For security, I will only act on instructions from the user who triggered this task.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.


coderabbitai bot commented Oct 20, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The pull request restructures the AI state model from a flat messages array to a multi-conversation architecture, storing an array of Conversation objects. Complementary changes update state handlers, message routing, UI transformation logic, and introduce Web Worker infrastructure for offloading geographic computations. Chat panel UI handling is refined to support new conversation creation.

Changes

Cohort / File(s) / Summary

  • AI State Architecture (app/actions.tsx, app/page.tsx, app/search/[id]/page.tsx): Refactored AIState from the flat { messages, chatId } structure to a { conversations: Conversation[] } model. Updated initialAIState initialization, message routing to target the last active conversation, and per-conversation UI state transformation in getUIStateFromAIState and onSetAIState. Dynamic conversation initialization in the search page now fetches and transforms database messages.
  • Chat Panel UI Updates (components/chat-panel.tsx): Removed clearChat from useActions; enhanced handleSubmit to accept an optional newChat parameter. Introduced a dedicated handleNewConversation handler that clears state, submits with the newChat=true flag, and appends the response. Updated the button label from "New" to "New Conversation".
  • Worker Infrastructure (hooks/useWorker.ts, workers/turf.worker.ts): Introduced a new useWorker<T> hook for Web Worker lifecycle management with postMessage, data, error, and isLoading return values. Created turf.worker.ts to offload geographic calculations: it computes polygon area/centroid and linestring length/center for drawn features.
  • Map Integration (components/map/mapbox-map.tsx, components/map-toggle.tsx): Integrated the turf worker into the mapbox-map component, replacing inline geographic calculations with a worker-based approach using turfWorker.postMessage and effect-driven feature updates. Added a data-testid attribute to the drawing mode button in map-toggle.
  • Agent & Researcher Logic (lib/agents/researcher.tsx): Consolidated streaming logic from incremental deltas to post-stream structured result handling. Removed the mcp parameter from the researcher function signature. Simplified error handling; messages are updated with the final toolCalls payload.
  • Import Path Correction (mapbox_mcp/hooks.ts): Updated the import path for getModel from 'QCX/lib/utils' to '@/lib/utils' for correct module resolution.
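
An alias like '@/lib/utils' only resolves if the project maps it in its TypeScript configuration. A typical mapping, common in Next.js projects, is sketched below; the exact mapping in this repository is an assumption.

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./*"]
    }
  }
}
```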

Sequence Diagram

sequenceDiagram
    participant UI as Chat UI
    participant Actions as app/actions.tsx
    participant AIState as AI State
    participant Conv as Conversation<br/>(last active)
    participant Conv2 as Conversation<br/>(prev)
    participant Worker as turf.worker
    participant MapComp as mapbox-map

    UI->>Actions: submit (user message)
    Actions->>AIState: update conversations array
    AIState->>Conv: append user message to last conversation
    Note over Conv: { id, chatId, messages: [..., userMsg] }
    
    UI->>Actions: newChat=true
    Actions->>AIState: create new Conversation & append
    Note over AIState: conversations: [...Conv2, ...newConv]
    AIState->>Conv2: (previous conversation persisted)
    AIState->>Conv: (new empty conversation)
    
    MapComp->>Worker: postMessage({ features })
    Worker->>Worker: compute area/centroid (Polygon)<br/>compute length/center (LineString)
    Worker-->>MapComp: postMessage(results)
    MapComp->>MapComp: update featureType, measurement,<br/>label coordinates
    MapComp->>MapComp: setMapData with drawnFeatures

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

The review demands careful attention to the interconnected state architecture change across multiple files, worker lifecycle management integration, and per-conversation message routing logic. While changes follow a consistent pattern, the refactoring spans diverse concerns (state shape, UI transformation, worker threading, streaming logic) requiring separate validation for each area.


Poem

🐰 Conversations bloom, no longer flat,
Each chat its own, imagine that!
Workers toil in threads unseen,
Drawing maps with measurements keen—
State flows fresh through each new talk,
Multi-conversation walks!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 14.29%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Title Check (❓ Inconclusive): The title "Implement System-Wide Concurrency and Performance Improvements" accurately references real aspects of the changeset: the PR does introduce concurrency improvements through multi-conversation support and parallel tool execution, plus performance improvements via Web Worker integration. However, the title relies on broad category terms; "System-Wide", "Concurrency", and "Performance Improvements" don't tell someone scanning PR history what was fundamentally refactored, namely the multi-conversation state model, the Web Worker offload for map calculations, and parallel execution in the researcher agent. Consider a more specific title, such as "Refactor chat state to support multiple concurrent conversations and offload map calculations to Web Workers".

✅ Passed checks (1 passed)

  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.

Comment @coderabbitai help to get the list of available commands and usage tips.

@ngoiyaeric ngoiyaeric marked this pull request as ready for review October 20, 2025 12:22
@charliecreates charliecreates bot requested a review from CharlieHelps October 20, 2025 12:22

qodo-merge-pro bot commented Oct 20, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Unvalidated worker messages

Description: The hook posts messages to a Worker with arbitrary 'any' payload and resets state without
validating origin or message shape, which could enable UI state spoofing if a malicious or
wrong-origin worker URL is supplied.
useWorker.ts [40-47]

Referred Code
const postMessage = (messageData: any) => {
  if (workerRef.current) {
    setIsLoading(true);
    setError(null);
    setData(null);
    workerRef.current.postMessage(messageData);
  }
};
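
A minimal guard addressing this finding would validate a worker message's shape before committing it to state. The `{ id, calculation }` shape below follows the referred code elsewhere in this review and is an assumption about the real payload.

```typescript
// Type guard sketch: accept a worker result only if it matches the
// expected shape. The field names are assumptions based on this review.
type WorkerResult = { id: string; calculation: number };

function isWorkerResult(value: unknown): value is WorkerResult {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === 'string' && typeof v.calculation === 'number';
}
```

Inside the hook's `onmessage` handler, a message failing this check would set `error` instead of `data`.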
Potential DOM injection

Description: Creates DOM elements for labels using inner text from computed values but without
sanitization of potential coordinates-derived data; if feature properties leak untrusted
strings into labels in future, this pattern could allow DOM injection—use textContent only
and avoid HTML assignment.
mapbox-map.tsx [108-129]

Referred Code
const el = document.createElement('div');
el.className = `${featureType.toLowerCase()}-label`;
el.style.background = 'rgba(255, 255, 255, 0.8)';
el.style.padding = '4px 8px';
el.style.borderRadius = '4px';
el.style.fontSize = '12px';
el.style.fontWeight = 'bold';
el.style.color = '#333333';
el.style.boxShadow = '0 2px 4px rgba(0,0,0,0.2)';
el.style.pointerEvents = 'none';
el.textContent = measurement;

const marker = new mapboxgl.Marker({ element: el })
  .setLngLat(coordinates)
  .addTo(map.current);

if (featureType === 'Polygon') {
  polygonLabelsRef.current[id] = marker;
} else {
  lineLabelsRef.current[id] = marker;
}


 ... (clipped 1 lines)
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
No custom compliance provided

Follow the guide to enable custom compliance check.


@ngoiyaeric
Collaborator

@coderabbitai review this pull request for improvements


qodo-merge-pro bot commented Oct 20, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

High-level
Re-evaluate the multi-conversation interaction model

The current multi-conversation implementation only allows interaction with the
newest conversation. This should be updated to allow users to select and resume
any previous conversation.

Examples:

app/actions.tsx [71-73]
    const lastConversation =
      currentAIState.conversations[currentAIState.conversations.length - 1]
    const messages: CoreMessage[] = [...(lastConversation.messages as any[])].filter(
app/actions.tsx [135-137]
  const lastConversation =
    currentAIState.conversations[currentAIState.conversations.length - 1]
  const messages: CoreMessage[] = [...(lastConversation.messages as any[])].filter(

Solution Walkthrough:

Before:

async function submit(formData) {
  'use server'
  const aiState = getMutableAIState();
  const currentAIState = aiState.get();

  // Always selects the last conversation in the array.
  const lastConversation =
    currentAIState.conversations[currentAIState.conversations.length - 1];

  // All new messages are added to the last conversation.
  const userMessage = { ... };
  lastConversation.messages.push(userMessage);

  // ... agent logic ...

  const assistantMessage = { ... };
  lastConversation.messages.push(assistantMessage);

  aiState.update({ ...currentAIState });
}

After:

async function submit(formData) {
  'use server'
  // The ID of the target conversation should be passed from the client.
  const conversationId = formData.get('conversationId');

  const aiState = getMutableAIState();
  const currentAIState = aiState.get();

  // Find the specific conversation to interact with.
  const activeConversation = currentAIState.conversations.find(
    c => c.id === conversationId
  );

  if (!activeConversation) { /* handle error */ }

  // Add new messages to the correct, active conversation.
  const userMessage = { ... };
  activeConversation.messages.push(userMessage);

  // ... agent logic ...

  const assistantMessage = { ... };
  activeConversation.messages.push(assistantMessage);

  aiState.update({ ...currentAIState });
}
Suggestion importance[1-10]: 9


Why: This suggestion correctly identifies a major design limitation in the new multi-conversation feature, where only the last conversation is interactive, which misaligns with user expectations for a multi-chat UI.

High
Possible issue
Fix new conversation button logic

In handleNewConversation, remove the second setMessages call that adds the
response from submit to the UI. The submit action is only needed to reset the
server state, and the UI should remain empty.

components/chat-panel.tsx [117-124]

 const handleNewConversation = async () => {
   setMessages([])
   clearAttachment()
   const formData = new FormData()
   formData.append('newChat', 'true')
-  const responseMessage = await submit(formData)
-  setMessages(currentMessages => [...currentMessages, responseMessage as any])
+  // The submit action will create a new conversation state on the server.
+  // We don't need to display anything from its response, just clear the UI.
+  await submit(formData)
 }
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that setMessages is called with the return value of submit, which adds an unwanted empty component to the UI after clearing it. Removing this call fixes the bug and correctly resets the chat interface for a new conversation.

Medium
Fix race condition in worker logic

To prevent a race condition, modify the web worker to return the features it
processed along with the results. This ensures the useEffect hook operates on a
consistent snapshot of data.

components/map/mapbox-map.tsx [81-140]

 useEffect(() => {
   if (turfWorker.data && map.current && drawRef.current) {
-    const features = drawRef.current.getAll().features;
+    // The worker now returns the features it processed, preventing a race condition.
+    const { results, features } = turfWorker.data;
     const currentDrawnFeatures: Array<{ id: string; type: 'Polygon' | 'LineString'; measurement: string; geometry: any }> = [];
 
-    turfWorker.data.forEach(result => {
+    results.forEach(result => {
       const { id, calculation } = result;
       if (!calculation) return;
 
       const feature = features.find(f => f.id === id);
       if (!feature) return;
 
       let featureType: 'Polygon' | 'LineString' | null = null;
       ...
     });
     setMapData(prevData => ({ ...prevData, drawnFeatures: currentDrawnFeatures }));
   }
 }, [turfWorker.data, formatMeasurement, setMapData])

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a potential race condition where the features on the map could change while the web worker is processing, leading to inconsistent state. Passing the features to and from the worker ensures data consistency, making the application more robust.

Medium


coderabbitai bot commented Oct 20, 2025

@ngoiyaeric: I'll review this pull request for improvements. This looks like a substantial PR with architectural changes for concurrency and performance enhancements. I'll analyze the changes across all the modified files and provide feedback.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@ngoiyaeric
Collaborator

@coderabbitai review

This error message:

Uncaught TypeError: Cannot read properties of undefined (reading 'length')

is a classic JavaScript runtime error that occurs when your code tries to access the .length property of something that is undefined. In this case, the stack trace is minified, but we can still break down what's likely happening.


🧠 What It Means

  • Somewhere in your code, you're doing something like:
    const len = someVar.length;
    But someVar is undefined, so JavaScript throws a TypeError.

🔍 Likely Causes

Based on the stack trace and the fact that this is happening in a production Vercel deployment, here are some common culprits:

  • Missing or malformed data: A component might expect a prop or API response to be an array or string, but it's receiving undefined.
  • Async data not yet loaded: If you're rendering before data fetching completes, you might be accessing .length on undefined.
  • Improper destructuring: Something like:
    const { items } = props;
    items.length; // but props is undefined
  • Optional chaining missing: You might need items?.length instead of items.length.

🛠️ How to Fix It

Here’s a step-by-step approach tailored to your debugging style:

  1. Use DevTools to inspect the failing line
    Unminify the bundle or use source maps if available. Look at line 1:67905 in the file 755-94bb09cbd8e7c0f8….

  2. Find the variable that’s undefined
    Identify which variable is being accessed with .length. Add a console log before that line:

    console.log('Variable:', someVar);
  3. Add a guard clause or fallback
    Once you know what’s undefined, protect the access:

    if (Array.isArray(someVar)) {
      // safe to use someVar.length
    }
  4. Check upstream data flow
    If it's a prop or fetched data, trace back to where it's coming from. Ensure it's initialized properly.

  5. Use optional chaining or default values

    const len = (someVar || []).length;
    // or
    const len = someVar?.length ?? 0;

If you want, I can help you trace this in your actual codebase—just paste the relevant component or data-fetching logic. Want to dig into the source map or check the Vercel deployment logs next?


coderabbitai bot commented Oct 20, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Let's examine the key files changed in this PR for .length accesses
echo "=== Examining actions.tsx for .length accesses ==="
rg -n "\.length" app/actions.tsx -C 2

echo -e "\n=== Examining page.tsx files for .length accesses ==="
rg -n "\.length" app/search/page.tsx -C 2
rg -n "\.length" app/page.tsx -C 2

echo -e "\n=== Examining chat-panel.tsx for .length accesses ==="
rg -n "\.length" components/chat-panel.tsx -C 2

echo -e "\n=== Looking for conversation state access patterns ==="
rg -n "conversations\[" app/actions.tsx app/page.tsx app/search/page.tsx components/chat-panel.tsx -C 1

Length of output: 3615


🏁 Script executed:

#!/bin/bash

# Let's check the initial state structure and where conversations is initialized
echo "=== Checking initial state in page.tsx files ==="
rg -n "conversations" app/page.tsx -C 3

# Check the AIState type definition
echo -e "\n=== Looking for AIState type definition ==="
rg -n "type AIState|interface AIState" -C 5

# Check if there are any guards before accessing conversations
echo -e "\n=== Checking for guards before conversations access in actions.tsx ==="
rg -n "conversations\?" app/actions.tsx -C 2

# Look at the specific lines where the error likely occurs
echo -e "\n=== Examining lines 70-73 in actions.tsx ==="
sed -n '68,75p' app/actions.tsx

echo -e "\n=== Examining lines 134-140 in actions.tsx ==="
sed -n '132,142p' app/actions.tsx

echo -e "\n=== Examining lines 365-370 in actions.tsx ==="
sed -n '363,372p' app/actions.tsx

Length of output: 1984


🏁 Script executed:

#!/bin/bash

# Check where conversations array might be initialized as empty
echo "=== Looking for empty conversations initialization ==="
rg -n "conversations:\s*\[\]" -C 2

# Check the createAI or state initialization
echo -e "\n=== Looking for createAI usage and initial state ==="
rg -n "createAI|initialAIState" app/actions.tsx -C 3

# Check if there's any code that might clear or reset conversations
echo -e "\n=== Looking for code that might clear conversations ==="
rg -n "conversations\s*=\s*\[" -C 2

# Let's see the full context around one of the problematic sections
echo -e "\n=== Full context of submitUserMessage function ==="
sed -n '60,80p' app/actions.tsx

Length of output: 1580


🏁 Script executed:

#!/bin/bash

# Let's check how state is persisted/loaded - could be coming from server
echo "=== Checking for state loading from server or storage ==="
rg -n "getAIState|setAIState" app/actions.tsx -C 2

# Look for the onGetUIState implementation which loads state
echo -e "\n=== Checking onGetUIState implementation ==="
sed -n '473,490p' app/actions.tsx

# Check if there's any state migration or restoration logic
echo -e "\n=== Looking for state restoration logic ==="
rg -n "restore|migrate|hydrate" -i --type ts -C 2

# Let's see if there are other places accessing conversations without guards
echo -e "\n=== All places accessing conversations array ==="
rg -n "conversations\[" app/actions.tsx

Length of output: 3406


@ngoiyaeric: I've found the root cause of your error! 🎯

The issue is that your code is accessing conversations array without checking if it exists, and this is happening because persisted state from production still has the old state structure (before this PR's refactoring from single conversation to multiple conversations).

🔍 Root Cause

In app/actions.tsx, there are four critical locations where the code accesses conversations[conversations.length - 1] without guards:

  1. Line 72 - in the resolution_search action
  2. Line 136 - in the main submit function
  3. Line 367 - in the specific API fallback
  4. Line 478 - in onGetUIState

When users with old persisted state (before your PR) load the app, conversations is undefined, causing the .length access to throw the error.

🛠️ Fix Required

Add guards and migration logic to handle both old and new state structures:

// In app/actions.tsx, add a helper function at the top after type definitions:

function ensureConversations(aiState: AIState): AIState {
  // Migration: Handle old state structure without conversations array
  if (!aiState.conversations || !Array.isArray(aiState.conversations)) {
    return {
      conversations: [
        {
          id: nanoid(),
          chatId: nanoid(),
          messages: []
        }
      ]
    }
  }
  
  // Ensure at least one conversation exists
  if (aiState.conversations.length === 0) {
    aiState.conversations.push({
      id: nanoid(),
      chatId: nanoid(),
      messages: []
    })
  }
  
  return aiState
}

// Then update all four problematic locations:

// Location 1 (around line 70):
const currentAIState = ensureConversations(aiState.get())
const lastConversation =
  currentAIState.conversations[currentAIState.conversations.length - 1]

// Location 2 (around line 134):
const currentAIState = ensureConversations(aiState.get())
const lastConversation =
  currentAIState.conversations[currentAIState.conversations.length - 1]

// Location 3 (around line 365):
const currentAIState = ensureConversations(aiState.get())
const lastConversation =
  currentAIState.conversations[currentAIState.conversations.length - 1]

// Location 4 (around line 475 in onGetUIState):
const aiState = ensureConversations(getAIState() as AIState)
if (aiState) {
  const allUiComponents: UIState = []
  aiState.conversations.forEach((conversation, index) => {
    // ... rest of the code
  })
}

🎯 Why This Fixes It

  1. Migration path: The ensureConversations helper handles state from before your PR
  2. Guards against empty array: Ensures at least one conversation always exists
  3. Type safety: Maintains proper AIState structure
  4. No breaking changes: New sessions work as expected, old sessions get migrated

🧪 Testing Checklist

After applying the fix:

  • Clear browser storage and test fresh session
  • Test with existing session data
  • Verify multiple concurrent conversations still work
  • Check production deployment logs for the error

Would you like me to generate the complete diff for these changes?


@charliecreates charliecreates bot left a comment


  • Using lastConversation plus stale snapshots of aiState can overwrite concurrent updates and mis-route messages during parallel requests; target conversations by id and update from the latest state.
  • The Web Worker URL is not memoized and uses a leading-slash path, likely causing worker re-creation on every render and potential bundler resolution issues.
  • useWorker’s effect depends on the URL object identity, making it sensitive to equivalent URL instances and causing unnecessary worker churn.
  • The researcher agent no longer streams text increments, likely hurting perceived responsiveness; consider restoring progressive streaming while keeping tools parallelized.
Additional notes (2)
  • Readability | app/actions.tsx:158-158
    groupeId appears to be a misspelling and is used repeatedly to group related messages. Minor, but renaming improves readability and reduces future confusion.

  • Performance | lib/agents/researcher.tsx:87-104
    By switching from streaming the model’s output to awaiting the final result, you’ve likely removed incremental token streaming to the UI. This can noticeably degrade perceived latency and responsiveness, even if tool calls execute in parallel.

Summary of changes
  • Refactored AI state to support multiple concurrent conversations by introducing a conversations: Conversation[] structure and updating all message mutations to operate on the last conversation.
  • Adjusted server action submit (and related UI state handling) to append messages to the last conversation, added support for starting a new conversation via newChat flag, and updated onGetUIState and onSetAIState accordingly.
  • Reworked ChatPanel to replace “Clear Chat” with “New Conversation,” posting newChat to the server and adjusting client-side UI updates.
  • Introduced a generic useWorker React hook and a new turf.worker.ts Web Worker to offload turf.js measurements for polygons and lines; updated mapbox-map.tsx to use the worker and render labels from worker results.
  • Updated the researcher agent to rely on the SDK’s parallel tool execution by awaiting the aggregated result instead of consuming the stream manually.
  • Minor tweaks: added test id to drawing mode button; fixed import path in mapbox_mcp/hooks.ts; updated app/page.tsx and app/search/[id]/page.tsx to use the new AI state shape.

app/actions.tsx Outdated
Comment on lines 134 to 137
const currentAIState = aiState.get()
const lastConversation =
currentAIState.conversations[currentAIState.conversations.length - 1]
const messages: CoreMessage[] = [...(lastConversation.messages as any[])].filter(


Using the last conversation and a snapshot of aiState (currentAIState = aiState.get()) across the entire request risks clobbering concurrent updates. If another request adds a conversation while this one is running, your later aiState.update({...currentAIState, conversations: [...]}) calls can overwrite that newer conversation. Additionally, targeting the last conversation (conversations[conversations.length - 1]) makes updates nondeterministic in concurrent scenarios (e.g., a long-running request started earlier will keep writing into whatever is last now). This undermines the stated goal of safe concurrency.

Suggestion

Target a specific conversation by id and always read fresh state immediately before writing. Include a conversationId (or chatId) in the form, look up that conversation in the latest state, and replace only that element immutably. For example:

  • In the form submit, append the conversation id: formData.append('conversationId', activeConversationId).
  • In submit:

const state = aiState.get();
const targetId = (formData?.get('conversationId') as string) ?? state.conversations.at(-1)!.id;
const idx = state.conversations.findIndex(c => c.id === targetId);
if (idx !== -1) {
  const conversations = state.conversations.slice();
  const conv = conversations[idx];
  const userMessage: AIMessage = { id: nanoid(), role: 'user', content, type };
  conversations[idx] = { ...conv, messages: [...conv.messages, userMessage] };
  aiState.update({ ...state, conversations });
}

Apply the same pattern for every subsequent mutation (tool outputs, assistant responses, etc.). Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.

Comment on lines +495 to 556
// Find the conversation that was updated and save it.
for (const conversation of state.conversations) {
  if (conversation.messages.some(e => e.type === 'response')) {
    const { chatId, messages } = conversation
    const createdAt = new Date()
    const path = `/search/${chatId}`

    let title = 'Untitled Chat'
    if (messages.length > 0) {
      const firstMessageContent = messages[0].content
      if (typeof firstMessageContent === 'string') {
        try {
          const parsedContent = JSON.parse(firstMessageContent)
          title = parsedContent.input?.substring(0, 100) || 'Untitled Chat'
        } catch (e) {
          title = firstMessageContent.substring(0, 100)
        }
      } else if (Array.isArray(firstMessageContent)) {
        const textPart = (
          firstMessageContent as { type: string; text?: string }[]
        ).find(p => p.type === 'text')
        title =
          textPart && textPart.text
            ? textPart.text.substring(0, 100)
            : 'Image Message'
      }
    }

    const updatedMessages: AIMessage[] = [
      ...messages,
      {
        id: nanoid(),
        role: 'assistant',
        content: `end`,
        type: 'end'
      }
    ]

    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const actualUserId = await getCurrentUserIdOnServer()

    if (!actualUserId) {
      console.error(
        'onSetAIState: User not authenticated. Chat not saved.'
      )
      continue // Continue to the next conversation
    }

    const chat: Chat = {
      id: chatId,
      createdAt,
      userId: actualUserId,
      path,
      title,
      messages: updatedMessages
    }
    await saveChat(chat, actualUserId)
  }
}


Persisting chats inside onSetAIState now iterates all conversations and saves any that contain a response. Without a guard, this can re-save the same conversation multiple times on subsequent update/done calls, increasing DB churn and risking inconsistent createdAt updates.

Suggestion

Track a persistence marker per conversation (e.g., persistedAt or a sentinel message type) or only persist when you append the terminal end message. Example:

if (conversation.messages.at(-1)?.type === 'end' && !conversation.persistedAt) {
  // save and mark persisted
}

If adding a field is undesirable, you can compare a stored hash/length to avoid re-saving unchanged conversations. Reply with "@CharlieHelps yes please" if you'd like me to add a minimal dedupe mechanism.
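
One lightweight variant of the hash/length idea mentioned above can be sketched like this. It is an illustrative guard, not code from the PR — the Map and the function name are hypothetical — that skips re-saving a conversation whose message count hasn't changed since the last successful save:

```typescript
// Hypothetical dedupe guard: remember how many messages each chat had when
// it was last persisted, and skip saves when nothing has changed.
const savedLengths = new Map<string, number>();

function shouldPersist(chatId: string, messageCount: number): boolean {
  const last = savedLengths.get(chatId);
  if (last === messageCount) return false; // unchanged since last save
  savedLengths.set(chatId, messageCount);
  return true;
}
```

A content hash would be more robust than a length check (edits that keep the count constant would slip through), but the length check avoids hashing cost on every state update.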

Comment on lines 42 to 43
const turfWorker = useWorker<any[]>(new URL('/workers/turf.worker.ts', import.meta.url));


The Web Worker URL is created with a leading slash and without memoization. new URL('/workers/turf.worker.ts', import.meta.url) is resolved as an absolute URL, which is often incorrect in bundlers and Next.js; moreover, creating a new URL every render tears down and recreates the worker due to the useEffect dependency in useWorker, causing performance issues and flicker.

Suggestion

Memoize a correct relative URL and pass the memoized reference into useWorker to avoid worker churn and path resolution issues:

import { useMemo } from 'react';
...
const workerUrl = useMemo(() => new URL('../../workers/turf.worker.ts', import.meta.url), []);
const turfWorker = useWorker<any[]>(workerUrl);

This ensures a stable identity and a bundler-friendly path. Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.

Comment on lines +82 to +140
if (turfWorker.data && map.current && drawRef.current) {
  const features = drawRef.current.getAll().features;
  const currentDrawnFeatures: Array<{ id: string; type: 'Polygon' | 'LineString'; measurement: string; geometry: any }> = [];

  turfWorker.data.forEach(result => {
    const { id, calculation } = result;
    if (!calculation) return;

    const feature = features.find(f => f.id === id);
    if (!feature) return;

    let featureType: 'Polygon' | 'LineString' | null = null;
    let measurement = '';
    let coordinates: [number, number] | undefined;

    if (calculation.type === 'Polygon') {
      featureType = 'Polygon';
      measurement = formatMeasurement(calculation.area, true);
      coordinates = calculation.center;
    } else if (calculation.type === 'LineString') {
      featureType = 'LineString';
      measurement = formatMeasurement(calculation.length, false);
      coordinates = calculation.center;
    }

    if (featureType && measurement && coordinates && map.current) {
      const el = document.createElement('div');
      el.className = `${featureType.toLowerCase()}-label`;
      el.style.background = 'rgba(255, 255, 255, 0.8)';
      el.style.padding = '4px 8px';
      el.style.borderRadius = '4px';
      el.style.fontSize = '12px';
      el.style.fontWeight = 'bold';
      el.style.color = '#333333';
      el.style.boxShadow = '0 2px 4px rgba(0,0,0,0.2)';
      el.style.pointerEvents = 'none';
      el.textContent = measurement;

      const marker = new mapboxgl.Marker({ element: el })
        .setLngLat(coordinates)
        .addTo(map.current);

      if (featureType === 'Polygon') {
        polygonLabelsRef.current[id] = marker;
      } else {
        lineLabelsRef.current[id] = marker;
      }

      currentDrawnFeatures.push({
        id,
        type: featureType,
        measurement,
        geometry: feature.geometry,
      });
    }
  });

  setMapData(prevData => ({ ...prevData, drawnFeatures: currentDrawnFeatures }));
}
}, [turfWorker.data, formatMeasurement, setMapData])


Worker results are processed asynchronously without staleness protection. Rapid edits can cause out-of-order results to repaint labels for older geometries. Although you guard against deleted features, measurements can still be stale relative to the current geometry.

Suggestion

Introduce a monotonically increasing requestId and echo it through the worker so late results can be ignored:

  • In the component:
const calcRequestId = useRef(0)
const updateMeasurementLabels = useCallback(() => {
  // ... remove existing labels
  const requestId = ++calcRequestId.current
  const features = drawRef.current!.getAll().features
  turfWorker.postMessage({ requestId, features })
}, [turfWorker])

useEffect(() => {
  if (!turfWorker.data || !map.current || !drawRef.current) return
  const { requestId, results } = turfWorker.data
  if (requestId !== calcRequestId.current) return // ignore stale
  // process results as before...
}, [turfWorker.data])
  • Update the worker to echo requestId back (see worker comment).

Reply with "@CharlieHelps yes please" if you'd like me to add these correlated request changes across the hook, worker, and map component.

Comment on lines 16 to 39
useEffect(() => {
// Create a new worker instance
const worker = new Worker(workerUrl, { type: 'module' });
workerRef.current = worker;

worker.onmessage = (event: MessageEvent<T>) => {
setData(event.data);
setIsLoading(false);
};

worker.onerror = (err: ErrorEvent) => {
setError(err.message);
setIsLoading(false);
};

// Cleanup worker on component unmount
return () => {
if (workerRef.current) {
workerRef.current.terminate();
workerRef.current = null;
}
};
}, [workerUrl]);


useWorker tears down and re-creates the worker whenever a new URL object instance is passed (even if it points to the same href), because the effect depends on the URL object identity. This is brittle and can cause unnecessary churn in any caller that constructs the URL inline.

Suggestion

Reduce sensitivity to object identity by depending on the string form of the URL (or change the hook signature to accept a string):

useEffect(() => {
  const worker = new Worker(workerUrl, { type: 'module' });
  ...
  return () => { workerRef.current?.terminate(); workerRef.current = null; };
  // Depend on the href string, not the URL object identity
}, [workerUrl.toString()]);

This prevents unnecessary terminations when callers re-create equivalent URL instances. Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.

Comment on lines 16 to 38
useEffect(() => {
// Create a new worker instance
const worker = new Worker(workerUrl, { type: 'module' });
workerRef.current = worker;

worker.onmessage = (event: MessageEvent<T>) => {
setData(event.data);
setIsLoading(false);
};

worker.onerror = (err: ErrorEvent) => {
setError(err.message);
setIsLoading(false);
};

// Cleanup worker on component unmount
return () => {
if (workerRef.current) {
workerRef.current.terminate();
workerRef.current = null;
}
};
}, [workerUrl]);


useWorker provides a nice abstraction but doesn’t support correlating responses or ignoring out-of-order results, which is important for UI correctness when requests are fired quickly. Adding an optional requestId passthrough lets consumers discard stale results.

Suggestion

Augment the hook to allow structured messages with a requestId, and type onmessage accordingly:

type WorkerEnvelope<T> = { requestId?: number; results: T }

// onmessage
worker.onmessage = (event: MessageEvent<WorkerEnvelope<T>>) => {
  setData(event.data as any)
  setIsLoading(false)
}

// postMessage stays generic; caller provides requestId

I can wire this up and update the map/worker to use it. Reply with "@CharlieHelps yes please" to have me implement it.

Comment on lines 117 to 124
const handleNewConversation = async () => {
  setMessages([])
  clearAttachment()
  await clearChat()
  const formData = new FormData()
  formData.append('newChat', 'true')
  const responseMessage = await submit(formData)
  setMessages(currentMessages => [...currentMessages, responseMessage as any])
}


Starting a new conversation currently adds a server response to the local UI state even though the server returns no UI content for a pure newChat request. This yields a blank message item in the timeline after clearing, which is confusing and unnecessary.

Suggestion

Avoid appending the server response for a newChat-only submit. You can still notify the server to create the conversation without pushing a UI item:

const handleNewConversation = async () => {
  setMessages([]);
  clearAttachment();
  const formData = new FormData();
  formData.append('newChat', 'true');
  await submit(formData); // no UI append here
};

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.

@charliecreates charliecreates bot removed the request for review from CharlieHelps October 20, 2025 12:29

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
lib/agents/researcher.tsx (1)

36-38: Fix typographical errors in system prompt.

The system prompt contains user-facing text quality issues:

  • Line 36: "proconfigured" → "preconfigured", "queires" → "queries"
  • Line 38: The example response text is missing a closing quote and has grammatical issues.

Apply this diff to fix the typos:

-There are also some proconfigured example queires. 
+There are also some preconfigured example queries.
     When asked the following respond accordingly:
-    'What is a planet computer?' answer with the following: '"A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet'
+    'What is a planet computer?' answer with the following: 'A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet.'
components/map/mapbox-map.tsx (1)

533-541: Timer type is Node’s, but this file runs in the browser.

Use a DOM-safe type to avoid TS friction across environments.

-  const longPressTimerRef = useRef<NodeJS.Timeout | null>(null);
+  const longPressTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
app/actions.tsx (3)

670-673: Type mismatch: UIState.isCollapsed expects a StreamableValue, not a boolean

Returning isCollapsed: false violates the declared UIState type and may break consumers.

Apply this diff:

-              return {
-                id,
-                component: <MapQueryHandler toolOutput={toolOutput} />,
-                isCollapsed: false
-              }
+              const collapsed = createStreamableValue<boolean>()
+              collapsed.done(false)
+              return {
+                id,
+                component: <MapQueryHandler toolOutput={toolOutput} />,
+                isCollapsed: collapsed.value
+              }

723-724: Ensure undefined entries are filtered out of UIState

Some branches return nothing (e.g., the previous 'assistant' inquiry bug). Filter null and undefined robustly to avoid runtime issues.

Apply this diff:

-    .filter(message => message !== null) as UIState
+    .filter((m): m is UIState[number] => m != null) as UIState

145-148: Remove or rename groupeId (typo) if grouping is not used after unique IDs

After assigning unique message ids, groupeId becomes unnecessary. Remove it, or rename to groupId and carry it as optional metadata if you actually group messages downstream.

Apply this diff to remove it:

-  const groupeId = nanoid()
@@
-    const groupeId = nanoid()

Also applies to: 182-184

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 74ee447 and eb3c708.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • app/actions.tsx (13 hunks)
  • app/page.tsx (1 hunks)
  • app/search/[id]/page.tsx (2 hunks)
  • components/chat-panel.tsx (5 hunks)
  • components/map-toggle.tsx (1 hunks)
  • components/map/mapbox-map.tsx (3 hunks)
  • hooks/useWorker.ts (1 hunks)
  • lib/agents/researcher.tsx (1 hunks)
  • mapbox_mcp/hooks.ts (1 hunks)
  • workers/turf.worker.ts (1 hunks)
🔇 Additional comments (7)
components/map-toggle.tsx (1)

32-32: Good test instrumentation addition.

Adding the data-testid attribute is a standard and recommended practice for testing React components. The descriptive name "drawing-mode-button" clearly identifies the element and will facilitate selector targeting in test scenarios without brittle class/styling dependencies.

lib/agents/researcher.tsx (1)

14-21: LGTM: Function signature refactored for flexibility.

The addition of dynamicSystemPrompt and removal of the mcp parameter align with the PR's goal to simplify tool execution. The signature change is a breaking change, but that's expected in this architectural refactor.

components/map/mapbox-map.tsx (1)

81-141: Worker results mapping looks solid; a couple of polish nits.

  • Labels: consider CSS classes over inline styles for themeability.
  • Data: setMapData is called once; good. Ensure id types match worker output (string) to prevent .find mismatches.
  • Optional: if polygons are concave/slender, switch worker to centerOfMass for better label placement, and distance-based midpoint for lines (PR to worker in this same branch). Based on learnings.
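
The label-placement idea in that last nit can be sketched without turf at all. This is illustrative planar math only — real code would likely use turf.centerOfMass for polygons and turf.along for lines, which handle geodesic distance and concave shapes more robustly:

```typescript
// Sketch: vertex-average centroid for a polygon ring, and a distance-based
// midpoint for a line (walks to half the total length instead of picking
// the middle vertex). Planar math for illustration only.
type Pt = [number, number];

function ringCentroid(ring: Pt[]): Pt {
  // simple average of vertices; centerOfMass is more robust for concave shapes
  let sx = 0, sy = 0;
  for (const [x, y] of ring) { sx += x; sy += y; }
  return [sx / ring.length, sy / ring.length];
}

function lineMidpoint(line: Pt[]): Pt {
  const seg = (a: Pt, b: Pt) => Math.hypot(b[0] - a[0], b[1] - a[1]);
  const total = line.slice(1).reduce((d, p, i) => d + seg(line[i], p), 0);
  let remaining = total / 2;
  for (let i = 1; i < line.length; i++) {
    const d = seg(line[i - 1], line[i]);
    if (remaining <= d) {
      const t = d === 0 ? 0 : remaining / d; // interpolate within this segment
      return [
        line[i - 1][0] + t * (line[i][0] - line[i - 1][0]),
        line[i - 1][1] + t * (line[i][1] - line[i - 1][1]),
      ];
    }
    remaining -= d;
  }
  return line[line.length - 1];
}
```

The distance-based midpoint matters for unevenly sampled lines, where the middle vertex can sit far from the geometric center.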
app/actions.tsx (1)

90-98: Confirm: resolution_search user message is intentionally hidden in UI

User message for resolution_search has no type, and getUIStateFromAIState drops messages without type. If you intend to show the user's map upload context, add a type and render branch. Otherwise, confirm the current hidden behavior.

Also applies to: 112-121

components/chat-panel.tsx (3)

72-75: newChat plumbing looks correct

Wiring newChat through submit is consistent with the server-side handling. No issues.

Confirm that you never send newChat=true alongside a non-empty input unless that’s intended to seed the new conversation with a first message.

Also applies to: 106-109


26-26: Action import/use looks good

Using only submit from useActions matches the server’s API.


143-147: UI copy and handler change acknowledged

"New Conversation" label and linking to the dedicated handler improves clarity.

app/actions.tsx Outdated
Comment on lines 161 to 166
const content = JSON.stringify(Object.fromEntries(formData!))
const type = 'input' as const

const userMessage: AIMessage = { id: nanoid(), role: 'user', content, type }
lastConversation.messages.push(userMessage)
aiState.update({


⚠️ Potential issue | 🟡 Minor

Avoid JSON.stringify(FormData); Files and non-text fields will break serialization

JSON.stringify(Object.fromEntries(formData)) will fail or produce meaningless output for File entries and non-text values. Serialize only the user input that you need.

Apply this diff:

-    const content = JSON.stringify(Object.fromEntries(formData!))
+    const content = JSON.stringify({ input: userInput })

Alternatively, reuse the same messageParts/content construction used below for consistency.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In app/actions.tsx around lines 161 to 166, replace the unsafe
JSON.stringify(Object.fromEntries(formData)) usage because it will break on File
and non-text FormData entries; instead build the message content the same way
used later (reuse messageParts/content construction) or explicitly serialize
only text fields (e.g., iterate formData entries, skip or convert File objects
to filename/size/type metadata, and collect strings into a plain object or
concatenated message string), then set userMessage.content to that safe string;
ensure you do not attempt to JSON.stringify File objects and keep the resulting
content consistent with the rest of the messageParts format.

Comment on lines 185 to 203
lastConversation.messages.push({
id: groupeId,
role: 'assistant',
content: definition,
type: 'response'
} as AIMessage)
lastConversation.messages.push({
id: groupeId,
role: 'assistant',
content: JSON.stringify(relatedQueries),
type: 'related'
} as AIMessage)
lastConversation.messages.push({
id: groupeId,
role: 'assistant',
content: 'followup',
type: 'followup'
} as AIMessage)
aiState.done({


⚠️ Potential issue | 🔴 Critical

Duplicate message IDs cause React key collisions; generate unique IDs per message

Multiple assistant/tool messages reuse the same id (groupeId). This will produce duplicate keys in UIState and unstable rendering. Assign unique ids for each message; if grouping is needed, carry a separate groupId field.

Apply this diff:

@@
-    lastConversation.messages.push({
-      id: groupeId,
+    lastConversation.messages.push({
+      id: nanoid(),
       role: 'assistant',
       content: definition,
       type: 'response'
     } as AIMessage)
-    lastConversation.messages.push({
-      id: groupeId,
+    lastConversation.messages.push({
+      id: nanoid(),
       role: 'assistant',
       content: JSON.stringify(relatedQueries),
       type: 'related'
     } as AIMessage)
-    lastConversation.messages.push({
-      id: groupeId,
+    lastConversation.messages.push({
+      id: nanoid(),
       role: 'assistant',
       content: 'followup',
       type: 'followup'
     } as AIMessage)
@@
-        toolOutputs.forEach(output => {
-          lastConversation.messages.push({
-            id: groupeId,
+        toolOutputs.forEach(output => {
+          lastConversation.messages.push({
+            id: nanoid(),
             role: 'tool',
             content: JSON.stringify(output.result),
             name: output.toolName,
             type: 'tool'
           } as AIMessage)
         })
@@
-      lastConversation.messages.push({
-        id: groupeId,
+      lastConversation.messages.push({
+        id: nanoid(),
         role: 'assistant',
         content: answer,
         type: 'response'
       } as AIMessage)
-      lastConversation.messages.push({
-        id: groupeId,
+      lastConversation.messages.push({
+        id: nanoid(),
         role: 'assistant',
         content: JSON.stringify(relatedQueries),
         type: 'related'
       } as AIMessage)
-      lastConversation.messages.push({
-        id: groupeId,
+      lastConversation.messages.push({
+        id: nanoid(),
         role: 'assistant',
         content: 'followup',
         type: 'followup'
       } as AIMessage)

Optionally, remove the now-unused groupeId declarations.

Also applies to: 349-356, 399-416

🤖 Prompt for AI Agents
In app/actions.tsx around lines 185-203 (and similarly at 349-356 and 399-416),
multiple messages are being pushed with the same id (groupeId) which causes
React key collisions; change each push to assign a unique id (e.g., generate a
UUID or call a uniqueId helper) per message, keep the original groupeId value in
a new groupId field on the message object if grouping is required, and remove or
stop using the old groupeId variable where it's no longer needed.

Comment on lines 311 to 317
lastConversation.messages.push({
id: nanoid(),
role: 'assistant',
content: `inquiry: ${inquiry?.question}`,
type: 'inquiry'
} as AIMessage)
aiState.done({


⚠️ Potential issue | 🔴 Critical

Wrong role for inquiry message; it won't render and may leak undefined into UI list

getUIStateFromAIState renders type 'inquiry' only for role 'user'. Emitting it as 'assistant' yields no element and leaves an undefined entry.

Apply this diff:

-      lastConversation.messages.push({
-        id: nanoid(),
-        role: 'assistant',
-        content: `inquiry: ${inquiry?.question}`,
-        type: 'inquiry'
-      } as AIMessage)
+      lastConversation.messages.push({
+        id: nanoid(),
+        role: 'user',
+        content: inquiry?.question ?? '',
+        type: 'inquiry'
+      } as AIMessage)

Also consider removing the literal "inquiry:" prefix; the UI component already conveys context.

🤖 Prompt for AI Agents
In app/actions.tsx around lines 311 to 317, the inquiry message is being pushed
with role:'assistant' which prevents it from rendering (getUIStateFromAIState
only treats type 'inquiry' when role === 'user') and may insert an undefined UI
entry; change the message role to 'user', remove the literal "inquiry:" prefix
from the content so it is just the inquiry text (e.g., inquiry?.question), keep
type:'inquiry' and id generation as-is, and ensure the object is correctly typed
as AIMessage.

app/actions.tsx Outdated
Comment on lines 365 to 377
const currentAIState = aiState.get()
const lastConversation =
currentAIState.conversations[currentAIState.conversations.length - 1]
const modifiedMessages = lastConversation.messages.map(msg =>
msg.role === 'tool'
? {
...msg,
role: 'assistant',
content: JSON.stringify(msg.content),
type: 'tool'
}
: msg
) as CoreMessage[]


⚠️ Potential issue | 🔴 Critical

Writer fallback uses the wrong conversation and double-encodes content

  • Re-fetching lastConversation from aiState.get() can target a different conversation if a new chat started meanwhile.
  • JSON.stringify(msg.content) double-encodes strings from tool messages.

Use the captured lastConversation and avoid re-stringifying strings.

Apply this diff:

-      const currentAIState = aiState.get()
-      const lastConversation =
-        currentAIState.conversations[currentAIState.conversations.length - 1]
-      const modifiedMessages = lastConversation.messages.map(msg =>
-        msg.role === 'tool'
-          ? {
-              ...msg,
-              role: 'assistant',
-              content: JSON.stringify(msg.content),
-              type: 'tool'
-            }
-          : msg
-      ) as CoreMessage[]
+      const modifiedMessages = lastConversation.messages.map(msg =>
+        msg.role === 'tool'
+          ? {
+              ...msg,
+              role: 'assistant',
+              content:
+                typeof msg.content === 'string'
+                  ? msg.content
+                  : JSON.stringify(msg.content),
+              type: 'tool'
+            }
+          : (msg as any)
+      ) as CoreMessage[]

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In app/actions.tsx around lines 365 to 377, the writer fallback is re-reading
aiState.get() and may pick a different conversation and is JSON.stringify-ing
tool message content even when it's already a string; change the code to use the
previously captured lastConversation (do not re-fetch from aiState.get()) and
when mapping tool messages set content to msg.content if it's a string,
otherwise JSON.stringify(msg.content), preserving role/type adjustments; this
avoids targeting the wrong conversation and prevents double-encoding of string
content.

Comment on lines +495 to 555
// Find the conversation that was updated and save it.
for (const conversation of state.conversations) {
  if (conversation.messages.some(e => e.type === 'response')) {
    const { chatId, messages } = conversation
    const createdAt = new Date()
    const path = `/search/${chatId}`

    let title = 'Untitled Chat'
    if (messages.length > 0) {
      const firstMessageContent = messages[0].content
      if (typeof firstMessageContent === 'string') {
        try {
          const parsedContent = JSON.parse(firstMessageContent)
          title = parsedContent.input?.substring(0, 100) || 'Untitled Chat'
        } catch (e) {
          title = firstMessageContent.substring(0, 100)
        }
      } else if (Array.isArray(firstMessageContent)) {
        const textPart = (
          firstMessageContent as { type: string; text?: string }[]
        ).find(p => p.type === 'text')
        title =
          textPart && textPart.text
            ? textPart.text.substring(0, 100)
            : 'Image Message'
      }
    }

    const updatedMessages: AIMessage[] = [
      ...messages,
      {
        id: nanoid(),
        role: 'assistant',
        content: `end`,
        type: 'end'
      }
    ]

    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const actualUserId = await getCurrentUserIdOnServer()

    if (!actualUserId) {
      console.error(
        'onSetAIState: User not authenticated. Chat not saved.'
      )
      continue // Continue to the next conversation
    }

    const chat: Chat = {
      id: chatId,
      createdAt,
      userId: actualUserId,
      path,
      title,
      messages: updatedMessages
    }
    await saveChat(chat, actualUserId)
  }
}
Copy link
Contributor


⚠️ Potential issue | 🟠 Major

Chat persistence is not idempotent; 'end' is appended repeatedly and chats may be re-saved

onSetAIState re-saves any conversation containing a 'response' and appends an 'end' message each time. This can duplicate 'end' and cause repeated writes. Guard for already-finalized conversations.

Apply this diff:

-    for (const conversation of state.conversations) {
-      if (conversation.messages.some(e => e.type === 'response')) {
+    for (const conversation of state.conversations) {
+      const lastMessage = conversation.messages[conversation.messages.length - 1]
+      if (lastMessage?.type === 'end') continue // skip already-finalized conversations
+      if (conversation.messages.some(e => e.type === 'response')) {
        const { chatId, messages } = conversation
@@
-        const updatedMessages: AIMessage[] = [
-          ...messages,
-          {
-            id: nanoid(),
-            role: 'assistant',
-            content: `end`,
-            type: 'end'
-          }
-        ]
+        const updatedMessages: AIMessage[] = messages[messages.length - 1]?.type === 'end'
+          ? messages
+          : [
+              ...messages,
+              {
+                id: nanoid(),
+                role: 'assistant',
+                content: 'end',
+                type: 'end'
+              }
+            ]

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In app/actions.tsx around lines 495 to 555, the code appends an 'end' message
and re-saves any conversation containing a 'response' every time, causing
duplicate 'end' messages and repeated writes; change the logic to first skip
conversations that already contain a terminal/end message (e.g., messages.some(m
=> m.type === 'end') or the last message has type 'end'), only build
updatedMessages and call saveChat when no 'end' is present, and leave existing
chats untouched (i.e., continue to next conversation) to make persistence
idempotent while keeping the existing authentication and save flow.
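
The guard suggested above can also be factored into a small pure helper, which makes the idempotency property easy to unit-test. This is a sketch with a hypothetical `Msg` shape and an injected id factory, not the actual `app/actions.tsx` types:

```typescript
// Hypothetical message shape; mirrors the 'type' field used in app/actions.tsx.
interface Msg {
  id: string;
  role: string;
  content: string;
  type?: string;
}

// Append the terminal 'end' marker at most once, so repeated onSetAIState
// calls cannot grow the array or trigger redundant saves.
function finalizeMessages(messages: Msg[], makeId: () => string): Msg[] {
  if (messages[messages.length - 1]?.type === 'end') return messages;
  return [
    ...messages,
    { id: makeId(), role: 'assistant', content: 'end', type: 'end' }
  ];
}

let n = 0;
const makeId = () => `id-${n++}`;
const once = finalizeMessages([{ id: 'm1', role: 'user', content: 'hi' }], makeId);
const twice = finalizeMessages(once, makeId);
console.log(once.length, twice.length); // 2 2
console.log(once === twice); // true: second call returns the same array
```

Returning the same array on the already-finalized path also lets callers use reference equality to skip the `saveChat` write entirely.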

Comment on lines 99 to 101
if (toolResponses.some(tr => !tr.result)) {
hasError = true;
}
⚠️ Potential issue | 🟠 Major

Refine error detection to avoid false positives.

The current error check !tr.result will incorrectly flag valid tool results that happen to be falsy (e.g., empty strings, 0, false). This could cause legitimate tool responses to be treated as errors.

Apply this diff to check specifically for missing results:

- if (toolResponses.some(tr => !tr.result)) {
+ if (toolResponses.some(tr => tr.result === undefined || tr.result === null)) {
    hasError = true;
  }

Alternatively, check for property existence:

- if (toolResponses.some(tr => !tr.result)) {
+ if (toolResponses.some(tr => !('result' in tr))) {
    hasError = true;
  }
🤖 Prompt for AI Agents
In lib/agents/researcher.tsx around lines 99 to 101, the current check uses
`!tr.result` which treats valid falsy values (empty string, 0, false) as errors;
replace it with an explicit missing-value check such as `tr.result === undefined
|| tr.result === null` (or use `!('result' in tr)` if you want to detect absence
of the property) so only truly absent results are flagged as errors; update the
conditional accordingly and keep hasError assignment the same.
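
The falsy-value pitfall is easy to demonstrate in isolation; `ToolResponse` below is a hypothetical stand-in for the SDK's tool-result shape:

```typescript
// Hypothetical tool-response shape; 'result' may legitimately be '', 0, or false.
interface ToolResponse {
  toolName: string;
  result?: unknown;
}

// Falsy check: flags valid results like '' or 0 as errors.
const isMissingFalsy = (tr: ToolResponse) => !tr.result;

// Explicit check: only null/undefined count as missing.
const isMissingExplicit = (tr: ToolResponse) =>
  tr.result === undefined || tr.result === null;

const responses: ToolResponse[] = [
  { toolName: 'search', result: '' }, // valid empty string
  { toolName: 'count', result: 0 },   // valid zero
  { toolName: 'failed' }              // genuinely missing
];

console.log(responses.filter(isMissingFalsy).length);    // 3: two false positives
console.log(responses.filter(isMissingExplicit).length); // 1: only the real miss
```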

// Add other properties as needed based on your usage
};
- import { getModel } from 'QCX/lib/utils';
+ import { getModel } from '@/lib/utils';
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Path alias change looks fine; verify resolver config and ai tools shape.

  • Ensure @ alias is configured everywhere (tsconfig.json/paths, next/jest/storybook) so this doesn’t break local/dev/test builds.
  • Confirm generateText’s tools option matches the SDK’s expected shape; passing mcp.tools directly may not match the “name -> function” map required by some ai SDK versions. Add a quick type guard or adapter if needed.
  • Optional: avoid parsing JSON via markdown fences from mcp.callTool(...) results; prefer a structured payload or search for a json part in result.content to reduce brittleness.

🏁 Script executed:

#!/bin/bash
# Verify alias and tool usage

echo "== tsconfig alias =="
fd -a 'tsconfig*.json' | xargs -I{} sh -c 'echo "--- {} ---"; cat {}' | rg -n '"paths"\s*:\s*\{[^}]*"@/\*":'

echo "== generateText tools shape =="
rg -n -C2 "generateText\(" -g "!**/node_modules/**"

echo "== MCP tool call parsing =="
rg -n -C2 "mcp\.callTool\(" -g "!**/node_modules/**"

Length of output: 1052



---

🏁 Script executed:

```shell
#!/bin/bash
# Deeper inspection of config and code

echo "== All tsconfig files =="
fd -a 'tsconfig' -e json

echo -e "\n== tsconfig.json content (if exists) =="
if [ -f tsconfig.json ]; then
  cat tsconfig.json
else
  echo "tsconfig.json not found"
fi

echo -e "\n== toolsRef initialization and usage =="
rg -n -B5 "toolsRef\s*=" -g "!**/node_modules/**" --type ts

echo -e "\n== mcp.tools construction =="
rg -n -B3 -A3 "mcp\.tools" -g "!**/node_modules/**" --type ts
```

Length of output: 2020



---

🏁 Script executed:

```shell
#!/bin/bash
# Get tsconfig and full hooks context

echo "== tsconfig.json (at repo root) =="
cat /home/jailuser/git/tsconfig.json

echo -e "\n== hooks.ts imports (first 30 lines) =="
head -30 /home/jailuser/git/mapbox_mcp/hooks.ts

echo -e "\n== generateText call context (lines 95-110) =="
sed -n '95,110p' /home/jailuser/git/mapbox_mcp/hooks.ts

echo -e "\n== Tool interface/type definition =="
rg -n "interface Tool|type Tool" -g "!**/node_modules/**" --type ts
```

Length of output: 2980



---

🏁 Script executed:

```shell
#!/bin/bash
# Check ai SDK version and result handling

echo "== package.json ai dependency =="
rg -n '"ai"' -g "package.json"

echo -e "\n== Result handling for mcp.callTool (geocode_location call lines 159-178) =="
sed -n '159,178p' /home/jailuser/git/mapbox_mcp/hooks.ts

echo -e "\n== Result handling for other tool calls (calculate_distance 177-196, search_nearby 199-218) =="
sed -n '177,196p' /home/jailuser/git/mapbox_mcp/hooks.ts
sed -n '199,218p' /home/jailuser/git/mapbox_mcp/hooks.ts

echo -e "\n== Any JSON/markdown parsing in response handling =="
rg -n "JSON\.parse|markdown|parse.*result|result.*parse" -g "!**/node_modules/**" --type ts
```

Length of output: 4235



---

🌐 Web query:

```web
ai SDK v4.3.19 generateText tools parameter format array or object
```

💡 Result:

Use an object (a ToolSet) — keys are tool names and values are tool definitions. See the AI SDK docs showing the tools parameter is an object mapping tool names to Tool definitions (example usage included). [1][2]

Sources:
[1] AI SDK — Tool Calling / example showing tools: { weather: tool({...}) }.
[2] AI SDK — Foundations: Tools (tools parameter described as an object).


Transform mcp.tools array to object map for generateText; fix brittle markdown parsing.

  • The @/ alias is properly configured in tsconfig.json—no issue there.
  • Critical: ai SDK v4.3.19's generateText() expects tools as an object map { toolName: toolDef }, not an array. Line 101 passes toolsRef.current (which stores mcp.tools array directly). Transform it to { [tool.name]: tool } object before passing to generateText, or adapt at line 71 when assigning.
  • Brittle parsing at lines 169, 188, 207: Extracting JSON from markdown fences via regex match(/```json\n(.*?)\n```/s) is fragile. If MCP server changes format or error handling, parsing silently fails to {}. Check if structured payload is available in result.content[0] or search result.content for a json part instead.
🤖 Prompt for AI Agents
In mapbox_mcp/hooks.ts around lines 11 and especially lines 71, 101, 169, 188
and 207: the current code passes mcp.tools as an array to generateText (line
~101) but ai SDK v4.3.19 requires tools as an object map { toolName: toolDef };
convert the array into a map (e.g., reduce mcp.tools to { [tool.name]: tool })
either at assignment time around line 71 or immediately before calling
generateText at line 101 so generateText receives the correct shape. Also
replace the brittle regex-based JSON extraction at ~169/188/207 with a robust
check: first inspect result.content or result.content[0] for a structured JSON
part (or a named/json field) and use that if present, otherwise scan the content
parts for a json section; add explicit fallback handling and logging if parsing
fails instead of silently returning {}.
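
A minimal sketch of both fixes, assuming a simplified `McpTool` shape rather than the real MCP types: reduce the tool array into the name-keyed map `generateText` expects, and extract JSON with an explicit fallback instead of silently returning `{}`:

```typescript
// Simplified stand-in for whatever mcp.tools holds; only 'name' matters here.
interface McpTool {
  name: string;
  description?: string;
}

// Convert the tool array into the { toolName: toolDef } map the SDK wants.
function toToolMap<T extends McpTool>(tools: T[]): Record<string, T> {
  return tools.reduce<Record<string, T>>((map, tool) => {
    map[tool.name] = tool;
    return map;
  }, {});
}

// Tolerant JSON extraction: prefer raw JSON, fall back to a fenced block,
// and surface failure loudly instead of silently returning {}.
function extractJson(text: string): unknown {
  try {
    return JSON.parse(text);
  } catch {
    const fenced = text.match(/`{3}json\s*([\s\S]*?)`{3}/);
    if (fenced) {
      try {
        return JSON.parse(fenced[1]);
      } catch {
        /* fall through to the error below */
      }
    }
    throw new Error('no parseable JSON in tool result');
  }
}

const map = toToolMap([{ name: 'geocode_location' }, { name: 'calculate_distance' }]);
console.log(Object.keys(map)); // ['geocode_location', 'calculate_distance']
```

Whether the thrown error should abort the turn or just mark `hasError` depends on how the caller treats partial tool output.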

Comment on lines 4 to 35
self.onmessage = (event: MessageEvent<{ features: any[] }>) => {
const { features } = event.data;

const results = features.map(feature => {
const id = feature.id as string;
let calculation = null;

if (feature.geometry.type === 'Polygon') {
const area = turf.area(feature);
const centroid = turf.centroid(feature);
calculation = {
type: 'Polygon',
area,
center: centroid.geometry.coordinates
};
} else if (feature.geometry.type === 'LineString') {
const length = turf.length(feature, { units: 'kilometers' }) * 1000; // in meters
const line = feature.geometry.coordinates;
const midIndex = Math.floor(line.length / 2) - 1;
const midpoint = midIndex >= 0 ? line[midIndex] : line[0];
calculation = {
type: 'LineString',
length,
center: midpoint
};
}

return { id, calculation };
});

self.postMessage(results);
};
🧹 Nitpick | 🔵 Trivial

Isolate worker errors per feature to avoid crashing the whole batch.

Wrap per-feature calc in try/catch and include an error field; still return other results.

-  const results = features.map(feature => {
+  const results = features.map((feature, idx) => {
     const id = feature.id as string;
-    let calculation = null;
+    let calculation = null;
+    let error: string | undefined;
 
-    if (feature.geometry.type === 'Polygon') {
-      ...
-    } else if (feature.geometry.type === 'LineString') {
-      ...
-    }
+    try {
+      if (feature.geometry?.type === 'Polygon') {
+        // polygon branch (see other diff)
+      } else if (feature.geometry?.type === 'LineString') {
+        // line branch (see other diff)
+      }
+    } catch (e: any) {
+      error = e?.message || 'calculation failed';
+    }
 
-    return { id, calculation };
+    return { id: id ?? String(idx), calculation, error };
   });

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In workers/turf.worker.ts around lines 4 to 35, the per-feature geometry
calculations are unprotected so a thrown error from turf will abort the whole
worker; wrap the mapping/calculation for each feature in a try/catch so each
feature is processed independently, on error set calculation to null (or partial
result) and add an error field (string) describing the failure (e.g.,
error.message), then continue returning { id, calculation, error } for that
feature so a single bad feature won't crash the batch and the caller can inspect
which items failed.
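
The per-feature isolation pattern, sketched with a stand-in `compute` function instead of turf so it runs anywhere:

```typescript
// One bad feature yields an error entry; the rest still produce results.
interface FeatureLike {
  id: string;
  value: number;
}

function computeAll(
  features: FeatureLike[],
  compute: (f: FeatureLike) => number
): { id: string; calculation: number | null; error?: string }[] {
  return features.map(feature => {
    try {
      return { id: feature.id, calculation: compute(feature) };
    } catch (e) {
      const error = e instanceof Error ? e.message : 'calculation failed';
      return { id: feature.id, calculation: null, error };
    }
  });
}

const results = computeAll(
  [{ id: 'a', value: 4 }, { id: 'b', value: -1 }, { id: 'c', value: 9 }],
  f => {
    if (f.value < 0) throw new Error('negative input');
    return Math.sqrt(f.value);
  }
);
console.log(results.map(r => r.calculation)); // [2, null, 3]
```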

Comment on lines 11 to 19
if (feature.geometry.type === 'Polygon') {
const area = turf.area(feature);
const centroid = turf.centroid(feature);
calculation = {
type: 'Polygon',
area,
center: centroid.geometry.coordinates
};
} else if (feature.geometry.type === 'LineString') {
🧹 Nitpick | 🔵 Trivial

Prefer centerOfMass over centroid for label placement.

centroid can fall outside concave polygons; centerOfMass is typically better for placing labels.

       const area = turf.area(feature);
-      const centroid = turf.centroid(feature);
+      const centroid = turf.centerOfMass(feature);
       calculation = {
         type: 'Polygon',
         area,
         center: centroid.geometry.coordinates
       };

Based on learnings.

🤖 Prompt for AI Agents
In workers/turf.worker.ts around lines 11 to 19, the code uses turf.centroid for
polygon label placement which can fall outside concave polygons; replace
turf.centroid with turf.centerOfMass (import it if not already) and use
centerOfMass(feature).geometry.coordinates for the center value so labels are
placed at the polygon’s mass center; ensure types/locals still match and run
tests/lint after updating the import.

Comment on lines 20 to 24
const length = turf.length(feature, { units: 'kilometers' }) * 1000; // in meters
const line = feature.geometry.coordinates;
const midIndex = Math.floor(line.length / 2) - 1;
const midpoint = midIndex >= 0 ? line[midIndex] : line[0];
calculation = {
🛠️ Refactor suggestion | 🟠 Major

Midpoint for lines should be distance-based, not index-based.

Index midpoints are inaccurate for uneven vertex spacing. Use turf.along at half the length.

-      const length = turf.length(feature, { units: 'kilometers' }) * 1000; // in meters
-      const line = feature.geometry.coordinates;
-      const midIndex = Math.floor(line.length / 2) - 1;
-      const midpoint = midIndex >= 0 ? line[midIndex] : line[0];
+      const lengthKm = turf.length(feature, { units: 'kilometers' });
+      const length = lengthKm * 1000; // meters
+      const midpoint = turf.along(feature, lengthKm / 2, { units: 'kilometers' }).geometry.coordinates;
       calculation = {
         type: 'LineString',
         length,
         center: midpoint
       };

Based on learnings.

🤖 Prompt for AI Agents
In workers/turf.worker.ts around lines 20 to 24, the code computes a midpoint by
taking the middle vertex index which is incorrect for unevenly spaced vertices;
replace the index-based midpoint with a distance-based midpoint by using
turf.along: compute the total line length with turf.length (specifying units),
divide by two to get half the distance, construct a turf LineString from
feature.geometry.coordinates if needed, then call turf.along(lineString,
halfDistance, { units: 'kilometers' }) (or use consistent units you already
compute) and use the returned Point coordinates as the midpoint in the
calculation object.
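
For intuition, here is a planar sketch of the distance-based midpoint idea: walk cumulative segment lengths and interpolate at half the total. Real map code should keep using `turf.length`/`turf.along`, which account for geodesic distance; this version assumes flat coordinates purely to show why index-based midpoints fail on uneven vertex spacing:

```typescript
type Pt = [number, number];

// Interpolate the point at half the line's cumulative length (planar).
function midpointByDistance(line: Pt[]): Pt {
  const seg = (a: Pt, b: Pt) => Math.hypot(b[0] - a[0], b[1] - a[1]);
  const lengths = line.slice(1).map((p, i) => seg(line[i], p));
  const total = lengths.reduce((s, l) => s + l, 0);
  let remaining = total / 2;
  for (let i = 0; i < lengths.length; i++) {
    if (remaining <= lengths[i]) {
      const t = lengths[i] === 0 ? 0 : remaining / lengths[i];
      const [ax, ay] = line[i];
      const [bx, by] = line[i + 1];
      return [ax + t * (bx - ax), ay + t * (by - ay)];
    }
    remaining -= lengths[i];
  }
  return line[line.length - 1];
}

// Uneven spacing: the middle vertex is [1, 0], far from the true midpoint.
console.log(midpointByDistance([[0, 0], [1, 0], [10, 0]])); // [5, 0]
```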

This commit addresses a critical runtime `TypeError` caused by incorrect state access in `components/chat.tsx` after the `AIState` was refactored for concurrency.

- The `useEffect` hook in `chat.tsx` has been updated to safely access the messages array from the last conversation in `aiState.conversations`, preventing the "Cannot read properties of undefined (reading 'length')" error.
- Added guard clauses in `app/actions.tsx` to prevent potential runtime errors if the `conversations` array is ever empty when the `submit` action is called.

Additionally, this commit includes the substantial architectural changes from the previous submission to improve system-wide concurrency and performance:

- **Concurrent Chat Conversations:** The core state is now an array of conversations.
- **Client-Side Web Worker:** A reusable `useWorker` hook offloads expensive map calculations to a background thread.
- **Parallel Server-Side Tool Execution:** The `researcher` agent now leverages the AI SDK's built-in parallel tool execution.
…improvements

This commit resolves a critical production `TypeError` and introduces a wide range of architectural improvements to enhance the application's performance, concurrency, and robustness.

**Critical Bug Fix:**
- Implemented a state migration helper (`ensureConversations`) in `app/actions.tsx` to gracefully handle older state structures, preventing crashes for existing users.

**Concurrency & Performance Enhancements:**
- **Concurrent Chat:** Refactored the core AI state to support multiple, simultaneous chat conversations.
- **Client-Side Worker:** Offloaded expensive `turf.js` map calculations to a Web Worker, ensuring a responsive UI during drawing operations. This is implemented via a new, reusable, and performant `useWorker` hook.
- **Parallel Server-Side Tools:** Refactored the `researcher` agent to correctly leverage the Vercel AI SDK's built-in parallel tool execution, reducing latency for complex queries.

**Code Quality & Robustness Improvements:**
- **Type Safety:** Added explicit TypeScript types and validation for initial state in page components to prevent schema drift.
- **Robustness:** Added guard clauses, improved error handling in the Web Worker, and fixed multiple smaller bugs related to unique message IDs, form serialization, and agent logic.
- **UI Behavior:** Eliminated a UI flicker that occurred when starting a new conversation.
@ngoiyaeric

@coderabbitai the system is not generating any tokens

@coderabbitai

coderabbitai bot commented Oct 20, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Examine the researcher agent for streaming and response handling
echo "=== Checking researcher.tsx streaming and response handling ==="
cat lib/agents/researcher.tsx

echo -e "\n=== Checking how researcher is called from actions.tsx ==="
rg -n "researcher\(" app/actions.tsx -A 10 -B 5

echo -e "\n=== Looking for streamText or AI streaming calls ==="
rg -n "streamText|createStreamableValue" app/actions.tsx -C 3

echo -e "\n=== Checking for error handling that might swallow errors ==="
rg -n "try\s*\{|catch" app/actions.tsx -C 2

Length of output: 10102


@ngoiyaeric: I found the issue! 🎯

The problem is in the researcher agent refactoring. The new code waits for the entire response before streaming any tokens, which means users see nothing during generation.

🔍 Root Cause

In lib/agents/researcher.tsx, the code does this:

uiStream.update(answerSection);  // ❌ Updates UI with empty streamText.value

// Then blocks waiting for the full response:
const [text, toolResults, toolCalls] = await Promise.all([
  result.text,        // ❌ Waits for ENTIRE response
  result.toolResults,
  result.toolCalls,
]);

// Only AFTER everything is done:
streamText.done(fullResponse);  // ❌ Too late - no streaming happened

The UI gets updated with an empty streamText.value, then the code blocks waiting for the complete response. No tokens are streamed incrementally.

🛠️ Fix Required

Replace the blocking await with incremental streaming:

// In lib/agents/researcher.tsx, replace lines after result creation:

uiStream.update(null)
uiStream.update(answerSection);

// Stream tokens incrementally
for await (const delta of result.textStream) {
  fullResponse += delta;
  streamText.update(fullResponse);
}

streamText.done(fullResponse);

// Now await tool results (after text streaming is complete)
const [toolResults, toolCalls] = await Promise.all([
  result.toolResults,
  result.toolCalls,
]);

const toolResponses: ToolResultPart[] = (toolResults || []).map(toolResult => ({
  ...toolResult
}));

if (toolResponses.some(tr => tr.result === undefined || tr.result === null)) {
  hasError = true;
}

messages.push({
  role: 'assistant',
  content: [{ type: 'text', text: fullResponse }, ...(toolCalls || [])]
})

if (toolResponses.length > 0) {
  messages.push({ role: 'tool', content: toolResponses })
}

return { result, fullResponse, hasError, toolResponses }

🎯 What This Fixes

  1. Restores incremental streaming: Tokens appear as they're generated
  2. Updates UI progressively: streamText.update() called for each delta
  3. Maintains tool execution: Tools still execute properly after text streaming
  4. Keeps concurrent tool calls: The original PR goal of parallel tool execution is preserved

🔄 Alternative: If Tools Should Stream Concurrently

If you want tools to execute while text is streaming (for even better performance), you can run them in parallel:

uiStream.update(null)
uiStream.update(answerSection);

// Start streaming text and fetching tool results in parallel
const toolResultsPromise = Promise.all([
  result.toolResults,
  result.toolCalls,
]);

// Stream tokens while tools execute
for await (const delta of result.textStream) {
  fullResponse += delta;
  streamText.update(fullResponse);
}

streamText.done(fullResponse);

// Wait for tools to finish (they may already be done)
const [toolResults, toolCalls] = await toolResultsPromise;

// ... rest of the code
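
The streaming loop can be exercised in isolation with a toy async iterable standing in for `result.textStream`; `streamToUi` and `fakeTextStream` are illustrative names, not part of the SDK:

```typescript
// Toy stand-in for result.textStream: an async iterable of text deltas.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const delta of ['Hel', 'lo, ', 'world']) {
    yield delta;
  }
}

// Mirrors the fix above: push the running total to the UI on every delta,
// the way streamText.update(fullResponse) does in the real code.
async function streamToUi(
  stream: AsyncIterable<string>,
  update: (partial: string) => void
): Promise<string> {
  let fullResponse = '';
  for await (const delta of stream) {
    fullResponse += delta;
    update(fullResponse);
  }
  return fullResponse;
}

const partials: string[] = [];
streamToUi(fakeTextStream(), p => partials.push(p)).then(full => {
  console.log(partials.length, full); // 3 'Hello, world'
});
```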

Would you like me to generate the complete updated researcher.tsx file with the fix?
