
Conversation


@ngoiyaeric ngoiyaeric commented Oct 2, 2025

User description

This commit introduces the necessary infrastructure to integrate an ONNX model hosted on an external server.

Key changes include:

  • A new API route at /api/onnx/route.ts that acts as a proxy to the ONNX model endpoint. It currently returns a simulated response for development purposes.
  • A new button in the ChatPanel component that allows users to send input to the ONNX model.
  • The onnxruntime-node package has been added as a dependency.
  • The UI has been updated to display the model's response in the chat history.

PR Type

Enhancement


Description

  • Add ONNX model integration via API proxy endpoint

  • Implement new chat UI button for ONNX model interaction

  • Add automated verification testing for ONNX functionality

  • Include onnxruntime-node dependency for model support


Diagram Walkthrough

flowchart LR
  A["Chat Input"] --> B["ONNX Button"]
  B --> C["API Proxy /api/onnx"]
  C --> D["Azure VM Endpoint"]
  D --> E["ONNX Model Response"]
  E --> F["Chat Display"]

File Walkthrough

Relevant files
Enhancement
route.ts
New ONNX API proxy endpoint                                                           

app/api/onnx/route.ts

  • Create new API endpoint for ONNX model proxy
  • Implement POST handler with simulated response
  • Add error handling and Azure VM endpoint placeholder
  • Return structured JSON response with prediction data
+34/-0   
chat-panel.tsx
ONNX integration in chat interface                                             

components/chat-panel.tsx

  • Add BrainCircuit icon import for ONNX button
  • Implement handleOnnxClick function for API calls
  • Add new ONNX button with brain circuit icon
  • Update textarea padding to accommodate new button
  • Display ONNX responses in chat with formatted JSON
+87/-13 
Tests
verify_onnx_button.py
Automated ONNX button verification test                                   

jules-scratch/verification/verify_onnx_button.py

  • Create Playwright test script for ONNX functionality
  • Test chat input, ONNX button click, and response display
  • Include screenshot capture for verification
  • Handle loading overlay and error scenarios
+49/-0   
Dependencies
package.json
Add ONNX runtime dependency                                                           

package.json

  • Add onnxruntime-node version 1.23.0 dependency
+1/-0     

Summary by CodeRabbit

  • New Features
    • Settings: geospatial model toggle to enable ONNX-backed processing for the QCX-Terra model.
    • Chat: when enabled, submissions use an ONNX flow and show structured success or error responses.
    • Backend: added a server-side proxy endpoint to forward ONNX requests to the processing backend.
    • Global state & Layout: app-wide geospatial model provider added and wired into the main layout.
    • Reliability: app-wide error boundary with fallback UI.

@charliecreates charliecreates bot requested a review from CharlieHelps October 2, 2025 09:54

vercel bot commented Oct 2, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: qcx · Deployment: Ready · Preview: Ready · Updated (UTC): Oct 3, 2025 2:00pm


coderabbitai bot commented Oct 2, 2025

Walkthrough

Adds a GeospatialModel context/provider and ErrorBoundary, wires a geospatial toggle into settings and chat, branches chat submission to call a new /api/onnx POST proxy (server-side FormData proxy to an Azure ONNX endpoint with env-based mocking and error/reporting handling), and adds an attachment handler to the chat panel ref.
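The env-gated mock/forward branch described in this walkthrough can be sketched as a pure function. This is illustrative only; `resolveOnnxTarget`, `OnnxResult`, and the response shape are assumed names, not the PR's actual code.

```typescript
// Response shape loosely mirroring the mocked payload described in the PR.
interface OnnxResult {
  prediction: string;
  confidence: number;
  input: { query: string };
}

// When no endpoint is configured, return a deterministic mock; otherwise
// signal that the request should be forwarded to the real endpoint.
function resolveOnnxTarget(
  endpoint: string | undefined,
  query: string
): { mode: "mock"; body: OnnxResult } | { mode: "forward"; url: string } {
  if (!endpoint) {
    return {
      mode: "mock",
      body: {
        prediction: "Simulated response: endpoint not configured.",
        confidence: 0.98,
        input: { query },
      },
    };
  }
  return { mode: "forward", url: endpoint };
}
```

Keeping this decision in a pure function makes the route handler a thin wrapper: read `process.env`, call the resolver, then either return the mock JSON or perform the server-side fetch.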

Changes

Cohort / File(s) Change Summary
ONNX API proxy
app/api/onnx/route.ts
New POST handler export async function POST(request: Request) that parses incoming FormData (requires a query string or file), reads AZURE_ONNX_ENDPOINT env, returns a mocked structured response if unset, or forwards the FormData via server-side fetch (multipart/form-data) to the Azure endpoint, validates non-OK responses (throws with status), returns Azure JSON on success, logs in development, and reports errors in production.
Chat panel behavior & ref signature
components/chat-panel.tsx
Branches handleSubmit: when isGeospatialModelEnabled is true, POSTs input to /api/onnx, renders structured ONNX response or an ONNX error block; otherwise preserves existing message/image submission flow. Adds handleAttachmentClick to ChatPanelRef interface.
Geospatial model context & provider
lib/geospatial-model-context.tsx, app/layout.tsx
Adds GeospatialModelContext and useGeospatialModel hook; implements GeospatialModelProvider managing isGeospatialModelEnabled and toggleGeospatialModel. Exports GeospatialModelProvider and wraps the app root with it inside app/layout.tsx (placed between ProfileToggleProvider and ThemeProvider).
Settings UI
components/settings/components/model-selection-form.tsx
Adds a QCX‑Terra specific toggle (Switch + Label) bound to useGeospatialModel (isGeospatialModelEnabled, toggleGeospatialModel) and adjusts layout to accommodate the toggle.
Error boundary component
components/error-boundary.tsx
New class-based ErrorBoundary default export with getDerivedStateFromError, componentDidCatch logging, a fallback UI and a “Try again” reset button.
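The state transitions this boundary implements can be sketched framework-free. The `BoundaryState` shape and function names below are assumptions; React's real contract is the static `getDerivedStateFromError` plus a `setState` call behind the "Try again" button.

```typescript
// Minimal model of an error boundary's state machine.
interface BoundaryState {
  hasError: boolean;
  message: string | null;
}

// Mirrors getDerivedStateFromError: capture the error, flag fallback UI.
function deriveStateFromError(error: Error): BoundaryState {
  return { hasError: true, message: error.message };
}

// Mirrors the "Try again" reset: clear the error and re-render children.
function resetBoundary(): BoundaryState {
  return { hasError: false, message: null };
}
```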

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant Settings
  participant Context as GeospatialModelContext
  participant Chat as ChatPanel
  participant API as /api/onnx
  participant Azure as Azure VM

  User->>Settings: toggle geospatial model
  Settings->>Context: toggleGeospatialModel()
  Note right of Context #DDEBF7: isGeospatialModelEnabled updated

  User->>Chat: submit message
  Chat->>Context: read isGeospatialModelEnabled
  alt geospatial enabled
    Chat->>API: POST FormData (query/file)
    API->>Azure: forward FormData via POST (multipart/form-data) OR return mock if env absent
    Azure-->>API: JSON response (status)
    alt success
      API-->>Chat: JSON payload
      Chat-->>User: render ONNX response block
    else failure
      API-->>Chat: error/status
      Chat-->>User: render ONNX error block
    end
  else geospatial disabled
    Chat-->>User: proceed with existing submission flow (text/image path)
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25–30 minutes

Poem

I nudged the toggle, whiskers keen and bright,
A FormData parcel hopped off through the night.
If AZURE sleeps, I mock what it would say,
Else I ferry bytes where the VM holds sway.
Hooray — two routes now bounce beneath the moonlight. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
Title Check ✅ Passed: the title clearly and concisely summarizes the primary enhancement, indicating the addition of ONNX model integration via an API endpoint and matching the main backend routing and UI modifications.



CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.


qodo-merge-pro bot commented Oct 2, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
External call risks

Description: Placeholder external endpoint string suggests future network calls without authentication,
input validation, or timeout handling, which could lead to SSRF or exfiltration risks when
implemented.
route.ts [7-20]

Referred Code
// The Azure VM API endpoint URL will be placed here.
// For now, we will use a placeholder.
const azureVmEndpoint = 'https://your-azure-vm-api-endpoint.com/terramind.onnx';

// In a real implementation, you would forward the request to the Azure VM.
// For now, we will just simulate a response.
// const response = await fetch(azureVmEndpoint, {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//   },
//   body: JSON.stringify(body),
// });
// const data = await response.json();
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
No custom compliance provided

Follow the guide to enable custom compliance check.

Compliance status legend 🟢 - Fully Compliant
🟡 - Partial Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label


qodo-merge-pro bot commented Oct 2, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
High-level
Unnecessary dependency contradicts proxy architecture

The onnxruntime-node dependency is added but not used, as the implementation is
a proxy to an external model. This creates unnecessary bloat and should be
removed.

Examples:

package.json [66]
    "onnxruntime-node": "^1.23.0",
app/api/onnx/route.ts [1-34]
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  try {
    const body = await request.json();

    // The Azure VM API endpoint URL will be placed here.
    // For now, we will use a placeholder.
    const azureVmEndpoint = 'https://your-azure-vm-api-endpoint.com/terramind.onnx';


 ... (clipped 24 lines)

Solution Walkthrough:

Before:

// package.json
"dependencies": {
  ...
  "onnxruntime-node": "^1.23.0",
  ...
}

// app/api/onnx/route.ts
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  // The code implements a proxy to an external endpoint.
  // It does not import or use 'onnxruntime-node'.
  const azureVmEndpoint = '...';
  const simulatedData = { ... };
  return NextResponse.json(simulatedData);
}

After:

// package.json
"dependencies": {
  ...
  // "onnxruntime-node" is removed as it's not used.
  ...
}

// app/api/onnx/route.ts
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  // No change is needed in the API route, as it correctly
  // implements the proxy pattern without the unused dependency.
  const azureVmEndpoint = '...';
  const simulatedData = { ... };
  return NextResponse.json(simulatedData);
}
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies that the onnxruntime-node dependency is completely unused in the proxy-based architecture, adding unnecessary bloat and architectural confusion.

High
Security
Use environment variables for API endpoints
Suggestion Impact:The commit replaced the hardcoded URL with an environment variable (process.env.AZURE_ONNX_ENDPOINT) and added a runtime check that falls back to a mock response when the variable is unset.

code diff:

+  // 2. Use Environment Variable for Endpoint and Provide Mock Fallback
+  const azureVmEndpoint = process.env.AZURE_ONNX_ENDPOINT;
+
+  if (!azureVmEndpoint) {
+    console.warn('AZURE_ONNX_ENDPOINT is not configured. Falling back to mock response.');
+    const mockResponse = {
+      prediction: "This is a simulated response because the Azure endpoint is not configured.",
+      confidence: 0.98,
+      input: {
+        query: query,
+        fileName: file?.name,
+        fileSize: file?.size,
+      }
     };
+    return NextResponse.json(mockResponse);
+  }

Replace the hardcoded Azure VM endpoint URL with an environment variable and add
a runtime check to ensure it is configured.

app/api/onnx/route.ts [7-9]

-// The Azure VM API endpoint URL will be placed here.
-// For now, we will use a placeholder.
-const azureVmEndpoint = 'https://your-azure-vm-api-endpoint.com/terramind.onnx';
+// The Azure VM API endpoint URL is loaded from environment variables.
+const azureVmEndpoint = process.env.AZURE_VM_ONNX_ENDPOINT;
 
+if (!azureVmEndpoint) {
+  console.error('AZURE_VM_ONNX_ENDPOINT environment variable is not set.');
+  return NextResponse.json({ error: 'Internal Server Error: API endpoint not configured.' }, { status: 500 });
+}
+

[Suggestion processed]

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a hardcoded URL and proposes using an environment variable, which is a crucial security and configuration best practice.

Medium
General
Prevent duplicate API request submissions

Add a loading state to the handleOnnxClick function to disable the ONNX button
during an API call, preventing duplicate submissions.

components/chat-panel.tsx [72-130]

+const [isOnnxLoading, setIsOnnxLoading] = useState(false);
+
 const handleOnnxClick = async () => {
-  if (!input) {
+  if (!input || isOnnxLoading) {
     return;
   }
 
+  setIsOnnxLoading(true);
   const currentInput = input;
 
   setMessages(currentMessages => [
     ...currentMessages,
     {
       id: nanoid(),
       component: <UserMessage content={[{ type: 'text', text: `[ONNX Request]: ${currentInput}` }]} />
     }
   ]);
 
   setInput('');
 
   try {
     const response = await fetch('/api/onnx', {
       method: 'POST',
       headers: {
         'Content-Type': 'application/json',
       },
       body: JSON.stringify({ query: currentInput }),
     });
 
     if (!response.ok) {
       throw new Error('Failed to get response from ONNX API');
     }
 
     const data = await response.json();
 
     setMessages(currentMessages => [
       ...currentMessages,
       {
         id: nanoid(),
         component: (
           <div className="p-4 my-2 border rounded-lg bg-muted">
             <p><strong>ONNX Model Response:</strong></p>
             <pre className="whitespace-pre-wrap">{JSON.stringify(data, null, 2)}</pre>
           </div>
         )
       }
     ]);
   } catch (error) {
     console.error('ONNX API call failed:', error);
     setMessages(currentMessages => [
       ...currentMessages,
       {
         id: nanoid(),
         component: (
           <div className="p-4 my-2 border rounded-lg bg-destructive/20 text-destructive">
             <p><strong>Error:</strong> Failed to get ONNX model response.</p>
           </div>
         )
       }
     ]);
+  } finally {
+    setIsOnnxLoading(false);
   }
 };

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a potential race condition where multiple clicks can trigger duplicate API calls and improves robustness by adding a loading state.

Medium


@charliecreates charliecreates bot left a comment


  • onnxruntime-node is a heavy native dependency that appears unused given the external API proxy approach; it adds installation overhead and potential deployment issues.
  • Keeping it in dependencies risks accidental client/edge imports and incompatible runtime bundling.
  • Remove it now or move to optionalDependencies and guard usage via dynamic import in Node-only server code when local inference is actually implemented.
Additional notes (1)
  • Maintainability | package.json:66-66
    onnxruntime-node ships prebuilt native binaries and can be sensitive to ABI changes. Using a caret range (^1.23.0) risks pulling in breaking changes in minor updates for native modules. Pinning or using a tilde is safer for stability until you’ve tested newer builds.
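For example, pinning with a tilde in package.json (the version shown is the one from this PR; tilde allows only patch updates):

```json
{
  "dependencies": {
    "onnxruntime-node": "~1.23.0"
  }
}
```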
Summary of changes
  • Added onnxruntime-node@^1.23.0 to the dependencies section of package.json.

package.json Outdated
"mapbox-gl": "^3.11.0",
"next": "^15.3.3",
"next-themes": "^0.3.0",
"onnxruntime-node": "^1.23.0",

Adding onnxruntime-node as a hard runtime dependency brings in a large native module that downloads platform-specific binaries during install, which can slow CI/CD and increase deployment sizes. Given your PR description indicates the ONNX model is accessed via an external API proxy (not local inference), this dependency appears unused and unnecessary right now. It can also create compatibility issues for Edge/serverless runtimes if accidentally imported.

If local inference isn’t being used in this PR, remove it. If you plan to use it later, prefer keeping it out of dependencies until that code lands, or move it to optionalDependencies and ensure it’s only dynamically imported in Node-only code paths.

Suggestion

Since the API route proxies to an external ONNX service and there’s no local inference in this diff, remove the dependency for now to avoid native binary install and deployment bloat.

Suggested change in package.json:

  • Remove this line from dependencies:
    "onnxruntime-node": "^1.23.0",

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.

@charliecreates charliecreates bot removed the request for review from CharlieHelps October 2, 2025 09:57

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1b6d0e1 and 384b2b3.

⛔ Files ignored due to path filters (3)
  • dev.log is excluded by !**/*.log
  • jules-scratch/verification/error.png is excluded by !**/*.png
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (4)
  • app/api/onnx/route.ts (1 hunks)
  • components/chat-panel.tsx (4 hunks)
  • jules-scratch/verification/verify_onnx_button.py (1 hunks)
  • package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
components/chat-panel.tsx (1)
components/user-message.tsx (1)
  • UserMessage (15-53)
🔇 Additional comments (7)
jules-scratch/verification/verify_onnx_button.py (4)

1-10: LGTM!

The script setup is appropriate. Browser configuration with headless mode is suitable for automated verification, and the console listener provides useful debugging output.


27-34: LGTM!

The button and response selectors correctly match the implementation in components/chat-panel.tsx. The 10-second timeout is appropriate for the simulated response.


36-39: Verify the screenshot path is correct for your execution context.

The screenshot path jules-scratch/verification/verification.png is relative to the working directory. Ensure the script is executed from the repository root, or adjust the path accordingly.


41-46: LGTM!

Error handling and cleanup are implemented correctly. The finally block ensures the browser is closed even if an exception occurs, and the error screenshot provides useful debugging context.

components/chat-panel.tsx (3)

10-10: LGTM!

The BrainCircuit icon import from lucide-react is appropriate for representing the ONNX model action.


282-282: LGTM!

The increased left padding (pl-28) appropriately accommodates both the attachment and ONNX buttons, preventing text overlap.


258-269: Verify ONNX button disabled state and handler guard

  • Confirm <Button> in components/chat-panel.tsx lacks disabled={!input}; if so, disable when input is empty.
  • Confirm handleOnnxClick checks for empty input before proceeding; if not, add the guard.
  • Confirm mobile omission of ONNX button is intentional; document or adjust as needed.
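The guard asked for in the first two bullets can be captured in a single predicate shared by the button's `disabled` prop and the handler. A sketch, with `canSubmitOnnx` as a hypothetical name:

```typescript
// True only when there is non-whitespace input and no request in flight.
// Used both to disable the button and as an early-return guard in the handler.
function canSubmitOnnx(input: string, isLoading: boolean): boolean {
  return input.trim().length > 0 && !isLoading;
}
```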

Comment on lines 11 to 20
    // In a real implementation, you would forward the request to the Azure VM.
    // For now, we will just simulate a response.
    // const response = await fetch(azureVmEndpoint, {
    //   method: 'POST',
    //   headers: {
    //     'Content-Type': 'application/json',
    //   },
    //   body: JSON.stringify(body),
    // });
    // const data = await response.json();

🛠️ Refactor suggestion | 🟠 Major

Move placeholder implementation to documentation or issue tracker.

Commented-out code is typically discouraged in production codebases. Consider:

  1. Remove the commented code and create a TODO comment referencing a tracking issue for the Azure VM integration.
  2. Move the implementation notes to API documentation or a design doc.

Additionally, when implementing the actual forwarding logic, ensure proper error handling, timeout, and retry logic are included.

Replace with a TODO reference:

-    // The Azure VM API endpoint URL will be placed here.
-    // For now, we will use a placeholder.
-    const azureVmEndpoint = 'https://your-azure-vm-api-endpoint.com/terramind.onnx';
-
-    // In a real implementation, you would forward the request to the Azure VM.
-    // For now, we will just simulate a response.
-    // const response = await fetch(azureVmEndpoint, {
-    //   method: 'POST',
-    //   headers: {
-    //     'Content-Type': 'application/json',
-    //   },
-    //   body: JSON.stringify(body),
-    // });
-    // const data = await response.json();
+    // TODO: Implement Azure VM ONNX model integration
+    // Track in issue #XXX: Forward request to Azure VM endpoint and return actual inference results
+    // Implementation notes:
+    // - Add proper error handling and timeouts for external API calls
+    // - Consider retry logic for transient failures
+    // - Validate response from Azure VM before returning to client
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

    // In a real implementation, you would forward the request to the Azure VM.
    // For now, we will just simulate a response.
    // const response = await fetch(azureVmEndpoint, {
    //   method: 'POST',
    //   headers: {
    //     'Content-Type': 'application/json',
    //   },
    //   body: JSON.stringify(body),
    // });
    // const data = await response.json();

    // TODO: Implement Azure VM ONNX model integration
    // Track in issue #XXX: Forward request to Azure VM endpoint and return actual inference results
    // Implementation notes:
    // - Add proper error handling and timeouts for external API calls
    // - Consider retry logic for transient failures
    // - Validate response from Azure VM before returning to client
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 11 to 20, remove the commented-out
placeholder fetch implementation and replace it with a single TODO comment
referencing the tracker/issue ID (e.g. TODO: implement Azure VM forwarding - see
ISSUE-1234) so the codebase has no commented code; move the detailed
implementation notes into the repo design doc or API documentation and, when you
later implement the forwarding, ensure the request includes proper error
handling, configurable timeouts, and retry/backoff logic.

Comment on lines 22 to 29
    // Simulated response for now
    const simulatedData = {
      prediction: "This is a simulated response from the ONNX model.",
      confidence: 0.95,
      input: body,
    };

    return NextResponse.json(simulatedData);

🧹 Nitpick | 🔵 Trivial

Document the response schema and consider sanitizing echoed input.

Two observations:

  1. Input echoing: The simulated response includes the entire request body in the input field. For a simulation, this is acceptable, but in production, ensure no sensitive information (PII, credentials) is echoed back in responses.

  2. Missing schema documentation: The response structure (prediction, confidence, input) is not documented. Consider adding JSDoc or TypeScript types to define the expected response format for maintainability.

Add type documentation:

+/**
+ * ONNX API Response
+ * @typedef {Object} OnnxResponse
+ * @property {string} prediction - Model prediction result
+ * @property {number} confidence - Confidence score (0-1)
+ * @property {any} input - Echoed input for reference (simulation only)
+ */
+
 export async function POST(request: Request) {
   try {
     const body = await request.json();
 
     // ... validation ...
 
     // Simulated response for now
+    /** @type {OnnxResponse} */
     const simulatedData = {
       prediction: "This is a simulated response from the ONNX model.",
       confidence: 0.95,
       input: body,
     };

Suggested change
    // Simulated response for now
    const simulatedData = {
      prediction: "This is a simulated response from the ONNX model.",
      confidence: 0.95,
      input: body,
    };

    return NextResponse.json(simulatedData);

/**
 * ONNX API Response
 * @typedef {Object} OnnxResponse
 * @property {string} prediction - Model prediction result
 * @property {number} confidence - Confidence score (0-1)
 * @property {any} input - Echoed input for reference (simulation only)
 */

export async function POST(request: Request) {
  try {
    const body = await request.json();

    // ... validation ...

    // Simulated response for now
    /** @type {OnnxResponse} */
    const simulatedData = {
      prediction: "This is a simulated response from the ONNX model.",
      confidence: 0.95,
      input: body,
    };

    return NextResponse.json(simulatedData);
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 22 to 29, the simulated response echoes
the entire request body and lacks a documented response schema; update the
handler to avoid returning raw user input by sanitizing or omitting sensitive
fields before including an echo, and add TypeScript types or JSDoc describing
the response shape (e.g., prediction: string, confidence: number, input?:
SanitizedInput) so the expected return type is explicit and maintainable; ensure
the sanitization step strips common PII/credentials and map the sanitized object
to the response type before returning NextResponse.json.

Comment on lines 89 to 102
    try {
      const response = await fetch('/api/onnx', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ query: currentInput }),
      });

      if (!response.ok) {
        throw new Error('Failed to get response from ONNX API');
      }

      const data = await response.json();

🛠️ Refactor suggestion | 🟠 Major

Add timeout and loading state for better UX.

The fetch call to /api/onnx has two issues:

  1. No timeout: If the backend hangs, the request will wait indefinitely with no user feedback.
  2. No loading indicator: Users have no visual feedback that the ONNX request is processing.

Add timeout using AbortController:

   try {
+    const controller = new AbortController();
+    const timeoutId = setTimeout(() => controller.abort(), 30000); // 30 second timeout
+
     const response = await fetch('/api/onnx', {
       method: 'POST',
       headers: {
         'Content-Type': 'application/json',
       },
       body: JSON.stringify({ query: currentInput }),
+      signal: controller.signal,
     });
 
+    clearTimeout(timeoutId);
+
     if (!response.ok) {
       throw new Error('Failed to get response from ONNX API');
     }

Add loading state (add near the top of the component):

const [isOnnxLoading, setIsOnnxLoading] = useState(false);

Then wrap the try-catch:

   try {
+    setIsOnnxLoading(true);
     const controller = new AbortController();
     // ... fetch logic ...
   } catch (error) {
     console.error('ONNX API call failed:', error);
     // ... error handling ...
+  } finally {
+    setIsOnnxLoading(false);
   }

And disable the button while loading (line 265):

   <Button
     type="button"
     variant={'ghost'}
     size={'icon'}
     className={cn(
       'absolute top-1/2 transform -translate-y-1/2 left-14'
     )}
     onClick={handleOnnxClick}
     title="Send to ONNX Model"
+    disabled={isOnnxLoading || !input}
   >
🤖 Prompt for AI Agents
In components/chat-panel.tsx around lines 89 to 102, the fetch to '/api/onnx'
needs a timeout and a loading state: add a new state near the component top
const [isOnnxLoading, setIsOnnxLoading] = useState(false); set isOnnxLoading
true before starting the request and false in a finally block, create an
AbortController and start a timer (e.g., setTimeout) that calls
controller.abort() after your timeout ms, pass controller.signal to fetch, catch
the abort error separately to surface a user-friendly message, and clear the
timer when the request finishes; finally, disable the ONNX trigger button at
line 265 when isOnnxLoading is true to prevent duplicate requests.
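The timeout portion of this suggestion generalizes to a small wrapper. A sketch, with the 30-second default mirroring the review's example value; note that it only rejects the caller's promise, so pairing it with AbortController, as the diff above shows, is still needed to cancel the underlying fetch.

```typescript
// Reject a promise if it does not settle within `ms` milliseconds.
// This does not cancel the underlying work; combine with AbortController
// when wrapping fetch so the request itself is aborted too.
function withTimeout<T>(promise: Promise<T>, ms = 30_000): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => {
      const err = new Error(`Timed out after ${ms}ms`);
      err.name = "AbortError"; // same name fetch uses on abort
      reject(err);
    }, ms);
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (error) => { clearTimeout(timer); reject(error); }
    );
  });
}
```

Usage would look like `const res = await withTimeout(fetch('/api/onnx', init), 30_000);`, with the catch block checking `error.name === 'AbortError'` to surface a timeout-specific message.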

Comment on lines 116 to 129
} catch (error) {
console.error('ONNX API call failed:', error);
setMessages(currentMessages => [
...currentMessages,
{
id: nanoid(),
component: (
<div className="p-4 my-2 border rounded-lg bg-destructive/20 text-destructive">
<p><strong>Error:</strong> Failed to get ONNX model response.</p>
</div>
)
}
]);
}

🧹 Nitpick | 🔵 Trivial

Differentiate error types for better user feedback.

The catch block handles all errors generically. Consider providing more specific error messages for different failure scenarios (network errors, timeouts, API errors) to help users understand what went wrong.

   } catch (error) {
-    console.error('ONNX API call failed:', error);
+    let errorMessage = 'Failed to get ONNX model response.';
+    
+    if (error instanceof Error) {
+      if (error.name === 'AbortError') {
+        errorMessage = 'Request timed out. Please try again.';
+      } else if (error.message.includes('fetch')) {
+        errorMessage = 'Network error. Please check your connection.';
+      }
+      console.error('ONNX API call failed:', error.message);
+    } else {
+      console.error('ONNX API call failed:', error);
+    }
+
     setMessages(currentMessages => [
       ...currentMessages,
       {
         id: nanoid(),
         component: (
           <div className="p-4 my-2 border rounded-lg bg-destructive/20 text-destructive">
-            <p><strong>Error:</strong> Failed to get ONNX model response.</p>
+            <p><strong>Error:</strong> {errorMessage}</p>
           </div>
         )
       }
     ]);
   }
🤖 Prompt for AI Agents
In components/chat-panel.tsx around lines 116 to 129, the catch block treats all
errors the same; update it to distinguish common failure types (network errors,
timeouts/AbortError, and API/HTTP errors) and present clearer messages to users.
Inspect the caught error: if error.name === 'AbortError' show a timeout-specific
message; if error is a TypeError or message includes 'Failed to fetch' show a
network error message; if the error has an HTTP response/status (e.g.
error.response?.status or a custom API error payload) show the status and any
returned message; otherwise fall back to the generic failure text. Ensure you
still log the full error for debugging (console.error) and push a message with
the existing message object/component structure but with the more specific
user-facing text.

Comment on lines 15 to 18
# Wait for the loading overlay to disappear before proceeding.
# The selector targets the div with a high z-index that was blocking clicks.
loading_overlay = page.locator('div[class*="z-[9999]"]')
expect(loading_overlay).to_be_hidden(timeout=60000)

🧹 Nitpick | 🔵 Trivial

Consider reducing the loading overlay timeout.

The 60-second timeout for the loading overlay is quite generous. Unless the application genuinely requires this long to initialize, consider reducing it to 15-20 seconds to fail faster if there's an actual problem.

Apply this diff if a shorter timeout is acceptable:

-            expect(loading_overlay).to_be_hidden(timeout=60000)
+            expect(loading_overlay).to_be_hidden(timeout=20000)
🤖 Prompt for AI Agents
In jules-scratch/verification/verify_onnx_button.py around lines 15 to 18, the
loading overlay wait uses a 60000ms timeout which is overly long; change the
expect(...).to_be_hidden timeout to a shorter value (e.g., 15000–20000 ms) so
tests fail faster on real hangs—replace 60000 with 15000 or 20000 accordingly
and run tests to confirm stability.

This commit introduces a toggle switch in the chat panel, allowing users to choose whether to send their input to the integrated ONNX model or the default chat model.

Key changes include:
- A new `Switch` component from Radix UI has been added to the `ChatPanel` to control model selection.
- The submission logic in `ChatPanel` has been updated to conditionally route the user's request to either the `/api/onnx` endpoint or the standard chat submission action based on the toggle's state.
- The dedicated "Send to ONNX" button has been removed to streamline the user interface, with the main submit button now handling both cases.
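The conditional routing this commit describes can be sketched as a pure decision function. Names here are illustrative, and `/api/chat` is an assumed placeholder: in the PR the default path goes through a server action rather than a fetch endpoint.

```typescript
// Hypothetical sketch of the submit-routing decision: the toggle state picks
// either the ONNX proxy or the standard chat submission. Only '/api/onnx'
// comes from the PR; the rest is an assumption for illustration.
interface SubmitRoute {
  endpoint: string;
  viaServerAction: boolean;
}

function resolveSubmitRoute(useOnnxModel: boolean): SubmitRoute {
  return useOnnxModel
    ? { endpoint: '/api/onnx', viaServerAction: false }
    : { endpoint: '/api/chat', viaServerAction: true };
}
```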
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
components/chat-panel.tsx (1)

78-130: Past review comments remain applicable.

The ONNX submission path has several concerns that were flagged in previous reviews:

  1. Attachment handling: selectedFile is neither sent nor cleared when ONNX mode is active
  2. No timeout: The fetch request lacks timeout/abort handling
  3. No loading state: Users receive no visual feedback during the request
  4. No response validation: The API response is rendered without structure validation
  5. Generic error handling: Error messages don't differentiate between network failures, timeouts, and API errors

Since these issues were already documented in detail in past review comments with proposed solutions, please ensure they are tracked and will be addressed.
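For item 4 (response validation), a minimal type-guard sketch follows. The only detail taken from the PR is that a valid reply carries a `prediction` field; the function itself is illustrative:

```typescript
// Minimal sketch of response validation for the ONNX proxy reply. A valid
// response is assumed to be an object with a 'prediction' field; anything
// else should be logged and surfaced as an error instead of rendered.
function isValidOnnxResponse(data: unknown): data is { prediction: unknown } {
  return typeof data === 'object' && data !== null && 'prediction' in data;
}
```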

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 384b2b3 and ebc1982.

⛔ Files ignored due to path filters (1)
  • dev.log is excluded by !**/*.log
📒 Files selected for processing (1)
  • components/chat-panel.tsx (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
components/chat-panel.tsx (3)
components/user-message.tsx (1)
  • UserMessage (15-53)
components/chat.tsx (1)
  • Chat (20-135)
app/actions.tsx (2)
  • aiState (461-608)
  • submit (35-347)

Comment on lines 218 to 225
<div className="flex items-center space-x-2 px-4 pb-2">
  <Switch
    id="onnx-model-toggle"
    checked={useOnnxModel}
    onCheckedChange={setUseOnnxModel}
  />
  <Label htmlFor="onnx-model-toggle">Use ONNX Model</Label>
</div>

🧹 Nitpick | 🔵 Trivial

ONNX toggle placement and UX.

The toggle is positioned above the form and is always visible (both mobile and desktop). Consider the following UX improvements:

  • Add a tooltip or help text explaining what "ONNX Model" means for users unfamiliar with the term
  • Consider disabling the file attachment button when ONNX mode is active (since attachments aren't currently supported in ONNX path)
  • Ensure the toggle state is visually clear and accessible
🤖 Prompt for AI Agents
In components/chat-panel.tsx around lines 218–225 the ONNX toggle is always
visible without explanation; add inline help and accessibility features and
disable attachments when ONNX mode is active. Update the Switch/Label block to
include a tooltip or small help text (or a question icon with aria-describedby)
that explains what "ONNX Model" means, ensure the Switch has an accessible
label, aria-checked, and a visible focus/checked style for clarity, and wire the
useOnnxModel state to disable (and visually indicate disabled) the file
attachment button elsewhere in the component so attachments cannot be used in
ONNX mode.

This commit introduces a new feature that allows users to select whether to use an external ONNX model for their chat queries. The selection is made via a toggle switch in the application's settings.

Key changes include:
- A new React context, `GeospatialModelProvider`, has been created to manage the state of the model selection across the application.
- A toggle switch has been added to the model selection settings UI, allowing users to enable or disable the "geospatial foundational model garden."
- The `ChatPanel` component has been updated to use the shared state from the context. It now conditionally routes user input to either the standard chat action or a new `/api/onnx` proxy endpoint.
- The `/api/onnx` endpoint has been implemented as a proxy to forward requests to a placeholder Azure VM endpoint.
- The unused `onnxruntime-node` dependency has been removed.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
app/api/onnx/route.ts (1)

5-18: Validate the incoming payload before forwarding.

We’re still forwarding arbitrary JSON to the upstream service without checking for query. Please reintroduce the body validation that was requested earlier—return 400 when query is missing/non-string so we don’t hammer the backend with bad requests.

     const body = await request.json();
 
+    if (!body || typeof body.query !== 'string' || !body.query.trim()) {
+      return NextResponse.json(
+        { error: 'Invalid request: query must be a non-empty string.' },
+        { status: 400 }
+      );
+    }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ebc1982 and 30ac4aa.

📒 Files selected for processing (5)
  • app/api/onnx/route.ts (1 hunks)
  • app/layout.tsx (2 hunks)
  • components/chat-panel.tsx (3 hunks)
  • components/settings/components/model-selection-form.tsx (3 hunks)
  • lib/geospatial-model-context.tsx (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
components/settings/components/model-selection-form.tsx (1)
lib/geospatial-model-context.tsx (1)
  • useGeospatialModel (30-36)
components/chat-panel.tsx (2)
lib/geospatial-model-context.tsx (1)
  • useGeospatialModel (30-36)
components/user-message.tsx (1)
  • UserMessage (15-53)
app/layout.tsx (1)
lib/geospatial-model-context.tsx (1)
  • GeospatialModelProvider (15-27)
🪛 Biome (2.1.2)
components/settings/components/model-selection-form.tsx

[error] 110-110: Unexpected token. Did you mean {'>'} or &gt;?

(parse)

This commit resolves a build failure caused by a JSX parsing error in `components/settings/components/model-selection-form.tsx`. The `>` character in the label `(->on)` was being misinterpreted by the parser.

The fix replaces `>` with its HTML entity `&gt;`, ensuring the code is valid JSX and allowing the application to build successfully.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 30ac4aa and 6d849f6.

📒 Files selected for processing (1)
  • components/settings/components/model-selection-form.tsx (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
components/settings/components/model-selection-form.tsx (1)
lib/geospatial-model-context.tsx (1)
  • useGeospatialModel (30-36)
🔇 Additional comments (1)
components/settings/components/model-selection-form.tsx (1)

16-18: LGTM!

The imports are correctly added to support the new geospatial model toggle feature.

This commit introduces a new feature that allows users to select whether to use an external ONNX model for their chat queries. The selection is made via a toggle switch in the application's settings.

Key changes include:
- A new React context, `GeospatialModelProvider`, has been created to manage the state of the model selection across the application.
- A toggle switch has been added to the model selection settings UI, allowing users to enable or disable the "geospatial foundational model garden."
- The `ChatPanel` component has been updated to use the shared state from the context. It now conditionally routes user input to either the standard chat action or a new `/api/onnx` proxy endpoint.
- The `/api/onnx` endpoint has been implemented as a proxy to forward requests to a placeholder Azure VM endpoint.
- The UI for the toggle switch has been simplified to be a clean switch on the model card, without any extra text.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
components/settings/components/model-selection-form.tsx (2)

60-60: Error boundary for context availability still missing.

The useGeospatialModel hook throws an error if invoked outside its provider. This concern was flagged in a previous review but remains unaddressed. While the provider may be wired in app/layout.tsx, an ErrorBoundary is needed to gracefully handle edge cases during SSR or hydration where the context might be unavailable.

Wrap the component (or its parent in app/layout.tsx) with a React ErrorBoundary to catch missing-context errors:

import { ErrorBoundary } from 'react-error-boundary';

function ErrorFallback({ error }: { error: Error }) {
  return (
    <div role="alert">
      <p>Something went wrong:</p>
      <pre>{error.message}</pre>
    </div>
  );
}

// In app/layout.tsx or parent component:
<ErrorBoundary FallbackComponent={ErrorFallback}>
  <ModelSelectionForm form={form} />
</ErrorBoundary>

100-106: Major UX issue: Toggle remains visible when other models are selected.

This issue was flagged in a previous review but remains unresolved. The condition model.id === "QCX-Terra" only checks whether we're rendering the QCX-Terra card, not whether QCX-Terra is the selected model. Users can toggle the geospatial model while a different model (Grok-3, Claude-4, Llama-4) is selected, creating ambiguous system state and unclear behavior.

Apply this diff to make the toggle conditional on QCX-Terra being selected:

-                            {model.id === "QCX-Terra" && (
+                            {model.id === "QCX-Terra" && field.value === "QCX-Terra" && (
                               <Switch
                                 id="geospatial-model-toggle"
                                 checked={isGeospatialModelEnabled}
                                 onCheckedChange={toggleGeospatialModel}
                               />
                             )}

Alternatively, if this toggle is meant to be a global feature flag independent of model selection, move it outside the RadioGroup into a separate form section and update the label to clarify its global nature.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6d849f6 and 0d86125.

📒 Files selected for processing (1)
  • components/settings/components/model-selection-form.tsx (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
components/settings/components/model-selection-form.tsx (1)
lib/geospatial-model-context.tsx (1)
  • useGeospatialModel (30-36)

This commit introduces a comprehensive feature to integrate an external ONNX model, `TerraMind`, into the application, complete with a user-facing toggle in the settings and a robust backend API.

Key changes include:
- A new API route at `/api/onnx/route.ts` that acts as a proxy to the ONNX model endpoint. This route now handles `FormData` to support both text and image inputs, uses an environment variable (`AZURE_ONNX_ENDPOINT`) for the endpoint URL, and includes a mock fallback for local development.
- A new React context, `GeospatialModelProvider`, has been created to manage the state of the model selection across the application.
- A toggle switch has been added to the model selection settings UI, allowing users to enable or disable the "geospatial foundational model garden."
- The `ChatPanel` component has been significantly enhanced to conditionally route user input (including images) to the ONNX API, validate API responses, and handle errors gracefully.
- An `ErrorBoundary` component has been added to the root layout to improve application stability.
- The `Switch` component in the settings has been made more accessible with a screen-reader-only label.
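The proxy's input rules described above (text and/or image; `query`, when present, must be a non-empty string) can be sketched as a pure check. The field semantics follow the commit description; the function and its result shape are illustrative, since the route in the PR performs these checks inline on the incoming `FormData`:

```typescript
// Illustrative sketch of the proxy's FormData validation rules: at least one
// of query/file must be present, and query, if provided, must be a string
// with non-whitespace content. Returns a 400-style result on failure.
type OnnxInputCheck = { ok: true } | { ok: false; status: 400; error: string };

function checkOnnxInput(query: unknown, hasFile: boolean): OnnxInputCheck {
  const hasQuery = typeof query === 'string' && query.trim().length > 0;
  if (!hasQuery && !hasFile) {
    return { ok: false, status: 400, error: 'Provide a query or a file.' };
  }
  if (query != null && typeof query !== 'string') {
    return { ok: false, status: 400, error: 'query must be a string.' };
  }
  return { ok: true };
}
```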
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (1)
components/chat-panel.tsx (1)

105-136: Consider adding timeout protection for the ONNX API call.

While the error handling is comprehensive, the fetch call lacks timeout protection. If the backend hangs, users will have no feedback until the browser's default timeout (which can be very long).

Apply this diff to add a timeout with AbortController:

       try {
+        const controller = new AbortController();
+        const timeoutId = setTimeout(() => controller.abort(), 30000); // 30 second timeout
+
         const response = await fetch('/api/onnx', {
           method: 'POST',
           body: onnxFormData,
+          signal: controller.signal,
         });
 
+        clearTimeout(timeoutId);
+
         if (!response.ok) {
           throw new Error(`ONNX API request failed with status ${response.status}`);
         }

Then update the error handling to catch AbortError:

       } catch (error) {
         console.error('ONNX API call failed:', error);
+        let errorMessage = 'Failed to get ONNX model response.';
+        if (error instanceof Error && error.name === 'AbortError') {
+          errorMessage = 'Request timed out. Please try again.';
+        } else if (error instanceof Error) {
+          errorMessage = error.message;
+        }
         setMessages(currentMessages => [
           ...currentMessages,
           {
             id: nanoid(),
             component: (
               <div className="p-4 my-2 border rounded-lg bg-destructive/20 text-destructive">
-                <p><strong>Error:</strong> {error instanceof Error ? error.message : 'Failed to get ONNX model response.'}</p>
+                <p><strong>Error:</strong> {errorMessage}</p>
               </div>
             )
           }
         ]);
       }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0d86125 and fe63b3f.

📒 Files selected for processing (5)
  • app/api/onnx/route.ts (1 hunks)
  • app/layout.tsx (2 hunks)
  • components/chat-panel.tsx (4 hunks)
  • components/error-boundary.tsx (1 hunks)
  • components/settings/components/model-selection-form.tsx (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
app/layout.tsx (1)
lib/geospatial-model-context.tsx (1)
  • GeospatialModelProvider (15-27)
components/chat-panel.tsx (2)
lib/geospatial-model-context.tsx (1)
  • useGeospatialModel (30-36)
components/user-message.tsx (1)
  • UserMessage (15-53)
components/settings/components/model-selection-form.tsx (1)
lib/geospatial-model-context.tsx (1)
  • useGeospatialModel (30-36)
🪛 Biome (2.1.2)
components/error-boundary.tsx

[error] 37-40: Provide an explicit type prop for the button element.

The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset

(lint/a11y/useButtonType)

🔇 Additional comments (12)
app/api/onnx/route.ts (3)

13-31: LGTM! Comprehensive input validation.

The FormData validation properly addresses the previous review feedback by:

  • Catching malformed FormData with a try-catch
  • Requiring at least one of query or file
  • Validating that query is a non-empty string if provided
  • Returning appropriate 400 status codes for invalid input

33-48: LGTM! Proper environment-based mocking.

The mock fallback implementation correctly addresses the critical issue from previous reviews by:

  • Checking for the AZURE_ONNX_ENDPOINT environment variable
  • Returning a structured mock response when the endpoint is not configured
  • Including helpful metadata (query, fileName, fileSize) in the mock response
  • Logging a warning for observability

This allows development and testing without a real Azure endpoint while maintaining the expected response structure.


69-90: LGTM! Robust error handling with appropriate status mapping.

The error handling implementation properly:

  • Extracts error status when available (from Azure response errors)
  • Defaults to 500 for unknown errors
  • Conditionally reports to monitoring service in production
  • Logs detailed errors in development
  • Returns structured error responses with status and details

This addresses previous review feedback about differentiating error types and integrating monitoring.

components/chat-panel.tsx (5)

6-6: LGTM! Clean geospatial context integration.

The geospatial model context is properly imported and used with the useGeospatialModel hook. The destructured isGeospatialModelEnabled state is used to conditionally branch between ONNX and default submission flows.

Also applies to: 27-27


74-77: LGTM! Proper input validation.

The trimmed input validation correctly addresses the previous review feedback by:

  • Trimming the input to remove whitespace
  • Validating that at least one of trimmedInput or selectedFile exists
  • Preventing empty submissions early

79-103: LGTM! Proper attachment handling and FormData construction.

The ONNX submission flow correctly addresses previous review feedback by:

  • Building user message content with both text and optional image preview
  • Constructing FormData with the trimmed input and file
  • Clearing both input and attachment before the fetch (preventing stale state)
  • Using the trimmed input for validation and submission

117-121: LGTM! Proper response validation.

The response validation correctly addresses previous review feedback by:

  • Checking that the response is an object
  • Validating the presence of the prediction field
  • Logging unexpected formats for debugging
  • Throwing a user-friendly error message

152-177: LGTM! Consistent default chat submission flow.

The default submission path maintains consistency with the ONNX flow by:

  • Building user message content with text and optional image preview
  • Constructing FormData from the form event
  • Appending the selected file when present
  • Clearing input and attachment before submission
  • Using the existing submit action for the default chat flow
components/settings/components/model-selection-form.tsx (2)

16-18: LGTM! Proper context integration and imports.

The geospatial model context is correctly integrated:

  • All imports are now used (Switch, Label, useGeospatialModel)
  • The useGeospatialModel hook properly destructures the context values
  • Label component is used for accessibility (see line 102-104)

This addresses the previous review comment about the unused Label import.

Also applies to: 60-60


102-104: LGTM! Proper accessibility with screen-reader label.

The Label component is correctly used with:

  • htmlFor matching the Switch id
  • sr-only class for screen-reader-only visibility
  • Clear descriptive text

This addresses the previous review comment about accessibility.

components/error-boundary.tsx (1)

1-27: LGTM! Standard error boundary implementation.

The ErrorBoundary component follows React best practices:

  • Proper TypeScript types for Props and State
  • getDerivedStateFromError for state updates (during render phase)
  • componentDidCatch for side effects (logging)
  • Clear comment about future monitoring service integration
app/layout.tsx (1)

14-15: LGTM! Proper provider nesting and error boundary placement.

The layout updates correctly integrate the new providers:

  • ErrorBoundary is the outermost wrapper, catching errors from all providers and UI components
  • GeospatialModelProvider is properly nested to make geospatial context available throughout the app
  • Provider order ensures all context dependencies are satisfied
  • The structure supports the ONNX feature flow introduced in chat-panel.tsx and settings

This addresses the architectural requirements for the geospatial model feature.

Also applies to: 59-82

Comment on lines +3 to +11
// Simulate sending an error to a monitoring service (e.g., Sentry, DataDog)
const reportErrorToMonitoringService = (error: Error, context: object) => {
  // In a real application, this would send the error to a service like Sentry.
  console.log("Reporting error to monitoring service:", {
    errorMessage: error.message,
    stack: error.stack,
    context,
  });
};

🧹 Nitpick | 🔵 Trivial

Strengthen type safety for the monitoring helper.

The reportErrorToMonitoringService function accepts context: object, which is too permissive and loses type information. Consider defining a specific interface for the context to ensure consistent structure and improve maintainability.

Apply this diff to add type safety:

+interface ErrorContext {
+  query?: string | null;
+  fileName?: string;
+  fileSize?: number;
+  [key: string]: any;
+}
+
-const reportErrorToMonitoringService = (error: Error, context: object) => {
+const reportErrorToMonitoringService = (error: Error, context: ErrorContext) => {
   // In a real application, this would send the error to a service like Sentry.
   console.log("Reporting error to monitoring service:", {
     errorMessage: error.message,
     stack: error.stack,
     context,
   });
 };
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 3 to 11, the monitoring helper currently
types context as a generic object; define a specific interface (e.g.,
MonitoringContext) with expected fields such as route?: string, requestId?:
string, userId?: string, and metadata?: Record<string, unknown> (adjust fields
to match real usage), update the function signature to
reportErrorToMonitoringService(error: Error, context: MonitoringContext),
replace loose object usages with the typed interface, and export the interface
if it will be reused elsewhere to enforce consistent structure and improve
maintainability.

Comment on lines +50 to +57
// 3. Forward Request and Handle Errors
try {
  // Pass the FormData directly to the fetch call.
  // The browser will automatically set the 'Content-Type' to 'multipart/form-data' with the correct boundary.
  const response = await fetch(azureVmEndpoint, {
    method: 'POST',
    body: formData,
  });

🧹 Nitpick | 🔵 Trivial

Correct the misleading comment about Content-Type.

The comment states "The browser will automatically set the 'Content-Type' to 'multipart/form-data' with the correct boundary," but this code runs server-side in a Next.js API route, not in a browser. The Node.js fetch (via undici) will handle the boundary, but the phrasing is incorrect.

Apply this diff to clarify:

   try {
-    // Pass the FormData directly to the fetch call.
-    // The browser will automatically set the 'Content-Type' to 'multipart/form-data' with the correct boundary.
+    // Pass the FormData directly to fetch.
+    // The fetch implementation will automatically set 'Content-Type: multipart/form-data' with the appropriate boundary.
     const response = await fetch(azureVmEndpoint, {
       method: 'POST',
       body: formData,
     });
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 50 to 57, the inline comment incorrectly
says "The browser will automatically set the 'Content-Type'..." even though this
runs server-side; update the comment to correctly state that the Node.js fetch
implementation (undici) will handle setting the multipart/form-data Content-Type
and boundary when passing FormData, and remove any browser-specific wording so
the comment accurately reflects server-side behavior.

Comment on lines +37 to +42
<button
  onClick={() => this.setState({ hasError: false })}
  className="mt-6 px-4 py-2 bg-primary text-primary-foreground rounded-md hover:bg-primary/90"
>
  Try again
</button>

⚠️ Potential issue | 🟡 Minor

Add explicit type="button" to prevent unintended form submission.

The button lacks an explicit type attribute. If this error boundary is rendered inside a form context, the button's default type="submit" could trigger form submission.

Apply this diff:

           <button
+            type="button"
             onClick={() => this.setState({ hasError: false })}
             className="mt-6 px-4 py-2 bg-primary text-primary-foreground rounded-md hover:bg-primary/90"
           >
             Try again
           </button>
📝 Committable suggestion

Suggested change
            <button
              type="button"
              onClick={() => this.setState({ hasError: false })}
              className="mt-6 px-4 py-2 bg-primary text-primary-foreground rounded-md hover:bg-primary/90"
            >
              Try again
            </button>
🧰 Tools
🪛 Biome (2.1.2)

[error] 37-40: Provide an explicit type prop for the button element.

The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset

(lint/a11y/useButtonType)

🤖 Prompt for AI Agents
In components/error-boundary.tsx around lines 37 to 42, the "Try again" button
is missing an explicit type which can default to "submit" inside a form; update
the button to include type="button" so clicking it does not trigger form
submission, leaving the existing onClick and classes unchanged.

Comment on lines +93 to +111
<div className="flex items-center justify-between">
<div className="flex items-center gap-2">
<h4 className="font-medium">{model.name}</h4>
<Badge variant={model.badgeVariant}>
{model.badge}
</Badge>
</div>
{model.id === "QCX-Terra" && (
<div className="flex items-center space-x-2">
<Label htmlFor="geospatial-model-toggle" className="sr-only">
Enable Geospatial Foundational Model
</Label>
<Switch
id="geospatial-model-toggle"
checked={isGeospatialModelEnabled}
onCheckedChange={toggleGeospatialModel}
/>
</div>
)}

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

Verify the toggle visibility behavior matches UX expectations.

The toggle is rendered when model.id === "QCX-Terra" but does not check if QCX-Terra is actually selected (field.value === "QCX-Terra"). This means:

  1. The toggle appears for the QCX-Terra card regardless of which model is selected
  2. Users can toggle the geospatial model while a different model is selected
  3. The PR objectives and chat-panel.tsx suggest this is a global feature flag that branches the submission flow

If this is intended to be a global override (affecting submission regardless of selected model), the current implementation is correct but could benefit from clearer UX:

  • Move the toggle outside the model cards to a dedicated section
  • Add explanatory text about its global effect
  • Consider renaming to "Enable ONNX Mode" or similar

If this should only be active when QCX-Terra is selected, add the selection check.

Run the following script to check how isGeospatialModelEnabled is used in the chat flow:


🏁 Script executed:

#!/bin/bash
# Description: Verify if geospatial toggle is meant to be global or model-specific

# Check usage of isGeospatialModelEnabled in chat-panel
rg -n -A5 -B5 'isGeospatialModelEnabled' components/chat-panel.tsx

# Check if selected model is checked alongside geospatial flag
rg -n -A3 'selectedModel.*isGeospatialModel|isGeospatialModel.*selectedModel' components/

Length of output: 1079


Align geospatial toggle scope with its behavior

  • If it’s model-specific, guard submissions in chat-panel.tsx with selectedModel === "QCX-Terra".
  • Otherwise, move the toggle to a global settings section and update its label (e.g. “Enable Geospatial (ONNX) mode”) to clarify its effect.
🤖 Prompt for AI Agents
components/settings/components/model-selection-form.tsx around lines 93 to 111:
the geospatial toggle is shown next to the QCX-Terra row but its scope/label
isn’t aligned with its runtime effect; either make it truly model-specific by
keeping UI here and updating chat-panel submission logic to only enable
geospatial behavior when selectedModel === "QCX-Terra" &&
isGeospatialModelEnabled, or treat it as a global setting by moving the toggle
out of the model row into a global settings page and renaming the label to
clarify effect (e.g. “Enable Geospatial (ONNX) mode”); pick one approach and
implement the corresponding code changes and label updates so UI location, label
and runtime guard are consistent.
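If the model-specific option is chosen, the guard reduces to a small pure predicate. A sketch, assuming the names used in the review (`selectedModel`, `isGeospatialModelEnabled`) — these are taken from the comment thread, not verified against the codebase:

```typescript
// Hypothetical guard: the ONNX/geospatial path is taken only when
// QCX-Terra is the selected model AND the toggle is on, so flipping
// the switch has no effect while another model is selected.
function shouldUseGeospatialModel(
  selectedModel: string,
  isGeospatialModelEnabled: boolean,
): boolean {
  return selectedModel === "QCX-Terra" && isGeospatialModelEnabled;
}

console.log(shouldUseGeospatialModel("QCX-Terra", true));   // true
console.log(shouldUseGeospatialModel("QCX-Terra", false));  // false
console.log(shouldUseGeospatialModel("other-model", true)); // false
```

Centralizing the check in one helper keeps the UI (toggle visibility) and the submission branch in chat-panel.tsx from drifting apart.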
