feat: Integrate ONNX model via API endpoint #311
Conversation
This commit introduces the necessary infrastructure to integrate an ONNX model hosted on an external server. Key changes include:
- A new API route at `/api/onnx/route.ts` that acts as a proxy to the ONNX model endpoint. It currently returns a simulated response for development purposes.
- A new button in the `ChatPanel` component that allows users to send input to the ONNX model.
- The `onnxruntime-node` package has been added as a dependency.
- The UI has been updated to display the model's response in the chat history.
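For orientation, the simulated-response behavior described above can be sketched as a small pure helper. This is an illustrative sketch only: the `buildSimulatedResponse` name and `SimulatedOnnxResponse` type are not from the PR, though the field names match the simulated payload shown in the review below.

```typescript
// Illustrative shape of the development-time simulated response.
interface SimulatedOnnxResponse {
  prediction: string;
  confidence: number;
  input: unknown;
}

// Hypothetical helper mirroring what the route returns during development.
function buildSimulatedResponse(body: unknown): SimulatedOnnxResponse {
  return {
    prediction: 'This is a simulated response from the ONNX model.',
    confidence: 0.95,
    input: body, // echoed for debugging; strip sensitive fields in production
  };
}
```

Factoring the payload into a helper like this also makes the simulated shape trivially unit-testable before the real Azure VM forwarding lands.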
Walkthrough: Adds a GeospatialModel context/provider and ErrorBoundary, wires a geospatial toggle into settings and chat, and branches chat submission to call a new `/api/onnx` proxy route when the toggle is enabled.
Sequence Diagram(s):

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Settings
    participant Context as GeospatialModelContext
    participant Chat as ChatPanel
    participant API as /api/onnx
    participant Azure as Azure VM
    User->>Settings: toggle geospatial model
    Settings->>Context: toggleGeospatialModel()
    Note right of Context #DDEBF7: isGeospatialModelEnabled updated
    User->>Chat: submit message
    Chat->>Context: read isGeospatialModelEnabled
    alt geospatial enabled
        Chat->>API: POST FormData (query/file)
        API->>Azure: forward FormData via POST (multipart/form-data) OR return mock if env absent
        Azure-->>API: JSON response (status)
        alt success
            API-->>Chat: JSON payload
            Chat-->>User: render ONNX response block
        else failure
            API-->>Chat: error/status
            Chat-->>User: render ONNX error block
        end
    else geospatial disabled
        Chat-->>User: proceed with existing submission flow (text/image path)
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25–30 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning) · ✅ Passed checks (2 passed)
PR Compliance Guide 🔍 Below is a summary of compliance checks for this PR:

Compliance status legend:
- 🟢 Fully Compliant
- 🟡 Partially Compliant
- 🔴 Not Compliant
- ⚪ Requires Further Human Verification
- 🏷️ Compliance label

PR Code Suggestions ✨ Explore these optional code suggestions:
- `onnxruntime-node` is a heavy native dependency that appears unused given the external API proxy approach; it adds installation overhead and potential deployment issues.
- Keeping it in `dependencies` risks accidental client/edge imports and incompatible runtime bundling.
- Remove it now, or move it to `optionalDependencies` and guard usage via dynamic import in Node-only server code when local inference is actually implemented.
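The guarded dynamic import suggested above could look roughly like this. This is a sketch, assuming the package may be absent at runtime; the `loadOnnxRuntime` helper name is hypothetical, and the specifier is held in a variable so bundlers and the TypeScript resolver don't statically pull in the native module.

```typescript
// Sketch: load onnxruntime-node only in Node server code, returning null
// when unavailable so callers fall back to the external API proxy.
async function loadOnnxRuntime(): Promise<unknown | null> {
  // Never attempt a native import outside a Node.js server context.
  const proc = (globalThis as { process?: { versions?: { node?: string } } }).process;
  if (!proc?.versions?.node) {
    return null;
  }
  // Variable specifier keeps the native binary out of client/edge bundles.
  const specifier = 'onnxruntime-node';
  try {
    return await import(specifier);
  } catch {
    // Module absent (e.g. optionalDependencies skipped): use the proxy path.
    return null;
  }
}
```

Callers can then branch on the null result instead of crashing Edge or serverless runtimes at import time.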
Additional notes (1)
- Maintainability | package.json:66-66

`onnxruntime-node` ships prebuilt native binaries and can be sensitive to ABI changes. Using a caret range (`^1.23.0`) risks pulling in breaking changes in minor updates for native modules. Pinning or using a tilde is safer for stability until you’ve tested newer builds.
Summary of changes
- Added `onnxruntime-node@^1.23.0` to the `dependencies` section of `package.json`.
package.json (outdated):

```json
"mapbox-gl": "^3.11.0",
"next": "^15.3.3",
"next-themes": "^0.3.0",
"onnxruntime-node": "^1.23.0",
```
Adding onnxruntime-node as a hard runtime dependency brings in a large native module that downloads platform-specific binaries during install, which can slow CI/CD and increase deployment sizes. Given your PR description indicates the ONNX model is accessed via an external API proxy (not local inference), this dependency appears unused and unnecessary right now. It can also create compatibility issues for Edge/serverless runtimes if accidentally imported.
If local inference isn’t being used in this PR, remove it. If you plan to use it later, prefer keeping it out of dependencies until that code lands, or move it to optionalDependencies and ensure it’s only dynamically imported in Node-only code paths.
Suggestion: Since the API route proxies to an external ONNX service and there’s no local inference in this diff, remove the dependency for now to avoid native binary install and deployment bloat.

Suggested change in package.json; remove this line from `dependencies`:

```json
"onnxruntime-node": "^1.23.0",
```
Actionable comments posted: 10
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (3)
- `dev.log` is excluded by `!**/*.log`
- `jules-scratch/verification/error.png` is excluded by `!**/*.png`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (4)
- `app/api/onnx/route.ts` (1 hunk)
- `components/chat-panel.tsx` (4 hunks)
- `jules-scratch/verification/verify_onnx_button.py` (1 hunk)
- `package.json` (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (1)
components/chat-panel.tsx (1)
- components/user-message.tsx (1): `UserMessage` (15-53)
🔇 Additional comments (7)
jules-scratch/verification/verify_onnx_button.py (4)
1-10: LGTM! The script setup is appropriate. Browser configuration with headless mode is suitable for automated verification, and the console listener provides useful debugging output.
27-34: LGTM! The button and response selectors correctly match the implementation in `components/chat-panel.tsx`. The 10-second timeout is appropriate for the simulated response.
36-39: Verify the screenshot path is correct for your execution context. The screenshot path `jules-scratch/verification/verification.png` is relative to the working directory. Ensure the script is executed from the repository root, or adjust the path accordingly.
41-46: LGTM! Error handling and cleanup are implemented correctly. The finally block ensures the browser is closed even if an exception occurs, and the error screenshot provides useful debugging context.
components/chat-panel.tsx (3)
10-10: LGTM! The `BrainCircuit` icon import from `lucide-react` is appropriate for representing the ONNX model action.
282-282: LGTM! The increased left padding (`pl-28`) appropriately accommodates both the attachment and ONNX buttons, preventing text overlap.
258-269: Verify ONNX button disabled state and handler guard
- Confirm `<Button>` in components/chat-panel.tsx lacks `disabled={!input}`; if so, disable when input is empty.
- Confirm `handleOnnxClick` checks for empty input before proceeding; if not, add the guard.
- Confirm mobile omission of the ONNX button is intentional; document or adjust as needed.
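A minimal sketch of such a guard, under the assumption that the handler receives the raw input string (the `shouldSubmitToOnnx` name is hypothetical; the PR's handler is `handleOnnxClick`):

```typescript
// Hypothetical guard: refuse to submit blank or whitespace-only input.
function shouldSubmitToOnnx(input: string): boolean {
  return input.trim().length > 0;
}
```

The same predicate can drive both the early return inside the handler and the button's `disabled` prop (e.g. `disabled={!shouldSubmitToOnnx(input)}`), keeping the two in sync.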
app/api/onnx/route.ts (outdated):

```ts
// In a real implementation, you would forward the request to the Azure VM.
// For now, we will just simulate a response.
// const response = await fetch(azureVmEndpoint, {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//   },
//   body: JSON.stringify(body),
// });
// const data = await response.json();
```
🛠️ Refactor suggestion | 🟠 Major
Move placeholder implementation to documentation or issue tracker.
Commented-out code is typically discouraged in production codebases. Consider:
- Remove the commented code and create a TODO comment referencing a tracking issue for the Azure VM integration.
- Move the implementation notes to API documentation or a design doc.
Additionally, when implementing the actual forwarding logic, ensure proper error handling, timeout, and retry logic are included.
Replace with a TODO reference:

```diff
- // The Azure VM API endpoint URL will be placed here.
- // For now, we will use a placeholder.
- const azureVmEndpoint = 'https://your-azure-vm-api-endpoint.com/terramind.onnx';
-
- // In a real implementation, you would forward the request to the Azure VM.
- // For now, we will just simulate a response.
- // const response = await fetch(azureVmEndpoint, {
- //   method: 'POST',
- //   headers: {
- //     'Content-Type': 'application/json',
- //   },
- //   body: JSON.stringify(body),
- // });
- // const data = await response.json();
+ // TODO: Implement Azure VM ONNX model integration
+ // Track in issue #XXX: Forward request to Azure VM endpoint and return actual inference results
+ // Implementation notes:
+ // - Add proper error handling and timeouts for external API calls
+ // - Consider retry logic for transient failures
+ // - Validate response from Azure VM before returning to client
```
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 11 to 20, remove the commented-out
placeholder fetch implementation and replace it with a single TODO comment
referencing the tracker/issue ID (e.g. TODO: implement Azure VM forwarding - see
ISSUE-1234) so the codebase has no commented code; move the detailed
implementation notes into the repo design doc or API documentation and, when you
later implement the forwarding, ensure the request includes proper error
handling, configurable timeouts, and retry/backoff logic.
app/api/onnx/route.ts (outdated):

```ts
// Simulated response for now
const simulatedData = {
  prediction: "This is a simulated response from the ONNX model.",
  confidence: 0.95,
  input: body,
};

return NextResponse.json(simulatedData);
```
🧹 Nitpick | 🔵 Trivial
Document the response schema and consider sanitizing echoed input.
Two observations:
1. Input echoing: The simulated response includes the entire request body in the `input` field. For a simulation, this is acceptable, but in production, ensure no sensitive information (PII, credentials) is echoed back in responses.
2. Missing schema documentation: The response structure (`prediction`, `confidence`, `input`) is not documented. Consider adding JSDoc or TypeScript types to define the expected response format for maintainability.
Add type documentation:

```diff
+/**
+ * ONNX API Response
+ * @typedef {Object} OnnxResponse
+ * @property {string} prediction - Model prediction result
+ * @property {number} confidence - Confidence score (0-1)
+ * @property {any} input - Echoed input for reference (simulation only)
+ */
+
 export async function POST(request: Request) {
   try {
     const body = await request.json();
     // ... validation ...
     // Simulated response for now
+    /** @type {OnnxResponse} */
     const simulatedData = {
       prediction: "This is a simulated response from the ONNX model.",
       confidence: 0.95,
       input: body,
     };
```
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 22 to 29, the simulated response echoes
the entire request body and lacks a documented response schema; update the
handler to avoid returning raw user input by sanitizing or omitting sensitive
fields before including an echo, and add TypeScript types or JSDoc describing
the response shape (e.g., prediction: string, confidence: number, input?:
SanitizedInput) so the expected return type is explicit and maintainable; ensure
the sanitization step strips common PII/credentials and map the sanitized object
to the response type before returning NextResponse.json.
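One way to implement the requested sanitization is a small deny-list filter over the echoed body. This is a sketch only: the `sanitizeEcho` name and the key list are assumptions and would need to match the real request shape.

```typescript
// Illustrative deny-list of keys that commonly carry secrets or PII.
const SENSITIVE_KEYS = new Set(['password', 'token', 'apiKey', 'authorization', 'email']);

// Hypothetical sanitizer for the echoed `input` field of the response.
function sanitizeEcho(body: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(body).filter(([key]) => !SENSITIVE_KEYS.has(key))
  );
}
```

Mapping the body through a filter like this before echoing keeps the debugging convenience while stripping the obvious sensitive fields; an allow-list (keep only `query`) would be stricter still.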
components/chat-panel.tsx (outdated):

```ts
try {
  const response = await fetch('/api/onnx', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ query: currentInput }),
  });

  if (!response.ok) {
    throw new Error('Failed to get response from ONNX API');
  }

  const data = await response.json();
```
🛠️ Refactor suggestion | 🟠 Major
Add timeout and loading state for better UX.
The fetch call to /api/onnx has two issues:
- No timeout: If the backend hangs, the request will wait indefinitely with no user feedback.
- No loading indicator: Users have no visual feedback that the ONNX request is processing.
Add timeout using AbortController:

```diff
 try {
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 30000); // 30 second timeout
+
   const response = await fetch('/api/onnx', {
     method: 'POST',
     headers: {
       'Content-Type': 'application/json',
     },
     body: JSON.stringify({ query: currentInput }),
+    signal: controller.signal,
   });
+  clearTimeout(timeoutId);
+
   if (!response.ok) {
     throw new Error('Failed to get response from ONNX API');
   }
```

Add loading state (near the top of the component):

```ts
const [isOnnxLoading, setIsOnnxLoading] = useState(false);
```

Then wrap the try-catch:

```diff
 try {
+  setIsOnnxLoading(true);
   const controller = new AbortController();
   // ... fetch logic ...
 } catch (error) {
   console.error('ONNX API call failed:', error);
   // ... error handling ...
+} finally {
+  setIsOnnxLoading(false);
 }
```

And disable the button while loading (line 265):

```diff
 <Button
   type="button"
   variant={'ghost'}
   size={'icon'}
   className={cn(
     'absolute top-1/2 transform -translate-y-1/2 left-14'
   )}
   onClick={handleOnnxClick}
   title="Send to ONNX Model"
+  disabled={isOnnxLoading || !input}
 >
```
>🤖 Prompt for AI Agents
In components/chat-panel.tsx around lines 89 to 102, the fetch to '/api/onnx'
needs a timeout and a loading state: add a new state near the component top
const [isOnnxLoading, setIsOnnxLoading] = useState(false); set isOnnxLoading
true before starting the request and false in a finally block, create an
AbortController and start a timer (e.g., setTimeout) that calls
controller.abort() after your timeout ms, pass controller.signal to fetch, catch
the abort error separately to surface a user-friendly message, and clear the
timer when the request finishes; finally, disable the ONNX trigger button at
line 265 when isOnnxLoading is true to prevent duplicate requests.
components/chat-panel.tsx (outdated):

```tsx
} catch (error) {
  console.error('ONNX API call failed:', error);
  setMessages(currentMessages => [
    ...currentMessages,
    {
      id: nanoid(),
      component: (
        <div className="p-4 my-2 border rounded-lg bg-destructive/20 text-destructive">
          <p><strong>Error:</strong> Failed to get ONNX model response.</p>
        </div>
      )
    }
  ]);
}
```
🧹 Nitpick | 🔵 Trivial
Differentiate error types for better user feedback.
The catch block handles all errors generically. Consider providing more specific error messages for different failure scenarios (network errors, timeouts, API errors) to help users understand what went wrong.
```diff
 } catch (error) {
-  console.error('ONNX API call failed:', error);
+  let errorMessage = 'Failed to get ONNX model response.';
+
+  if (error instanceof Error) {
+    if (error.name === 'AbortError') {
+      errorMessage = 'Request timed out. Please try again.';
+    } else if (error.message.includes('fetch')) {
+      errorMessage = 'Network error. Please check your connection.';
+    }
+    console.error('ONNX API call failed:', error.message);
+  } else {
+    console.error('ONNX API call failed:', error);
+  }
+
   setMessages(currentMessages => [
     ...currentMessages,
     {
       id: nanoid(),
       component: (
         <div className="p-4 my-2 border rounded-lg bg-destructive/20 text-destructive">
-          <p><strong>Error:</strong> Failed to get ONNX model response.</p>
+          <p><strong>Error:</strong> {errorMessage}</p>
         </div>
       )
     }
   ]);
 }
```
🤖 Prompt for AI Agents
In components/chat-panel.tsx around lines 116 to 129, the catch block treats all
errors the same; update it to distinguish common failure types (network errors,
timeouts/AbortError, and API/HTTP errors) and present clearer messages to users.
Inspect the caught error: if error.name === 'AbortError' show a timeout-specific
message; if error is a TypeError or message includes 'Failed to fetch' show a
network error message; if the error has an HTTP response/status (e.g.
error.response?.status or a custom API error payload) show the status and any
returned message; otherwise fall back to the generic failure text. Ensure you
still log the full error for debugging (console.error) and push a message with
the existing message object/component structure but with the more specific
user-facing text.
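The branching described above can be factored into a pure, unit-testable helper. The message strings mirror the suggested diff; the `classifyOnnxError` name is illustrative, not from the PR.

```typescript
// Map a caught error to a user-facing message: timeout, network, or generic.
function classifyOnnxError(error: unknown): string {
  if (error instanceof Error) {
    if (error.name === 'AbortError') {
      return 'Request timed out. Please try again.';
    }
    // fetch rejects with a TypeError on network failures in browsers.
    if (error instanceof TypeError || error.message.includes('fetch')) {
      return 'Network error. Please check your connection.';
    }
  }
  return 'Failed to get ONNX model response.';
}
```

The catch block then reduces to `const errorMessage = classifyOnnxError(error);`, and the classification rules can be tested without rendering the component.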
jules-scratch/verification/verify_onnx_button.py:

```python
# Wait for the loading overlay to disappear before proceeding.
# The selector targets the div with a high z-index that was blocking clicks.
loading_overlay = page.locator('div[class*="z-[9999]"]')
expect(loading_overlay).to_be_hidden(timeout=60000)
```
🧹 Nitpick | 🔵 Trivial
Consider reducing the loading overlay timeout.
The 60-second timeout for the loading overlay is quite generous. Unless the application genuinely requires this long to initialize, consider reducing it to 15-20 seconds to fail faster if there's an actual problem.
Apply this diff if a shorter timeout is acceptable:

```diff
- expect(loading_overlay).to_be_hidden(timeout=60000)
+ expect(loading_overlay).to_be_hidden(timeout=20000)
```
🤖 Prompt for AI Agents
In jules-scratch/verification/verify_onnx_button.py around lines 15 to 18, the
loading overlay wait uses a 60000ms timeout which is overly long; change the
expect(...).to_be_hidden timeout to a shorter value (e.g., 15000–20000 ms) so
tests fail faster on real hangs—replace 60000 with 15000 or 20000 accordingly
and run tests to confirm stability.
This commit introduces a toggle switch in the chat panel, allowing users to choose whether to send their input to the integrated ONNX model or the default chat model. Key changes include:
- A new `Switch` component from Radix UI has been added to the `ChatPanel` to control model selection.
- The submission logic in `ChatPanel` has been updated to conditionally route the user's request to either the `/api/onnx` endpoint or the standard chat submission action based on the toggle's state.
- The dedicated "Send to ONNX" button has been removed to streamline the user interface, with the main submit button now handling both cases.
Actionable comments posted: 1
♻️ Duplicate comments (1)
components/chat-panel.tsx (1)

78-130: Past review comments remain applicable. The ONNX submission path has several concerns that were flagged in previous reviews:
- Attachment handling: `selectedFile` is neither sent nor cleared when ONNX mode is active
- No timeout: the fetch request lacks timeout/abort handling
- No loading state: users receive no visual feedback during the request
- No response validation: the API response is rendered without structure validation
- Generic error handling: error messages don't differentiate between network failures, timeouts, and API errors

Since these issues were already documented in detail in past review comments with proposed solutions, please ensure they are tracked and will be addressed.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
- `dev.log` is excluded by `!**/*.log`

📒 Files selected for processing (1)
- `components/chat-panel.tsx` (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
components/chat-panel.tsx (3)
- components/user-message.tsx (1): `UserMessage` (15-53)
- components/chat.tsx (1): `Chat` (20-135)
- app/actions.tsx (2): `aiState` (461-608), `submit` (35-347)
components/chat-panel.tsx
Outdated
| <div className="flex items-center space-x-2 px-4 pb-2"> | ||
| <Switch | ||
| id="onnx-model-toggle" | ||
| checked={useOnnxModel} | ||
| onCheckedChange={setUseOnnxModel} | ||
| /> | ||
| <Label htmlFor="onnx-model-toggle">Use ONNX Model</Label> | ||
| </div> |
🧹 Nitpick | 🔵 Trivial
ONNX toggle placement and UX.
The toggle is positioned above the form and is always visible (both mobile and desktop). Consider the following UX improvements:
- Add a tooltip or help text explaining what "ONNX Model" means for users unfamiliar with the term
- Consider disabling the file attachment button when ONNX mode is active (since attachments aren't currently supported in ONNX path)
- Ensure the toggle state is visually clear and accessible
🤖 Prompt for AI Agents
In components/chat-panel.tsx around lines 218–225 the ONNX toggle is always
visible without explanation; add inline help and accessibility features and
disable attachments when ONNX mode is active. Update the Switch/Label block to
include a tooltip or small help text (or a question icon with aria-describedby)
that explains what "ONNX Model" means, ensure the Switch has an accessible
label, aria-checked, and a visible focus/checked style for clarity, and wire the
useOnnxModel state to disable (and visually indicate disabled) the file
attachment button elsewhere in the component so attachments cannot be used in
ONNX mode.
This commit introduces a new feature that allows users to select whether to use an external ONNX model for their chat queries. The selection is made via a toggle switch in the application's settings. Key changes include:
- A new React context, `GeospatialModelProvider`, has been created to manage the state of the model selection across the application.
- A toggle switch has been added to the model selection settings UI, allowing users to enable or disable the "geospatial foundational model garden."
- The `ChatPanel` component has been updated to use the shared state from the context. It now conditionally routes user input to either the standard chat action or a new `/api/onnx` proxy endpoint.
- The `/api/onnx` endpoint has been implemented as a proxy to forward requests to a placeholder Azure VM endpoint.
- The unused `onnxruntime-node` dependency has been removed.
Actionable comments posted: 2
♻️ Duplicate comments (1)
app/api/onnx/route.ts (1)

5-18: Validate the incoming payload before forwarding. We’re still forwarding arbitrary JSON to the upstream service without checking for `query`. Please reintroduce the body validation that was requested earlier: return 400 when `query` is missing/non-string so we don’t hammer the backend with bad requests.

```diff
 const body = await request.json();
+if (!body || typeof body.query !== 'string' || !body.query.trim()) {
+  return NextResponse.json(
+    { error: 'Invalid request: query must be a non-empty string.' },
+    { status: 400 }
+  );
+}
```
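The suggested validation can also be written as a reusable type guard. This is a sketch (the `isValidOnnxRequest` name is not from the PR); the guard narrows the body to `{ query: string }` so downstream code is type-safe and the 400 branch stays a one-liner.

```typescript
// Type guard: accept only objects carrying a non-empty string `query`.
function isValidOnnxRequest(body: unknown): body is { query: string } {
  if (typeof body !== 'object' || body === null) return false;
  const query = (body as { query?: unknown }).query;
  return typeof query === 'string' && query.trim().length > 0;
}
```

In the route handler this becomes `if (!isValidOnnxRequest(body)) { return NextResponse.json({ error: '...' }, { status: 400 }); }`, and the predicate itself can be unit-tested in isolation.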
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (5)
- `app/api/onnx/route.ts` (1 hunk)
- `app/layout.tsx` (2 hunks)
- `components/chat-panel.tsx` (3 hunks)
- `components/settings/components/model-selection-form.tsx` (3 hunks)
- `lib/geospatial-model-context.tsx` (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (3)
components/settings/components/model-selection-form.tsx (1)
- lib/geospatial-model-context.tsx (1): `useGeospatialModel` (30-36)

components/chat-panel.tsx (2)
- lib/geospatial-model-context.tsx (1): `useGeospatialModel` (30-36)
- components/user-message.tsx (1): `UserMessage` (15-53)

app/layout.tsx (1)
- lib/geospatial-model-context.tsx (1): `GeospatialModelProvider` (15-27)
🪛 Biome (2.1.2)
components/settings/components/model-selection-form.tsx
[error] 110-110: Unexpected token. Did you mean {'>'} or >?
(parse)
This commit resolves a build failure caused by a JSX parsing error in `components/settings/components/model-selection-form.tsx`. The `>` character in the label `(->on)` was being misinterpreted by the parser. The fix replaces `>` with its HTML entity `&gt;`, ensuring the code is valid JSX and allowing the application to build successfully.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
- `components/settings/components/model-selection-form.tsx` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
components/settings/components/model-selection-form.tsx (1)
- lib/geospatial-model-context.tsx (1): `useGeospatialModel` (30-36)
🔇 Additional comments (1)
components/settings/components/model-selection-form.tsx (1)

16-18: LGTM! The imports are correctly added to support the new geospatial model toggle feature.
This commit introduces a new feature that allows users to select whether to use an external ONNX model for their chat queries. The selection is made via a toggle switch in the application's settings. Key changes include:
- A new React context, `GeospatialModelProvider`, has been created to manage the state of the model selection across the application.
- A toggle switch has been added to the model selection settings UI, allowing users to enable or disable the "geospatial foundational model garden."
- The `ChatPanel` component has been updated to use the shared state from the context. It now conditionally routes user input to either the standard chat action or a new `/api/onnx` proxy endpoint.
- The `/api/onnx` endpoint has been implemented as a proxy to forward requests to a placeholder Azure VM endpoint.
- The UI for the toggle switch has been simplified to be a clean switch on the model card, without any extra text.
Actionable comments posted: 2
♻️ Duplicate comments (2)
components/settings/components/model-selection-form.tsx (2)

60-60: Error boundary for context availability still missing. The `useGeospatialModel` hook throws an error if invoked outside its provider. This concern was flagged in a previous review but remains unaddressed. While the provider may be wired in `app/layout.tsx`, an ErrorBoundary is needed to gracefully handle edge cases during SSR or hydration where the context might be unavailable.

Wrap the component (or its parent in `app/layout.tsx`) with a React ErrorBoundary to catch missing-context errors:

```tsx
import { ErrorBoundary } from 'react-error-boundary';

function ErrorFallback({ error }: { error: Error }) {
  return (
    <div role="alert">
      <p>Something went wrong:</p>
      <pre>{error.message}</pre>
    </div>
  );
}

// In app/layout.tsx or parent component:
<ErrorBoundary FallbackComponent={ErrorFallback}>
  <ModelSelectionForm form={form} />
</ErrorBoundary>
```
100-106: Major UX issue: Toggle remains visible when other models are selected.

This issue was flagged in a previous review but remains unresolved. The condition `model.id === "QCX-Terra"` only checks whether we're rendering the QCX-Terra card, not whether QCX-Terra is the selected model. Users can toggle the geospatial model while a different model (Grok-3, Claude-4, Llama-4) is selected, creating ambiguous system state and unclear behavior.

Apply this diff to make the toggle conditional on QCX-Terra being selected:

```diff
- {model.id === "QCX-Terra" && (
+ {model.id === "QCX-Terra" && field.value === "QCX-Terra" && (
    <Switch
      id="geospatial-model-toggle"
      checked={isGeospatialModelEnabled}
      onCheckedChange={toggleGeospatialModel}
    />
  )}
```

Alternatively, if this toggle is meant to be a global feature flag independent of model selection, move it outside the RadioGroup into a separate form section and update the label to clarify its global nature.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
components/settings/components/model-selection-form.tsx (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
components/settings/components/model-selection-form.tsx (1)
lib/geospatial-model-context.tsx (1)
useGeospatialModel (30-36)
This commit introduces a comprehensive feature to integrate an external ONNX model, `TerraMind`, into the application, complete with a user-facing toggle in the settings and a robust backend API.

Key changes include:

- A new API route at `/api/onnx/route.ts` that acts as a proxy to the ONNX model endpoint. This route now handles `FormData` to support both text and image inputs, uses an environment variable (`AZURE_ONNX_ENDPOINT`) for the endpoint URL, and includes a mock fallback for local development.
- A new React context, `GeospatialModelProvider`, has been created to manage the state of the model selection across the application.
- A toggle switch has been added to the model selection settings UI, allowing users to enable or disable the "geospatial foundational model garden."
- The `ChatPanel` component has been significantly enhanced to conditionally route user input (including images) to the ONNX API, validate API responses, and handle errors gracefully.
- An `ErrorBoundary` component has been added to the root layout to improve application stability.
- The `Switch` component in the settings has been made more accessible with a screen-reader-only label.
Actionable comments posted: 4
♻️ Duplicate comments (1)
components/chat-panel.tsx (1)
105-136: Consider adding timeout protection for the ONNX API call.

While the error handling is comprehensive, the fetch call lacks timeout protection. If the backend hangs, users will have no feedback until the browser's default timeout (which can be very long).
Apply this diff to add a timeout with AbortController:
```diff
  try {
+   const controller = new AbortController();
+   const timeoutId = setTimeout(() => controller.abort(), 30000); // 30 second timeout
+
    const response = await fetch('/api/onnx', {
      method: 'POST',
      body: onnxFormData,
+     signal: controller.signal,
    });
+
+   clearTimeout(timeoutId);
+
    if (!response.ok) {
      throw new Error(`ONNX API request failed with status ${response.status}`);
    }
```

Then update the error handling to catch `AbortError`:

```diff
  } catch (error) {
    console.error('ONNX API call failed:', error);
+   let errorMessage = 'Failed to get ONNX model response.';
+   if (error instanceof Error && error.name === 'AbortError') {
+     errorMessage = 'Request timed out. Please try again.';
+   } else if (error instanceof Error) {
+     errorMessage = error.message;
+   }
    setMessages(currentMessages => [
      ...currentMessages,
      {
        id: nanoid(),
        component: (
          <div className="p-4 my-2 border rounded-lg bg-destructive/20 text-destructive">
-           <p><strong>Error:</strong> {error instanceof Error ? error.message : 'Failed to get ONNX model response.'}</p>
+           <p><strong>Error:</strong> {errorMessage}</p>
          </div>
        )
      }
    ]);
  }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (5)
- app/api/onnx/route.ts (1 hunks)
- app/layout.tsx (2 hunks)
- components/chat-panel.tsx (4 hunks)
- components/error-boundary.tsx (1 hunks)
- components/settings/components/model-selection-form.tsx (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
app/layout.tsx (1)
lib/geospatial-model-context.tsx (1)
GeospatialModelProvider (15-27)
components/chat-panel.tsx (2)
lib/geospatial-model-context.tsx (1)
useGeospatialModel (30-36)
components/user-message.tsx (1)
UserMessage (15-53)
components/settings/components/model-selection-form.tsx (1)
lib/geospatial-model-context.tsx (1)
useGeospatialModel (30-36)
🪛 Biome (2.1.2)
components/error-boundary.tsx
[error] 37-40: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
🔇 Additional comments (12)
app/api/onnx/route.ts (3)
13-31: LGTM! Comprehensive input validation.

The FormData validation properly addresses the previous review feedback by:

- Catching malformed FormData with a try-catch
- Requiring at least one of `query` or `file`
- Validating that `query` is a non-empty string if provided
- Returning appropriate 400 status codes for invalid input
33-48: LGTM! Proper environment-based mocking.

The mock fallback implementation correctly addresses the critical issue from previous reviews by:

- Checking for the `AZURE_ONNX_ENDPOINT` environment variable
- Returning a structured mock response when the endpoint is not configured
- Including helpful metadata (query, fileName, fileSize) in the mock response
- Logging a warning for observability
This allows development and testing without a real Azure endpoint while maintaining the expected response structure.
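The env-gated mock described above can be sketched as follows. The function name and the exact response fields (`prediction`, `mock`, `metadata`) are assumptions for illustration; the review only states that the mock echoes the query and file info.

```typescript
// Illustrative env-gated mock fallback, per the review's description.
interface MockOnnxResponse {
  prediction: string;
  mock: true;
  metadata: { query: string | null; fileName?: string; fileSize?: number };
}

function mockOnnxResponseIfUnconfigured(
  env: Record<string, string | undefined>,
  query: string | null,
  file: { name: string; size: number } | null
): MockOnnxResponse | null {
  // Only mock when the real endpoint is not configured.
  if (env.AZURE_ONNX_ENDPOINT) return null;
  console.warn("AZURE_ONNX_ENDPOINT not set; returning mock ONNX response.");
  return {
    prediction: "[mock] simulated TerraMind output for development",
    mock: true,
    metadata: {
      query,
      ...(file ? { fileName: file.name, fileSize: file.size } : {}),
    },
  };
}
```

Passing `env` as a parameter (rather than reading `process.env` directly) keeps the branch testable.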
69-90: LGTM! Robust error handling with appropriate status mapping.

The error handling implementation properly:
- Extracts error status when available (from Azure response errors)
- Defaults to 500 for unknown errors
- Conditionally reports to monitoring service in production
- Logs detailed errors in development
- Returns structured error responses with status and details
This addresses previous review feedback about differentiating error types and integrating monitoring.
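The status mapping described in these bullets amounts to a small helper like the one below. The helper name is illustrative; the review only specifies the behavior (reuse a status attached to the thrown error, otherwise fall back to 500).

```typescript
// Minimal sketch of the status mapping: prefer a valid HTTP error
// status carried on the thrown value, default to 500 otherwise.
function resolveErrorStatus(error: unknown): number {
  if (typeof error === "object" && error !== null && "status" in error) {
    const status = (error as { status: unknown }).status;
    if (typeof status === "number" && status >= 400 && status <= 599) {
      return status;
    }
  }
  return 500; // default for unknown errors
}
```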
components/chat-panel.tsx (5)
6-6: LGTM! Clean geospatial context integration.

The geospatial model context is properly imported and used with the `useGeospatialModel` hook. The destructured `isGeospatialModelEnabled` state is used to conditionally branch between ONNX and default submission flows.

Also applies to: 27-27
74-77: LGTM! Proper input validation.

The trimmed input validation correctly addresses the previous review feedback by:
- Trimming the input to remove whitespace
- Validating that at least one of `trimmedInput` or `selectedFile` exists
- Preventing empty submissions early
79-103: LGTM! Proper attachment handling and FormData construction.

The ONNX submission flow correctly addresses previous review feedback by:
- Building user message content with both text and optional image preview
- Constructing FormData with the trimmed input and file
- Clearing both input and attachment before the fetch (preventing stale state)
- Using the trimmed input for validation and submission
117-121: LGTM! Proper response validation.

The response validation correctly addresses previous review feedback by:
- Checking that the response is an object
- Validating the presence of the `prediction` field
- Logging unexpected formats for debugging
- Throwing a user-friendly error message
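The shape check summarized above can be sketched as a pure parser. The only field assumed from the review text is `prediction`; the function name and the strictness of the string check are illustrative.

```typescript
// Sketch of the client-side response validation described in the bullets.
function parseOnnxResponse(payload: unknown): { prediction: string } {
  if (
    typeof payload !== "object" ||
    payload === null ||
    !("prediction" in payload) ||
    typeof (payload as { prediction: unknown }).prediction !== "string"
  ) {
    // Log the unexpected shape for debugging, then surface a friendly error.
    console.error("Unexpected ONNX response format:", payload);
    throw new Error("Failed to get ONNX model response.");
  }
  return payload as { prediction: string };
}
```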
152-177: LGTM! Consistent default chat submission flow.

The default submission path maintains consistency with the ONNX flow by:
- Building user message content with text and optional image preview
- Constructing FormData from the form event
- Appending the selected file when present
- Clearing input and attachment before submission
- Using the existing `submit` action for the default chat flow

components/settings/components/model-selection-form.tsx (2)
16-18: LGTM! Proper context integration and imports.

The geospatial model context is correctly integrated:
- All imports are now used (Switch, Label, useGeospatialModel)
- The `useGeospatialModel` hook properly destructures the context values
- Label component is used for accessibility (see line 102-104)
This addresses the previous review comment about the unused Label import.
Also applies to: 60-60
102-104: LGTM! Proper accessibility with screen-reader label.

The Label component is correctly used with:

- `htmlFor` matching the Switch `id`
- `sr-only` class for screen-reader-only visibility
- Clear descriptive text
This addresses the previous review comment about accessibility.
components/error-boundary.tsx (1)
1-27: LGTM! Standard error boundary implementation.

The ErrorBoundary component follows React best practices:
- Proper TypeScript types for Props and State
- `getDerivedStateFromError` for state updates (during render phase)
- `componentDidCatch` for side effects (logging)
- Clear comment about future monitoring service integration
app/layout.tsx (1)
14-15: LGTM! Proper provider nesting and error boundary placement.

The layout updates correctly integrate the new providers:

- `ErrorBoundary` is the outermost wrapper, catching errors from all providers and UI components
- `GeospatialModelProvider` is properly nested to make geospatial context available throughout the app
- The structure supports the ONNX feature flow introduced in chat-panel.tsx and settings
This addresses the architectural requirements for the geospatial model feature.
Also applies to: 59-82
```ts
// Simulate sending an error to a monitoring service (e.g., Sentry, DataDog)
const reportErrorToMonitoringService = (error: Error, context: object) => {
  // In a real application, this would send the error to a service like Sentry.
  console.log("Reporting error to monitoring service:", {
    errorMessage: error.message,
    stack: error.stack,
    context,
  });
};
```
🧹 Nitpick | 🔵 Trivial
Strengthen type safety for the monitoring helper.
The `reportErrorToMonitoringService` function accepts `context: object`, which is too permissive and loses type information. Consider defining a specific interface for the context to ensure consistent structure and improve maintainability.
Apply this diff to add type safety:
+interface ErrorContext {
+ query?: string | null;
+ fileName?: string;
+ fileSize?: number;
+ [key: string]: any;
+}
+
-const reportErrorToMonitoringService = (error: Error, context: object) => {
+const reportErrorToMonitoringService = (error: Error, context: ErrorContext) => {
// In a real application, this would send the error to a service like Sentry.
console.log("Reporting error to monitoring service:", {
errorMessage: error.message,
stack: error.stack,
context,
});
};

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Define a structured context type for error reporting
interface ErrorContext {
  query?: string | null;
  fileName?: string;
  fileSize?: number;
  [key: string]: any;
}

// Simulate sending an error to a monitoring service (e.g., Sentry, DataDog)
const reportErrorToMonitoringService = (error: Error, context: ErrorContext) => {
  // In a real application, this would send the error to a service like Sentry.
  console.log("Reporting error to monitoring service:", {
    errorMessage: error.message,
    stack: error.stack,
    context,
  });
};
```
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 3 to 11, the monitoring helper currently
types context as a generic object; define a specific interface (e.g.,
MonitoringContext) with expected fields such as route?: string, requestId?:
string, userId?: string, and metadata?: Record<string, unknown> (adjust fields
to match real usage), update the function signature to
reportErrorToMonitoringService(error: Error, context: MonitoringContext),
replace loose object usages with the typed interface, and export the interface
if it will be reused elsewhere to enforce consistent structure and improve
maintainability.
```ts
// 3. Forward Request and Handle Errors
try {
  // Pass the FormData directly to the fetch call.
  // The browser will automatically set the 'Content-Type' to 'multipart/form-data' with the correct boundary.
  const response = await fetch(azureVmEndpoint, {
    method: 'POST',
    body: formData,
  });
```
🧹 Nitpick | 🔵 Trivial
Correct the misleading comment about Content-Type.
The comment states "The browser will automatically set the 'Content-Type' to 'multipart/form-data' with the correct boundary," but this code runs server-side in a Next.js API route, not in a browser. The Node.js fetch (via undici) will handle the boundary, but the phrasing is incorrect.
Apply this diff to clarify:
try {
- // Pass the FormData directly to the fetch call.
- // The browser will automatically set the 'Content-Type' to 'multipart/form-data' with the correct boundary.
+ // Pass the FormData directly to fetch.
+ // The fetch implementation will automatically set 'Content-Type: multipart/form-data' with the appropriate boundary.
const response = await fetch(azureVmEndpoint, {
method: 'POST',
body: formData,
});

📝 Committable suggestion
```ts
// 3. Forward Request and Handle Errors
try {
  // Pass the FormData directly to fetch.
  // The fetch implementation will automatically set 'Content-Type: multipart/form-data' with the appropriate boundary.
  const response = await fetch(azureVmEndpoint, {
    method: 'POST',
    body: formData,
  });
```
🤖 Prompt for AI Agents
In app/api/onnx/route.ts around lines 50 to 57, the inline comment incorrectly
says "The browser will automatically set the 'Content-Type'..." even though this
runs server-side; update the comment to correctly state that the Node.js fetch
implementation (undici) will handle setting the multipart/form-data Content-Type
and boundary when passing FormData, and remove any browser-specific wording so
the comment accurately reflects server-side behavior.
```tsx
<button
  onClick={() => this.setState({ hasError: false })}
  className="mt-6 px-4 py-2 bg-primary text-primary-foreground rounded-md hover:bg-primary/90"
>
  Try again
</button>
```
Add explicit type="button" to prevent unintended form submission.
The button lacks an explicit type attribute. If this error boundary is rendered inside a form context, the button's default type="submit" could trigger form submission.
Apply this diff:
<button
+ type="button"
onClick={() => this.setState({ hasError: false })}
className="mt-6 px-4 py-2 bg-primary text-primary-foreground rounded-md hover:bg-primary/90"
>
Try again
</button>

📝 Committable suggestion
```tsx
<button
  type="button"
  onClick={() => this.setState({ hasError: false })}
  className="mt-6 px-4 py-2 bg-primary text-primary-foreground rounded-md hover:bg-primary/90"
>
  Try again
</button>
```
🧰 Tools
🪛 Biome (2.1.2)
[error] 37-40: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
🤖 Prompt for AI Agents
In components/error-boundary.tsx around lines 37 to 42, the "Try again" button
is missing an explicit type which can default to "submit" inside a form; update
the button to include type="button" so clicking it does not trigger form
submission, leaving the existing onClick and classes unchanged.
```tsx
<div className="flex items-center justify-between">
  <div className="flex items-center gap-2">
    <h4 className="font-medium">{model.name}</h4>
    <Badge variant={model.badgeVariant}>
      {model.badge}
    </Badge>
  </div>
  {model.id === "QCX-Terra" && (
    <div className="flex items-center space-x-2">
      <Label htmlFor="geospatial-model-toggle" className="sr-only">
        Enable Geospatial Foundational Model
      </Label>
      <Switch
        id="geospatial-model-toggle"
        checked={isGeospatialModelEnabled}
        onCheckedChange={toggleGeospatialModel}
      />
    </div>
  )}
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
Verify the toggle visibility behavior matches UX expectations.
The toggle is rendered when model.id === "QCX-Terra" but does not check if QCX-Terra is actually selected (field.value === "QCX-Terra"). This means:
- The toggle appears for the QCX-Terra card regardless of which model is selected
- Users can toggle the geospatial model while a different model is selected
- The PR objectives and chat-panel.tsx suggest this is a global feature flag that branches the submission flow
If this is intended to be a global override (affecting submission regardless of selected model), the current implementation is correct but could benefit from clearer UX:
- Move the toggle outside the model cards to a dedicated section
- Add explanatory text about its global effect
- Consider renaming to "Enable ONNX Mode" or similar
If this should only be active when QCX-Terra is selected, add the selection check.
Run the following script to check how isGeospatialModelEnabled is used in the chat flow:
🏁 Script executed:
#!/bin/bash
# Description: Verify if geospatial toggle is meant to be global or model-specific
# Check usage of isGeospatialModelEnabled in chat-panel
rg -n -A5 -B5 'isGeospatialModelEnabled' components/chat-panel.tsx
# Check if selected model is checked alongside geospatial flag
rg -n -A3 'selectedModel.*isGeospatialModel|isGeospatialModel.*selectedModel' components/

Length of output: 1079
Align geospatial toggle scope with its behavior
- If it’s model-specific, guard submissions in `chat-panel.tsx` with `selectedModel === "QCX-Terra"`.
- Otherwise, move the toggle to a global settings section and update its label (e.g. “Enable Geospatial (ONNX) mode”) to clarify its effect.
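The first option amounts to a one-line guard; a sketch, with an illustrative function name and inputs (not existing code):

```typescript
// Route to the ONNX proxy only when QCX-Terra is both selected and toggled on.
function shouldUseOnnx(selectedModel: string, isGeospatialModelEnabled: boolean): boolean {
  return selectedModel === "QCX-Terra" && isGeospatialModelEnabled;
}
```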
🤖 Prompt for AI Agents
components/settings/components/model-selection-form.tsx around lines 93 to 111:
the geospatial toggle is shown next to the QCX-Terra row but its scope/label
isn’t aligned with its runtime effect; either make it truly model-specific by
keeping UI here and updating chat-panel submission logic to only enable
geospatial behavior when selectedModel === "QCX-Terra" &&
isGeospatialModelEnabled, or treat it as a global setting by moving the toggle
out of the model row into a global settings page and renaming the label to
clarify effect (e.g. “Enable Geospatial (ONNX) mode”); pick one approach and
implement the corresponding code changes and label updates so UI location, label
and runtime guard are consistent.
User description
This commit introduces the necessary infrastructure to integrate an ONNX model hosted on an external server.
Key changes include:
- A new API route at `/api/onnx/route.ts` that acts as a proxy to the ONNX model endpoint. It currently returns a simulated response for development purposes.
- A new button in the `ChatPanel` component that allows users to send input to the ONNX model.
- The `onnxruntime-node` package has been added as a dependency.

PR Type
Enhancement
Description
Add ONNX model integration via API proxy endpoint
Implement new chat UI button for ONNX model interaction
Add automated verification testing for ONNX functionality
Include onnxruntime-node dependency for model support
Diagram Walkthrough
File Walkthrough
| File | Description |
| --- | --- |
| route.ts | New ONNX API proxy endpoint (app/api/onnx/route.ts) |
| chat-panel.tsx | ONNX integration in chat interface (components/chat-panel.tsx); adds a `handleOnnxClick` function for API calls |
| verify_onnx_button.py | Automated ONNX button verification test (jules-scratch/verification/verify_onnx_button.py) |
| package.json | Add ONNX runtime dependency (package.json) |
onnxruntime-nodeversion 1.23.0 dependencySummary by CodeRabbit