
feat: add Jetson Orin Nano support#405

Open
realkim93 wants to merge 17 commits into NVIDIA:main from realkim93:feat/jetson-orin-nano-support

Conversation


@realkim93 realkim93 commented Mar 19, 2026

Closes #404
Also fixes #539, relates to #65, #300, #511, #1076

Summary

Adds end-to-end Jetson Orin Nano support: from GPU detection through to a working OpenClaw agent with local Ollama inference.

Verified working

$ openclaw agent --agent main --message 'Say hello in Korean'
안녕하세요! (Hello in Korean)

Inference path: OpenClaw (sandbox) → host.openshell.internal:11434 → Ollama → nemotron-3-nano:4b

Test plan

# tests 70
# pass 68
# fail 0
# skipped 2  (discrete NVIDIA + Apple — expected on Jetson)
  • GPU detection: "NVIDIA Jetson Orin Nano ... Super", 7619 MB
  • Gateway: iptables-legacy patched, k3s starts without panic
  • Sandbox: Phase Ready
  • Inference: openclaw agent responds via nemotron-3-nano:4b
  • Re-run onboard: no port conflict errors
  • Cloud inference (nvidia-nim): unaffected (still routes via inference.local)
  • Onboarding lifecycle: healthy gateway preserved on rerun, stale gateway cleaned up, foreign gateway left for port-check surfacing
  • Local provider routing: ollama-local and vllm-local route through host.openshell.internal
  • Node.js version check in setup-jetson.sh (>=22.16.0)

Tested on: Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra)
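
The Node.js version check called out in the test plan reduces to a dotted-version comparison against 22.16.0. A minimal sketch of that logic in JavaScript (setup-jetson.sh does this in shell; the helper names `compareVersions` and `meetsMinimumNode` are invented here for illustration):

```javascript
// Hypothetical sketch of the ">= 22.16.0" gate, not the script's actual code.
function compareVersions(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return Math.sign(diff); // -1, 0, or 1
  }
  return 0;
}

function meetsMinimumNode(version, minimum = "22.16.0") {
  // Accept both "v22.16.0" (node --version output) and bare "22.16.0".
  return compareVersions(version.replace(/^v/, ""), minimum) >= 0;
}

console.log(meetsMinimumNode("v22.16.0")); // true
console.log(meetsMinimumNode("v20.11.1")); // false
```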

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Contributor

coderabbitai bot commented Mar 19, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Detect Jetson unified-memory GPUs, prefer a smaller Jetson-specific Ollama model, patch the OpenShell gateway image for iptables-legacy on Jetson, add a host setup script and CLI command for Jetson preparation, and update local Ollama routing and tests for Jetson behavior.

Changes

  • GPU detection & model selection (bin/lib/nim.js, bin/lib/local-inference.js)
    detectGpu() now recognizes Jetson unified-memory devices (jetson: true, nimCapable: false, memory from system RAM). Added DEFAULT_OLLAMA_MODEL_JETSON = "nemotron-3-nano:4b". getOllamaModelOptions(runCapture, gpu) and getDefaultOllamaModel(runCapture, gpu) signatures updated to accept gpu and prefer the Jetson-specific model and fallbacks when gpu.jetson.
  • Onboard workflow & gateway patching (bin/lib/onboard.js)
    promptOllamaModel() forwards gpu. preflight() frees gateway resources, treats Jetson as unified-memory (no NIM), and performs gateway cleanup. Added getGatewayImageTag() and patchGatewayImageForJetson() to rebuild and tag the gateway image using iptables-legacy when needed; startGateway(gpu) invokes patching on Jetson. Writes the sandbox's ~/.nemoclaw/config.json for openclaw sandbox.
  • CLI & host setup script (bin/nemoclaw.js, scripts/setup-jetson.sh)
    New nemoclaw setup-jetson command. Added scripts/setup-jetson.sh to detect Jetson, add the non-root user to the docker group, ensure nvidia-container-runtime is configured and set as Docker's default-runtime, load and persist kernel modules, restart Docker with retries, and print next steps.
  • Local inference routing & config (bin/lib/inference-config.js, bin/lib/local-inference.js)
    Exported the host gateway URL and getLocalProviderBaseUrl(...). inference-config now uses provider-specific direct endpoints (OLLAMA_DIRECT_URL / VLLM_DIRECT_URL) for ollama-local/vllm-local instead of the shared INFERENCE_ROUTE_URL.
  • Sandbox network policy (nemoclaw-blueprint/policies/openclaw-sandbox.yaml)
    Added network_policies.ollama_local allowing sandbox access to host.openshell.internal:11434 for /usr/local/bin/openclaw.
  • Tests (test/local-inference.test.js, test/nim.test.js, test/inference-config.test.js)
    Added Jetson model tests and the DEFAULT_OLLAMA_MODEL_JETSON import; nim tests probe the GPU once and conditionally run Jetson/discrete/Apple checks; the ollama-local provider test simplified to assert endpoint and model fields.
  • New file (scripts/setup-jetson.sh)
    New executable script (host-side Jetson prep) added.
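
The detection change in the first cohort keys off the device model string and reports system RAM as GPU memory, since Jetson has no discrete VRAM pool. A minimal sketch of that shape (the field names jetson/nimCapable mirror the walkthrough, but `classifyJetson` and its body are illustrative, not the actual detectGpu() code):

```javascript
// Illustrative sketch: classify a device as a Jetson unified-memory GPU
// from its device-tree model string, reporting system RAM as GPU memory.
// Not the actual bin/lib/nim.js implementation.
function classifyJetson(deviceTreeModel, totalRamMB) {
  if (!/jetson/i.test(deviceTreeModel || "")) return null; // not a Jetson
  return {
    name: deviceTreeModel.trim(),
    jetson: true,
    nimCapable: false,         // unified memory: no NIM-capable discrete GPU
    totalMemoryMB: totalRamMB, // memory comes from system RAM, not VRAM
  };
}

const gpu = classifyJetson("NVIDIA Jetson Orin Nano Super", 7619);
console.log(gpu.jetson, gpu.nimCapable, gpu.totalMemoryMB); // true false 7619
console.log(classifyJetson("NVIDIA GeForce RTX 4090", 65536)); // null
```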

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI as "nemoclaw setup-jetson"
    participant Setup as "scripts/setup-jetson.sh"
    participant Docker as "Docker Daemon"
    User->>CLI: run setup-jetson
    CLI->>Setup: sudo -E bash setup-jetson.sh
    Setup->>Setup: validate OS/arch/root and detect Jetson
    Setup->>Docker: ensure nvidia-container-runtime in daemon.json
    Setup->>Docker: restart docker (systemctl/service)
    Docker-->>Setup: docker recovers
    Setup->>User: print completion & next steps
sequenceDiagram
    participant User
    participant Onboard as "nemoclaw onboard"
    participant Preflight as preflight()
    participant GPUDetect as nim.detectGpu()
    participant ModelSelect as local-inference
    participant GatewayPatch as patchGatewayImageForJetson()
    participant StartGW as startGateway()
    User->>Onboard: run onboard
    Onboard->>GPUDetect: detectGpu()
    GPUDetect-->>Onboard: { jetson: true, nimCapable: false, totalMemoryMB, name }
    Onboard->>Preflight: preflight(gpu)
    Preflight->>ModelSelect: getDefaultOllamaModel(runCapture, gpu)
    ModelSelect-->>Preflight: return nemotron-3-nano:4b (or first available)
    Preflight->>StartGW: startGateway(gpu)
    StartGW->>GatewayPatch: patchGatewayImageForJetson(gpu)
    GatewayPatch-->>StartGW: patched/tagged image
    StartGW->>StartGW: start gateway container
    StartGW-->>Onboard: gateway ready

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐇 I sniffed a Jetson in the night,
Unified memory, cozy and light.
A smaller model snug in my pack,
Gateway patched—no k3s attack,
Hop! The warren's working right.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 40.91%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check: ✅ Passed. The PR title 'feat: add Jetson Orin Nano support' directly and clearly summarizes the main objective of the changeset.
  • Linked Issues check: ✅ Passed. The PR addresses all coding requirements from issue #404: GPU detection fallback for Jetson devices, gateway image patching for iptables-legacy, a Jetson-specific default Ollama model, the local inference routing fix, and network policy updates.
  • Out of Scope Changes check: ✅ Passed. All changes relate directly to issue #404: GPU detection, gateway patching, model selection, inference routing, port conflict handling, and host setup commands. No unrelated changes detected.
  • Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
scripts/setup-jetson.sh (1)

179-194: Consider extending Docker restart timeout.

The restart polling loop allows 20 seconds (10 × 2s) for Docker to recover. On resource-constrained Jetson devices, Docker might take longer to restart, especially if it needs to pull the NVIDIA runtime shim.

Consider extending to 30 seconds for more headroom on slower devices, though 20s is likely sufficient for most cases.

♻️ Optional: Increase restart timeout
-  for i in 1 2 3 4 5 6 7 8 9 10; do
+  for i in $(seq 1 15); do
     if docker info > /dev/null 2>&1; then
       break
     fi
-    [ "$i" -eq 10 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
+    [ "$i" -eq 15 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
     sleep 2
   done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/setup-jetson.sh` around lines 179 - 194, The restart polling
currently loops 10 times with a 2s sleep (≈20s total); increase the timeout to
~30s by changing the loop from 1..10 to 1..15 and update the failure check to
compare against 15 (keep the same docker info check and logging around
NEEDS_RESTART, systemctl restart docker, service docker restart/dockerd). This
ensures the existing restart logic and messages remain but gives Docker more
time on slow Jetson devices.
bin/lib/onboard.js (1)

351-406: Well-designed idempotent image patching.

The implementation is solid:

  • Uses Docker label (io.nemoclaw.jetson-patched) for idempotency check
  • Falls back to symlinks if update-alternatives fails
  • Fails explicitly if iptables-legacy is unavailable (important for debugging)
  • Cleans up temp directory after build

One minor note: os is required locally on line 381. Since os is not imported at the top of this file (only fs and path are among the core-module imports), the local require is necessary; keeping it next to its only usage is fine as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 351 - 406, The local require("os") inside
patchGatewayImageForJetson is redundant only if a top-level const os =
require("os") already exists; check the file imports and if os is already
imported at top remove the local const os = require("os") from
patchGatewayImageForJetson and use the top-level os, otherwise leave the local
require in place to avoid breaking the function.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4b79d258-4025-40eb-a63d-c6475c23cd92

📥 Commits

Reviewing files that changed from the base of the PR and between 3ba517d and e311944.

📒 Files selected for processing (7)
  • bin/lib/local-inference.js
  • bin/lib/nim.js
  • bin/lib/onboard.js
  • bin/nemoclaw.js
  • scripts/setup-jetson.sh
  • test/local-inference.test.js
  • test/nim.test.js

@wscurran wscurran added the enhancement (New feature or request) and Platform: AGX Thor/Orin (Support for Jetson AGX Thor and Orin) labels on Mar 19, 2026
@wscurran
Contributor

I've taken a look at the changes and see that this PR adds support for the Jetson Orin Nano, which includes implementing a fallback for GPU detection and resolving an issue with iptables-legacy that was causing a k3s panic.

@wscurran wscurran added the priority: medium (Issue that should be addressed in upcoming releases) label on Mar 19, 2026
@realkim93
Author

Thanks for reviewing! This was tested on my Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra) — the gateway starts cleanly and the sandbox reaches Ready state with Ollama + nemotron-3-nano:4b. Let me know if there's anything I should adjust or if you'd like me to run additional tests on the device.

@realkim93
Author

Added a commit to fix local Ollama inference routing inside the sandbox.

Problem: OpenShell 0.0.10 does not register inference.local in CoreDNS, so DNS resolution fails inside the sandbox. This is the same issue reported in #314, #385, #417 — OpenClaw inside the sandbox cannot reach the host Ollama.

Fix: For ollama-local provider, route directly through host.openshell.internal:11434 (Docker host-gateway alias) instead of inference.local. Also added ollama_local network policy endpoint to openclaw-sandbox.yaml.

All 35 tests are green: 33 passed, 2 skipped.
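
The routing fix above amounts to a small provider-to-base-URL map: local providers bypass inference.local and hit the Docker host-gateway alias directly. A hedged sketch (getLocalProviderBaseUrl exists per the walkthrough, but this body and the URL constants, including the vLLM port, are assumptions modeled on the PR description, not the actual inference-config.js values):

```javascript
// Illustrative sketch of the routing fix. inference.local is not registered
// in CoreDNS on OpenShell 0.0.10, so local providers route straight to the
// host-gateway alias instead. Constant values here are assumptions.
const HOST_GATEWAY = "host.openshell.internal";
const OLLAMA_DIRECT_URL = `http://${HOST_GATEWAY}:11434/v1`;
const VLLM_DIRECT_URL = `http://${HOST_GATEWAY}:8000/v1`; // port assumed
const INFERENCE_ROUTE_URL = "http://inference.local/v1";

function getLocalProviderBaseUrl(provider) {
  switch (provider) {
    case "ollama-local": return OLLAMA_DIRECT_URL;
    case "vllm-local":   return VLLM_DIRECT_URL;
    default:             return INFERENCE_ROUTE_URL; // cloud routing unchanged
  }
}

console.log(getLocalProviderBaseUrl("ollama-local")); // http://host.openshell.internal:11434/v1
```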

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
test/inference-config.test.js (1)

33-40: Consider: tighten the URL assertion for completeness.

The regex /host\.openshell\.internal:11434/ validates the host and port but doesn't verify the protocol (http://) or path (/v1). A malformed URL like host.openshell.internal:11434 (missing protocol) would still pass.

If full validation is desired, consider a more specific match:

📝 Optional: stricter URL assertion
-    assert.match(cfg.endpointUrl, /host\.openshell\.internal:11434/);
+    assert.equal(cfg.endpointUrl, "http://host.openshell.internal:11434/v1");

Alternatively, restore full deep equality to catch regressions in ncpPartner, profile, or credentialEnv if those properties are expected to remain stable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/inference-config.test.js` around lines 33 - 40, Tighten the URL
assertion in the test that calls getProviderSelectionConfig("ollama-local") by
asserting the full endpoint URL (including protocol and path) rather than just
matching host:port; update the check on cfg.endpointUrl to require
"http://host.openshell.internal:11434/v1" (or the canonical expected URL) and/or
replace the shallow property assertions with a deep equality check of the
returned config object to catch regressions in properties like ncpPartner,
profile, and credentialEnv while still keeping the expected values for
endpointType, model (DEFAULT_OLLAMA_MODEL), provider, and providerLabel.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e2affc8e-3a9b-49c3-8c5e-e87f67d07302

📥 Commits

Reviewing files that changed from the base of the PR and between e311944 and 313c356.

📒 Files selected for processing (3)
  • bin/lib/inference-config.js
  • nemoclaw-blueprint/policies/openclaw-sandbox.yaml
  • test/inference-config.test.js

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
bin/lib/onboard.js (1)

381-404: Clean up the temporary build context in a finally.

docker build throws here on failure, so Line 404 never runs and nemoclaw-jetson-* directories accumulate under the temp dir on repeated failures. A try/finally with fs.rmSync(..., { recursive: true, force: true }) makes the cleanup deterministic and avoids the extra shell-out.

♻️ Suggested change
   const os = require("os");
   const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-jetson-"));
   const dockerfile = path.join(tmpDir, "Dockerfile");
-  fs.writeFileSync(
-    dockerfile,
-    [
-      `FROM ${image}`,
-      `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
-      `      update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
-      `      update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
-      `      :; \\`,
-      `    elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
-      `      ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
-      `      ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
-      `    else \\`,
-      `      echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
-      `    fi`,
-      `LABEL io.nemoclaw.jetson-patched="true"`,
-      "",
-    ].join("\n")
-  );
-
-  run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
-  run(`rm -rf "${tmpDir}"`, { ignoreError: true });
+  try {
+    fs.writeFileSync(
+      dockerfile,
+      [
+        `FROM ${image}`,
+        `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
+        `      update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
+        `      update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
+        `      :; \\`,
+        `    elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
+        `      ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
+        `      ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
+        `    else \\`,
+        `      echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
+        `    fi`,
+        `LABEL io.nemoclaw.jetson-patched="true"`,
+        "",
+      ].join("\n")
+    );
+
+    run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
+  } finally {
+    fs.rmSync(tmpDir, { recursive: true, force: true });
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 381 - 404, The temp build context created
via fs.mkdtempSync (tmpDir) and populated (dockerfile via fs.writeFileSync) can
leak when run(`docker build...`) throws; wrap the build and any operations that
may throw in a try/finally around the run call(s) so cleanup always happens, and
in the finally use fs.rmSync(tmpDir, { recursive: true, force: true }) (instead
of run(`rm -rf ...`)) to deterministically remove the nemoclaw-jetson-*
directory; adjust ordering so tmpDir is removed after the build attempt and keep
existing ignoreError semantics removed in favor of the synchronous rmSync call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bin/lib/onboard.js`:
- Around line 146-148: The sandbox config builder incorrectly maps non-default
local Ollama models to "nvidia-nim" by relying on DEFAULT_OLLAMA_MODEL; update
buildSandboxConfigSyncScript to determine provider type from the selected
provider or endpoint metadata (e.g., inspect the selected provider object or
endpoint.provider/type field returned by promptOllamaModel or
getOllamaModelOptions) instead of comparing model names to DEFAULT_OLLAMA_MODEL,
and apply the same metadata-driven logic to the other mapping block that mirrors
this behavior (the later duplicate mapping). Locate references to
DEFAULT_OLLAMA_MODEL and replace the conditional mapping with logic that reads
provider.type/providerName or endpoint metadata and sets the sandbox provider to
that derived value.
- Around line 279-285: Move the "kill $(lsof -ti :18789 -c openclaw) ..."
cleanup so it runs after the sandbox recreate decision in createSandbox()
instead of before it: currently the port-18789 shutdown is executed
unconditionally near the top of onboard.js (adjacent to the existing run(...)
for gateway destroy), which can stop a valid dashboard forward before
createSandbox() chooses the existing-sandbox path; update the control flow to
keep the existing early cleanup for the nemoclaw gateway but defer the openclaw
port-18789 kill until after createSandbox() determines whether to recreate
(i.e., run the kill only in the branch that actually recreates/restarts the
sandbox/dashboard, not in the pre-check path).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2fc76141-d389-4c2e-88ba-76308543f6a0

📥 Commits

Reviewing files that changed from the base of the PR and between 78d320c and 980e3ab.

📒 Files selected for processing (3)
  • bin/lib/inference-config.js
  • bin/lib/onboard.js
  • nemoclaw-blueprint/policies/openclaw-sandbox.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • nemoclaw-blueprint/policies/openclaw-sandbox.yaml
  • bin/lib/inference-config.js

Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
bin/lib/onboard.js (2)

685-700: ⚠️ Potential issue | 🟠 Major

Jetson default model selection still conflicts with sandbox provider serialization.

Good call passing gpu here, but this now more reliably picks a Jetson-specific Ollama default, which still collides with the model-name heuristic in buildSandboxConfigSyncScript() (local Ollama can be written as nvidia-nim).

Suggested fix at the root cause (provider derivation)
 function buildSandboxConfigSyncScript(selectionConfig) {
-  const providerType =
-    selectionConfig.profile === "inference-local"
-      ? selectionConfig.model === DEFAULT_OLLAMA_MODEL
-        ? "ollama-local"
-        : "nvidia-nim"
-      : selectionConfig.endpointType === "vllm"
-        ? "vllm-local"
-        : "nvidia-nim";
+  const localKind = String(
+    selectionConfig.provider || selectionConfig.endpointType || ""
+  ).toLowerCase();
+  const providerType =
+    selectionConfig.profile === "inference-local"
+      ? localKind.includes("ollama")
+        ? "ollama-local"
+        : localKind.includes("vllm")
+          ? "vllm-local"
+          : "nvidia-nim"
+      : "nvidia-nim";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 685 - 700, The chosen Jetson-specific
default model (via getDefaultOllamaModel / promptOllamaModel) can still conflict
with the provider-detection heuristic in buildSandboxConfigSyncScript; fix this
by normalizing model names or deriving provider from the explicit provider
variable instead of the model string: update buildSandboxConfigSyncScript to use
the provider variable (provider === "ollama-local") or add a normalization
helper (e.g., normalizeOllamaModelForSerialization) that maps Jetson-specific
names like "nvidia-nim" to the canonical form used by sandbox serialization, and
ensure getDefaultOllamaModel and promptOllamaModel return/consult that
normalized value so provider serialization remains consistent with the selected
model.

279-285: ⚠️ Potential issue | 🟠 Major

Defer port 18789 process cleanup until recreate/create is confirmed.

Line 284 still unconditionally kills the dashboard-forward process before createSandbox() decides whether to recreate. If the user keeps an existing sandbox, onboarding can exit with the dashboard no longer forwarded.

Proposed control-flow fix
-  run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
   sleep(2);
 async function createSandbox(gpu) {
   step(3, 7, "Creating sandbox");
@@
   if (existing) {
@@
       if (recreate.toLowerCase() !== "y") {
         console.log("  Keeping existing sandbox.");
         return sandboxName;
       }
@@
   }
+
+  // Only clean stale dashboard forwards/processes when we are actually creating/recreating.
+  run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 279 - 285, The unconditional kill of the
dashboard-forward process (the run invocation that executes "kill $(lsof -ti
:18789 -c openclaw) ...") should be deferred until after createSandbox()
determines a recreate/create is required; change control flow so createSandbox()
(or the caller that decides recreate) runs first and only when it indicates a
new sandbox will be created/recreated do we execute the run("kill $(lsof -ti
:18789 -c openclaw) 2>/dev/null || true", { ignoreError: true }) cleanup (and
keep sleep(2) paired with that cleanup if still needed).
🧹 Nitpick comments (1)
bin/lib/onboard.js (1)

812-824: Remove unreachable duplicate config-write block.

buildSandboxConfigSyncScript() already writes ~/.nemoclaw/config.json and exits; the appended nemoClawConfigScript after ${script} never executes.

Cleanup diff
-    // Also write ~/.nemoclaw/config.json inside the sandbox so the NemoClaw
-    // plugin displays the correct endpoint/model in its banner instead of
-    // falling back to the cloud defaults.
-    const nemoClawConfigScript = `
-mkdir -p ~/.nemoclaw
-cat > ~/.nemoclaw/config.json <<'EOF_NEMOCLAW_CFG'
-${JSON.stringify(sandboxConfig, null, 2)}
-EOF_NEMOCLAW_CFG
-`;
     run(`cat <<'EOF_NEMOCLAW_SYNC' | openshell sandbox connect "${sandboxName}"
 ${script}
-${nemoClawConfigScript}
 exit
 EOF_NEMOCLAW_SYNC`, { stdio: ["ignore", "ignore", "inherit"] });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bin/lib/onboard.js` around lines 812 - 824, The appended nemoClawConfigScript
block is redundant because buildSandboxConfigSyncScript() already writes
~/.nemoclaw/config.json and exits, so remove the duplicate: delete the
nemoClawConfigScript constant and stop injecting ${nemoClawConfigScript} into
the run(...) payload (the run call that uses openshell sandbox connect
"${sandboxName}"), leaving only the previously built ${script} so that the
remote script exits as intended; reference buildSandboxConfigSyncScript(),
nemoClawConfigScript, and the run(...) invocation to locate the code to change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f6b98de2-873f-4e85-a20b-901f36fef869

📥 Commits

Reviewing files that changed from the base of the PR and between 980e3ab and dc3c12c.

📒 Files selected for processing (1)
  • bin/lib/onboard.js

@ericksoa
Contributor

Hey @realkim93 — we independently hit the same issue (#539) and arrived at the same iptables-legacy image-patching approach, so we're confident the core fix here is sound.

We did a full review of the diff and everything looks clean. One thing we noticed (also flagged by CodeRabbit): the port-18789 cleanup in preflight() runs before createSandbox() decides whether to recreate, which could leave a healthy sandbox without its dashboard forward on a no-op rerun. Could you take a look at that timing issue?

Once that's addressed we're happy to approve from our side. Nice work getting this tested on real hardware.

@realkim93
Author

realkim93 commented Mar 23, 2026

Thanks @ericksoa and @wscurran for the thoughtful review. Really appreciate you taking the time to go through the diff carefully.

@ericksoa Thank you for raising the port-18789 timing issue. I've pushed a fix that defers the dashboard-forward cleanup into createSandbox(), so it only runs when we're actually recreating or creating a new sandbox. The port-18789 availability check in preflight is also removed since that port is now fully managed inside createSandbox(). Also addressed the provider-type mapping that CodeRabbit flagged: it now derives from provider metadata instead of comparing model names, which should handle the Jetson default model correctly.

Other cleanup in this commit:

  • patchGatewayImageForJetson tmpDir wrapped in try/finally
  • Removed unreachable duplicate config-write block in setupOpenclaw
  • Extended Docker restart timeout in setup-jetson.sh for slower Jetson devices

Let me know if there's anything else that needs attention!

@ACCGAGTT

ACCGAGTT commented Mar 25, 2026

FYI: none of these work currently with nemoclaw 0.0.1.0 and openshell 0.0.0.16; it keeps pulling the wrong images. Can someone make sure the containers are tagged correctly, if these fixes were merged and pushed?

I've been banging my head against this on my Jetson Orin Nano Super, the same system as realkim93's.

@realkim93
Author

@ACCGAGTT Thanks for reporting this — just to clarify, this PR has not been merged yet, so the fixes here are not included in the current NemoClaw 0.0.1.0 / OpenShell 0.0.0.16 release. That's why you're seeing wrong images being pulled on your Jetson Orin Nano Super.

I'm working on rebasing this branch against the latest main (there have been several recent merges that introduced conflicts). Once this PR is merged and a new release is cut, the Jetson support should be available.

I'll update this thread once the rebase is done — happy to have another Jetson Orin Nano Super user to help verify once it lands!

@realkim93
Author

Hi @ericksoa @kjw3 @cv — gentle ping on this PR. I've addressed all the review feedback from March 21st:

  • ✅ Port-18789 cleanup deferred into createSandbox() (timing issue fixed)
  • ✅ Provider-type mapping now derives from metadata, not model name comparison
  • ✅ tmpDir safety, duplicate code removal, Docker timeout for Jetson

The branch currently has merge conflicts against main — mostly from #1037 and other recent merges to onboard.js, nim.js, and nemoclaw.js. I'll rebase and resolve these shortly.

There's also a Jetson user (@ACCGAGTT) waiting on this — would appreciate a re-review once the rebase is up. Thank you!

@ACCGAGTT

> This PR has not been merged yet, so the fixes here are not included in the current NemoClaw 0.0.1.0 / OpenShell 0.0.0.16 release. Once this PR is merged and a new release is cut, the Jetson support should be available.

Thank you, appreciate the update.

@realkim93 realkim93 force-pushed the feat/jetson-orin-nano-support branch 2 times, most recently from 77f2d6f to c4d41dd Compare March 30, 2026 00:15
@cv
Contributor

cv commented Mar 30, 2026

Thanks for pushing this — the Jetson-specific pieces look valuable, especially the unified-memory GPU detection, the smaller Ollama default, and the dedicated setup-jetson flow.

That said, I’m not comfortable approving this as-is because I think it introduces a potentially serious onboarding regression in bin/lib/onboard.js.

Main blocker: onboard becomes destructive on reruns

In preflight(), this PR now unconditionally destroys the nemoclaw gateway before port checks:

run("openshell gateway destroy -g nemoclaw 2>/dev/null || true", { ignoreError: true });

My concern is that this defeats the current “safe rerun / reuse healthy state” behavior. Today, onboarding tries to distinguish between:

  • stale/broken NemoClaw state that should be cleaned up, vs
  • a healthy named gateway/sandbox that should be reused

With this change, even a healthy existing NemoClaw gateway gets torn down before we’ve decided whether we actually need to recreate anything.

Why that feels dangerous

This changes nemoclaw onboard from a mostly idempotent re-entry path into something that can be unexpectedly destructive. Practical risks:

  • rerunning onboarding can tear down a healthy existing session
  • live gateway state may be lost even when the user just wanted to re-run or inspect setup
  • healthy sandboxes that could have been reused now get forced through a more disruptive path

Given how sensitive onboarding is, that feels like too much blast radius for a Jetson support PR.

Secondary concern: port 18789 handling

I understand the motivation for removing 18789 from preflight to avoid the re-run regression, but I think the current version swings too far the other way.

Right now the PR:

  • removes 18789 from required port checks in preflight()
  • defers cleanup until createSandbox()

That avoids breaking a healthy dashboard forward on no-op reruns, which is good. But it also means a real conflict on 18789 from some unrelated process may no longer be surfaced early and clearly. Since later forward setup is fairly tolerant, onboarding can appear to succeed while the dashboard is actually broken.

So I think this change trades a visible failure for a potentially silent one.

Testing concern

The new tests around:

  • Jetson GPU detection
  • Jetson Ollama default model selection

look good.

But I don’t see coverage for the riskiest part of this PR, which is the onboarding lifecycle behavior:

  • healthy gateway reuse
  • rerun idempotency
  • port-conflict behavior for the dashboard forward

Those are exactly the areas where I’m most worried about regressions.

What I’d suggest

I’d be happy with this direction if we tighten the lifecycle changes:

  1. Do not unconditionally destroy the named NemoClaw gateway in preflight()

    • only clean up stale / unnamed / broken state
    • preserve healthy gateway reuse
  2. Keep explicit 18789 conflict handling

    • but special-case the healthy NemoClaw dashboard forward
    • don’t fully drop the preflight visibility for that port
  3. Optionally, consider splitting this into:

    • Jetson support itself (GPU detection, model selection, setup script)
    • onboarding lifecycle changes

The Jetson-specific work looks useful, but the onboarding behavior change is a blocker for me right now.
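
Suggestion 2 above (keep the 18789 check, but special-case the healthy dashboard forward) could be implemented roughly as follows. All names here are illustrative, not the actual onboard.js API; the point is the decision shape, not the exact signature.

```javascript
// Hypothetical preflight helper: port 18789 in use is only acceptable when
// it is held by the healthy NemoClaw dashboard forward itself.
function checkDashboardPort({ portInUse, ownerIsNemoClawForward, gatewayHealthy }) {
  if (!portInUse) return { ok: true, reason: 'free' };
  if (ownerIsNemoClawForward && gatewayHealthy) {
    return { ok: true, reason: 'healthy-forward' };   // safe rerun: reuse it
  }
  return { ok: false, reason: 'foreign-conflict' };   // surface early and loudly
}

console.log(checkDashboardPort({
  portInUse: true, ownerIsNemoClawForward: true, gatewayHealthy: true,
}).reason);   // healthy-forward
console.log(checkDashboardPort({
  portInUse: true, ownerIsNemoClawForward: false, gatewayHealthy: false,
}).reason);   // foreign-conflict
```

This preserves preflight visibility for real conflicts from unrelated processes while keeping idempotent reruns non-fatal.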

@realkim93
Author

realkim93 commented Mar 30, 2026

Thank you @cv for taking the time to review this so carefully. Your feedback is very helpful — you are right on all three points, and I appreciate you explaining the reasoning behind the existing design.

I clearly did not study the preflight() and getGatewayReuseState() flow thoroughly enough before making changes to it. That was my mistake, and I have learned a lot from your review about how the onboarding lifecycle is designed to work.

I have pushed a fix that addresses the issues you raised:

1. Removed unconditional gateway destroy in preflight()

Reverted to the existing getGatewayReuseState() cleanup flow, which correctly distinguishes stale/unnamed state from healthy gateways. My previous change bypassed that logic entirely, which could tear down a running session on rerun.

2. Restored port 18789 to requiredPorts

The healthy-gateway skip logic was already handling the re-run case correctly. Removing 18789 from the check just masked real conflicts from unrelated processes, as you pointed out.

3. Fixed provider mapping for local providers

Added explicit ollama-local and vllm-local cases to getSandboxInferenceConfig() so the Jetson default model gets the correct direct endpoint URL instead of falling through to the default path. Added tests for both.

I also want to be transparent: my earlier commit (9601a71) claimed in its message to fix this provider mapping, but the change was not actually included in the diff. I apologize for the confusion; it is now properly addressed in this commit.

Scope of this fix

  • createSandbox() port 18789 cleanup remains in the recreate-only path (after the early return for ready sandboxes).
  • All Jetson-specific code (nim.js, local-inference.js, setup-jetson.sh, gateway image patch) is unchanged.

All 82 relevant tests pass (onboard + nim + local-inference + inference-config). Rebase against latest main is also done — no conflicts.

Thank you again for the guidance. Please let me know if there is anything else I should address.

@realkim93 realkim93 force-pushed the feat/jetson-orin-nano-support branch from 79af0ff to 33f0ead Compare March 30, 2026 02:57
@realkim93
Author

Rebase against latest main is done — no conflicts. All CI checks should re-run shortly.

realkim93 and others added 7 commits March 30, 2026 18:51
Add GPU detection, iptables-legacy fix, and nemotron-3-nano:4b default
for Jetson Orin Nano Super (8GB, JetPack 6.x).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…htening

- Guard runCapture().trim() against null in patchGatewayImageForJetson
- Apply same inference.local bypass to vllm-local (same DNS bug affects
  both local providers, not just Ollama)
- Use getLocalProviderBaseUrl() as single source of truth for direct URLs
- Add TODO to remove direct URLs when OpenShell fixes inference.local
- Remove overly broad /usr/local/bin/node from ollama_local network
  policy (openclaw binary alone is sufficient)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… and cleanup safety

- Defer port-18789 kill to createSandbox() after recreate decision so
  no-op reruns don't break a healthy dashboard forward
- Derive provider type from selectionConfig.provider metadata instead of
  comparing model names to DEFAULT_OLLAMA_MODEL (fixes Jetson misclassification)
- Wrap patchGatewayImageForJetson tmpDir in try/finally with fs.rmSync
- Remove unreachable duplicate nemoClawConfigScript in setupOpenclaw
- Extend Docker restart timeout to 30s for slower Jetson devices

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous commit deferred the port-18789 kill to createSandbox(), but
left the port availability check in preflight. This caused a hard exit
when re-running onboard with an existing dashboard forward still active.
Port 18789 is now fully managed inside createSandbox().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- inference-config.test: use getLocalProviderBaseUrl() for ollama-local
  endpoint URL (host-gateway bypass for OpenShell 0.0.10 DNS issue)
- local-inference.test: convert assert → expect (vitest) for jetson tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- inference-config.test.js: use INFERENCE_ROUTE_URL for ollama-local
  (PR NVIDIA#1037 fixed inference.local routing, host-gateway bypass removed)
- local-inference.test.js: getOllamaModelOptions no longer takes gpu
  param; Jetson fallback moved to getBootstrapOllamaModelOptions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extract detectJetson() and getUnifiedMemoryMB() helper functions to
bring detectGpu() cyclomatic complexity under the lint threshold (20).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
realkim93 and others added 2 commits March 30, 2026 18:51
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Address review feedback from cv and CodeRabbit:

1. Remove unconditional gateway destroy in preflight() — the existing
   getGatewayReuseState() logic already handles stale/unnamed cleanup
   while preserving healthy gateways. The unconditional destroy broke
   safe re-run behavior and could tear down a running session.

2. Restore port 18789 (dashboard) to requiredPorts — the existing
   healthy-gateway skip logic already handles the re-run case correctly.
   Removing it entirely masked conflicts from unrelated processes.

3. Add ollama-local and vllm-local cases to getSandboxInferenceConfig()
   so that Jetson's default model (nemotron-3-nano:4b) gets the correct
   direct endpoint URL instead of falling through to the nvidia-nim
   default path.

4. Add tests for ollama-local and vllm-local sandbox inference config
   to prevent future regressions in provider mapping.
@realkim93 realkim93 force-pushed the feat/jetson-orin-nano-support branch from 33f0ead to 9d933d6 Compare March 30, 2026 09:51
rwipfelnv and others added 4 commits March 31, 2026 01:10
* feat: add Kubernetes testing infrastructure

Add k8s-testing/ directory with scripts and manifests for testing NemoClaw
on Kubernetes with Dynamo vLLM inference.

Includes:
- test-installer.sh: Public installer test (requires unattended install support)
- setup.sh: Manual setup from source for development
- Pod manifests for Docker-in-Docker execution

Architecture: OpenShell runs k3s inside Docker, so we use DinD pods
to provide the Docker daemon on Kubernetes.

Signed-off-by: rwipfelnv

* fix: add socat proxy for K8s DNS isolation workaround

OpenShell's nested k3s cluster cannot resolve Kubernetes DNS names,
so inference requests fail with 502 Bad Gateway. This adds:

- socat TCP proxy setup in setup.sh to forward localhost:8000 to the
  K8s vLLM service endpoint
- Provider configuration using host.openshell.internal:8000 which
  resolves to the workspace container from inside k3s
- Documentation explaining the network architecture and workaround
- Updated env var names to match PR NVIDIA#318 (NEMOCLAW_NON_INTERACTIVE)
- cgroup v2 compatibility fix for Docker daemon
- Removed memory limits that caused OOM

Tested: Inference requests from sandboxes now route correctly through
the socat proxy to the Dynamo vLLM endpoint.

Depends on: NVIDIA#318 (non-interactive mode), NVIDIA#365 (Dynamo provider)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat: NemoKlaw - NemoClaw on Kubernetes with Dynamo support

Complete K8s deployment solution for NemoClaw:
- nemoklaw.yaml: Pod manifest with DinD, init containers, hostPath storage
- install.sh: Interactive installer with preflight checks
- Rename k8s-testing -> k8s, move old files to dev/

Key learnings:
- hostPath storage (/mnt/k8s-disks) avoids ephemeral storage eviction
- Init containers for docker config, openshell CLI, NemoClaw build
- Workspace container installs apt packages at runtime (can't share via volumes)
- socat proxy bridges K8s DNS to nested k3s (host.openshell.internal)

Tested successfully with Dynamo vLLM backend on EKS.

Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>

* fix: rename NemoKlaw to NemoClaw and document known limitations

Address PR feedback:
- Rename NemoKlaw -> NemoClaw (avoid confusing naming)
- Rename nemoklaw.yaml -> nemoclaw-k8s.yaml
- Fix hardcoded endpoint to use generic example
- Remove log file from repo
- Document known limitations (HTTPS proxy issue)
- Update README with accurate status of what works/doesn't work

Signed-off-by: rwipfelnv
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: update DYNAMO_HOST to vllm-agg-frontend

The aggregated frontend service is the correct endpoint for
Dynamo vLLM inference.

Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>

* docs: add Using NemoClaw section with CLI commands

- Add workspace shell access command
- Add sandbox status/logs/list commands
- Add chat completion test example
- Rename section from "What Can You Do?" to "Using NemoClaw"

Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>

* refactor(k8s): simplify deployment to use official installer

- Use official NemoClaw installer (`curl | bash`) instead of git clone/build
- Switch to `custom` provider from PR NVIDIA#648 (supersedes dynamo-specific provider)
- Remove k8s/dev/ directory (no longer needed for testing)
- Use emptyDir volumes for portability across clusters
- Add /etc/hosts workaround for endpoint validation during onboarding
- Update README with verification steps for local inference

Tested end-to-end with Dynamo vLLM backend.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(k8s): resolve lint errors in yaml and markdown

- Remove multi-document YAML (move namespace creation to README)
- Add language specifier to fenced code block (```text)
- Add blank lines before lists per markdownlint rules

Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* docs(k8s): add experimental warning and clarify requirements

- Add explicit experimental warning at top of README
- Clarify this is for trying NemoClaw on k8s, not production
- Document privileged pod and DinD requirements upfront
- Add resource requirements to prerequisites

Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

---------

Signed-off-by: rwipfelnv
Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Carlos Villela <cvillela@nvidia.com>
Co-authored-by: KJ <kejones@nvidia.com>
…version check

- Add ollama-local/vllm-local cases to getSandboxInferenceConfig() so
  local providers route through host.openshell.internal instead of
  falling through to the default inference.local path.
- Add onboarding lifecycle tests covering healthy gateway preservation
  on rerun, stale vs foreign gateway cleanup, and local provider routing.
- Add Node.js >= 22.16.0 version check to setup-jetson.sh since JetPack
  does not ship Node.js by default.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@realkim93
Author

realkim93 commented Mar 30, 2026

Latest changes (1f59809)

1. Onboarding lifecycle tests

  • Healthy gateway preservation on idempotent reruns
  • Stale gateway cleanup vs foreign gateway conflict surfacing
  • ollama-local / vllm-local provider routing through host.openshell.internal

2. Local provider sandbox config fix
Added ollama-local and vllm-local cases to getSandboxInferenceConfig() — local providers now get direct endpoint URLs instead of falling through to the default inference.local path.

3. Node.js version check in setup-jetson.sh
JetPack doesn't ship Node.js. Added >=22.16.0 check (matching package.json engines) with clear error messaging.
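
The actual check lives in setup-jetson.sh; the comparison it performs is the same as the semver-style logic below, expressed in JavaScript purely for illustration (the helper name is made up).

```javascript
// Compare a Node version string against the required minimum (default
// 22.16.0, matching the package.json engines field mentioned above).
function meetsMinNode(version, min = '22.16.0') {
  const parse = (v) => v.replace(/^v/, '').split('.').map(Number);
  const a = parse(version);
  const b = parse(min);
  for (let i = 0; i < 3; i++) {
    if ((a[i] || 0) > b[i]) return true;   // strictly newer at this segment
    if ((a[i] || 0) < b[i]) return false;  // strictly older at this segment
  }
  return true;                             // exactly the minimum
}

console.log(meetsMinNode('v22.16.0'));  // true
console.log(meetsMinNode('v20.19.2'));  // false — below the minimum
console.log(meetsMinNode('v23.1.0'));   // true
```

Comparing segment by segment (major, then minor, then patch) avoids the classic string-comparison bug where "9" sorts after "22".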

Verified on actual Jetson hardware

All changes tested on Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra):

$ npx vitest run test/onboard.test.js test/nim.test.js test/local-inference.test.js

 ✓ test/local-inference.test.js (23 tests)  27ms
 ✓ test/nim.test.js (10 tests | 2 skipped)  250ms
 ✓ test/onboard.test.js (38 tests)  5186ms

 Test Files  3 passed (3)
      Tests  69 passed | 2 skipped (71)

$ npx vitest run  # full suite
 Test Files  37 passed | 1 skipped (39)
      Tests  622 passed | 4 skipped (628)

The 2 install-preflight failures in the full suite are pre-existing and unrelated to this PR (npm path detection on the Jetson's Node.js installation).

@realkim93 realkim93 force-pushed the feat/jetson-orin-nano-support branch from 5e0e386 to 61f7bfa Compare March 30, 2026 23:44
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@realkim93 realkim93 force-pushed the feat/jetson-orin-nano-support branch from 61f7bfa to 4934467 Compare March 31, 2026 00:10

Labels

enhancement New feature or request Platform: AGX Thor/Orin Support for Jetson AGX Thor and Orin priority: medium Issue that should be addressed in upcoming releases

Projects

None yet

6 participants