Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Detect Jetson unified-memory GPUs, prefer a smaller Jetson-specific Ollama model, patch the OpenShell gateway image for iptables-legacy on Jetson, add a host setup script and CLI command for Jetson preparation, and update local Ollama routing and tests for Jetson behavior.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as "nemoclaw setup-jetson"
    participant Setup as "scripts/setup-jetson.sh"
    participant Docker as "Docker Daemon"
    User->>CLI: run setup-jetson
    CLI->>Setup: sudo -E bash setup-jetson.sh
    Setup->>Setup: validate OS/arch/root and detect Jetson
    Setup->>Docker: ensure nvidia-container-runtime in daemon.json
    Setup->>Docker: restart docker (systemctl/service)
    Docker-->>Setup: docker recovers
    Setup->>User: print completion & next steps
```

```mermaid
sequenceDiagram
    participant User
    participant Onboard as "nemoclaw onboard"
    participant Preflight as "preflight()"
    participant GPUDetect as "nim.detectGpu()"
    participant ModelSelect as "local-inference"
    participant GatewayPatch as "patchGatewayImageForJetson()"
    participant StartGW as "startGateway()"
    User->>Onboard: run onboard
    Onboard->>GPUDetect: detectGpu()
    GPUDetect-->>Onboard: { jetson: true, nimCapable: false, totalMemoryMB, name }
    Onboard->>Preflight: preflight(gpu)
    Preflight->>ModelSelect: getDefaultOllamaModel(runCapture, gpu)
    ModelSelect-->>Preflight: return nemotron-3-nano:4b (or first available)
    Preflight->>StartGW: startGateway(gpu)
    StartGW->>GatewayPatch: patchGatewayImageForJetson(gpu)
    GatewayPatch-->>StartGW: patched/tagged image
    StartGW->>StartGW: start gateway container
    StartGW-->>Onboard: gateway ready
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (2)
scripts/setup-jetson.sh (1)
179-194: Consider extending the Docker restart timeout. The restart polling loop allows 20 seconds (10 × 2s) for Docker to recover. On resource-constrained Jetson devices, Docker might take longer to restart, especially if it needs to pull the NVIDIA runtime shim.
Consider extending to 30 seconds for more headroom on slower devices, though 20s is likely sufficient for most cases.
♻️ Optional: Increase restart timeout
```diff
-for i in 1 2 3 4 5 6 7 8 9 10; do
+for i in $(seq 1 15); do
   if docker info > /dev/null 2>&1; then
     break
   fi
-  [ "$i" -eq 10 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
+  [ "$i" -eq 15 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
   sleep 2
 done
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/setup-jetson.sh` around lines 179 - 194, The restart polling currently loops 10 times with a 2s sleep (≈20s total); increase the timeout to ~30s by changing the loop from 1..10 to 1..15 and update the failure check to compare against 15 (keep the same docker info check and logging around NEEDS_RESTART, systemctl restart docker, service docker restart/dockerd). This ensures the existing restart logic and messages remain but gives Docker more time on slow Jetson devices.

bin/lib/onboard.js (1)
351-406: Well-designed idempotent image patching. The implementation is solid:

- Uses Docker label (io.nemoclaw.jetson-patched) for idempotency check
- Falls back to symlinks if update-alternatives fails
- Fails explicitly if iptables-legacy is unavailable (important for debugging)
- Cleans up temp directory after build
One minor note:
os is required locally on line 381, but this is fine since it's a core module and keeps the import close to usage. However, os is already imported at line 8 of this file, so this local require is redundant.

♻️ Optional: Remove redundant local require

```diff
-const os = require("os");
-const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-jetson-"));
+const tmpDir = fs.mkdtempSync(path.join(require("os").tmpdir(), "nemoclaw-jetson-"));
```

Or simply use the already-imported os from line 8. But since os is not currently imported at the top of this file (only fs and path from core modules), the local require is actually necessary. Disregard this suggestion.

Actually, looking more carefully at the file: line 8 shows const fs = require("fs"); and line 9 shows const path = require("path");, but os is not imported at the top. So the local require is necessary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 351 - 406, The local require("os") inside patchGatewayImageForJetson is redundant only if a top-level const os = require("os") already exists; check the file imports and if os is already imported at top remove the local const os = require("os") from patchGatewayImageForJetson and use the top-level os, otherwise leave the local require in place to avoid breaking the function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@bin/lib/onboard.js`:
- Around line 351-406: The local require("os") inside patchGatewayImageForJetson
is redundant only if a top-level const os = require("os") already exists; check
the file imports and if os is already imported at top remove the local const os
= require("os") from patchGatewayImageForJetson and use the top-level os,
otherwise leave the local require in place to avoid breaking the function.
In `@scripts/setup-jetson.sh`:
- Around line 179-194: The restart polling currently loops 10 times with a 2s
sleep (≈20s total); increase the timeout to ~30s by changing the loop from 1..10
to 1..15 and update the failure check to compare against 15 (keep the same
docker info check and logging around NEEDS_RESTART, systemctl restart docker,
service docker restart/dockerd). This ensures the existing restart logic and
messages remain but gives Docker more time on slow Jetson devices.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4b79d258-4025-40eb-a63d-c6475c23cd92
📒 Files selected for processing (7)
- bin/lib/local-inference.js
- bin/lib/nim.js
- bin/lib/onboard.js
- bin/nemoclaw.js
- scripts/setup-jetson.sh
- test/local-inference.test.js
- test/nim.test.js
I've taken a look at the changes and see that this PR adds support for the Jetson Orin Nano, which includes implementing a fallback for GPU detection and resolving an issue with iptables-legacy that was causing a k3s panic.
Thanks for reviewing! This was tested on my Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra) — the gateway starts cleanly and the sandbox reaches Ready state with Ollama + nemotron-3-nano:4b. Let me know if there's anything I should adjust or if you'd like me to run additional tests on the device.
Added a commit to fix local Ollama inference routing inside the sandbox.

Problem: OpenShell 0.0.10 does not register …

Fix: For …

All 35 tests are green (33 passed, 2 skipped).
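For reference, the direct-routing shape this and later commits converge on can be sketched as follows. This is a hypothetical condensed sketch, not the actual implementation: the helper name getLocalProviderBaseUrl() comes from a later commit message, and the port map and return shape are assumptions.

```javascript
// Sketch only: map local providers to direct host-gateway URLs instead of the
// inference.local proxy path. Port numbers and field names are assumptions.
const LOCAL_PROVIDER_PORTS = {
  "ollama-local": 11434, // Ollama's default listen port
  "vllm-local": 8000,    // common `vllm serve` port (assumed)
};

function getLocalProviderBaseUrl(provider) {
  const port = LOCAL_PROVIDER_PORTS[provider];
  if (!port) return null; // non-local providers keep the default route
  // host.openshell.internal resolves to the host from inside the sandbox
  return `http://host.openshell.internal:${port}/v1`;
}

console.log(getLocalProviderBaseUrl("ollama-local"));
// http://host.openshell.internal:11434/v1
```

A mapping like this keeps the URL construction in one place, so the bypass can be deleted in a single spot once inference.local routing is fixed upstream.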
🧹 Nitpick comments (1)
test/inference-config.test.js (1)
33-40: Consider: tighten the URL assertion for completeness. The regex /host\.openshell\.internal:11434/ validates the host and port but doesn't verify the protocol (http://) or path (/v1). A malformed URL like host.openshell.internal:11434 (missing protocol) would still pass. If full validation is desired, consider a more specific match:
📝 Optional: stricter URL assertion
```diff
-assert.match(cfg.endpointUrl, /host\.openshell\.internal:11434/);
+assert.equal(cfg.endpointUrl, "http://host.openshell.internal:11434/v1");
```

Alternatively, restore full deep equality to catch regressions in ncpPartner, profile, or credentialEnv if those properties are expected to remain stable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/inference-config.test.js` around lines 33 - 40, Tighten the URL assertion in the test that calls getProviderSelectionConfig("ollama-local") by asserting the full endpoint URL (including protocol and path) rather than just matching host:port; update the check on cfg.endpointUrl to require "http://host.openshell.internal:11434/v1" (or the canonical expected URL) and/or replace the shallow property assertions with a deep equality check of the returned config object to catch regressions in properties like ncpPartner, profile, and credentialEnv while still keeping the expected values for endpointType, model (DEFAULT_OLLAMA_MODEL), provider, and providerLabel.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@test/inference-config.test.js`:
- Around line 33-40: Tighten the URL assertion in the test that calls
getProviderSelectionConfig("ollama-local") by asserting the full endpoint URL
(including protocol and path) rather than just matching host:port; update the
check on cfg.endpointUrl to require "http://host.openshell.internal:11434/v1"
(or the canonical expected URL) and/or replace the shallow property assertions
with a deep equality check of the returned config object to catch regressions in
properties like ncpPartner, profile, and credentialEnv while still keeping the
expected values for endpointType, model (DEFAULT_OLLAMA_MODEL), provider, and
providerLabel.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e2affc8e-3a9b-49c3-8c5e-e87f67d07302
📒 Files selected for processing (3)
- bin/lib/inference-config.js
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml
- test/inference-config.test.js
Actionable comments posted: 2
🧹 Nitpick comments (1)
bin/lib/onboard.js (1)
381-404: Clean up the temporary build context in a finally.

docker build throws here on failure, so line 404 never runs and nemoclaw-jetson-* directories accumulate under the temp dir on repeated failures. A try/finally with fs.rmSync(..., { recursive: true, force: true }) makes the cleanup deterministic and avoids the extra shell-out.

♻️ Suggested change
```diff
 const os = require("os");
 const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "nemoclaw-jetson-"));
 const dockerfile = path.join(tmpDir, "Dockerfile");
-fs.writeFileSync(
-  dockerfile,
-  [
-    `FROM ${image}`,
-    `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
-    `  update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
-    `  update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
-    `  :; \\`,
-    `  elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
-    `  ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
-    `  ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
-    `  else \\`,
-    `  echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
-    `  fi`,
-    `LABEL io.nemoclaw.jetson-patched="true"`,
-    "",
-  ].join("\n")
-);
-
-run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
-run(`rm -rf "${tmpDir}"`, { ignoreError: true });
+try {
+  fs.writeFileSync(
+    dockerfile,
+    [
+      `FROM ${image}`,
+      `RUN if command -v update-alternatives >/dev/null 2>&1 && \\`,
+      `  update-alternatives --set iptables /usr/sbin/iptables-legacy 2>/dev/null && \\`,
+      `  update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 2>/dev/null; then \\`,
+      `  :; \\`,
+      `  elif [ -f /usr/sbin/iptables-legacy ] && [ -f /usr/sbin/ip6tables-legacy ]; then \\`,
+      `  ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables; \\`,
+      `  ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables; \\`,
+      `  else \\`,
+      `  echo "iptables-legacy not available in base image" >&2; exit 1; \\`,
+      `  fi`,
+      `LABEL io.nemoclaw.jetson-patched="true"`,
+      "",
+    ].join("\n")
+  );
+
+  run(`docker build --quiet -t "${image}" "${tmpDir}"`, { ignoreError: false });
+} finally {
+  fs.rmSync(tmpDir, { recursive: true, force: true });
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 381 - 404, The temp build context created via fs.mkdtempSync (tmpDir) and populated (dockerfile via fs.writeFileSync) can leak when run(`docker build...`) throws; wrap the build and any operations that may throw in a try/finally around the run call(s) so cleanup always happens, and in the finally use fs.rmSync(tmpDir, { recursive: true, force: true }) (instead of run(`rm -rf ...`)) to deterministically remove the nemoclaw-jetson-* directory; adjust ordering so tmpDir is removed after the build attempt and keep existing ignoreError semantics removed in favor of the synchronous rmSync call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/onboard.js`:
- Around line 146-148: The sandbox config builder incorrectly maps non-default
local Ollama models to "nvidia-nim" by relying on DEFAULT_OLLAMA_MODEL; update
buildSandboxConfigSyncScript to determine provider type from the selected
provider or endpoint metadata (e.g., inspect the selected provider object or
endpoint.provider/type field returned by promptOllamaModel or
getOllamaModelOptions) instead of comparing model names to DEFAULT_OLLAMA_MODEL,
and apply the same metadata-driven logic to the other mapping block that mirrors
this behavior (the later duplicate mapping). Locate references to
DEFAULT_OLLAMA_MODEL and replace the conditional mapping with logic that reads
provider.type/providerName or endpoint metadata and sets the sandbox provider to
that derived value.
- Around line 279-285: Move the "kill $(lsof -ti :18789 -c openclaw) ..."
cleanup so it runs after the sandbox recreate decision in createSandbox()
instead of before it: currently the port-18789 shutdown is executed
unconditionally near the top of onboard.js (adjacent to the existing run(...)
for gateway destroy), which can stop a valid dashboard forward before
createSandbox() chooses the existing-sandbox path; update the control flow to
keep the existing early cleanup for the nemoclaw gateway but defer the openclaw
port-18789 kill until after createSandbox() determines whether to recreate
(i.e., run the kill only in the branch that actually recreates/restarts the
sandbox/dashboard, not in the pre-check path).
---
Nitpick comments:
In `@bin/lib/onboard.js`:
- Around line 381-404: The temp build context created via fs.mkdtempSync
(tmpDir) and populated (dockerfile via fs.writeFileSync) can leak when
run(`docker build...`) throws; wrap the build and any operations that may throw
in a try/finally around the run call(s) so cleanup always happens, and in the
finally use fs.rmSync(tmpDir, { recursive: true, force: true }) (instead of
run(`rm -rf ...`)) to deterministically remove the nemoclaw-jetson-* directory;
adjust ordering so tmpDir is removed after the build attempt and keep existing
ignoreError semantics removed in favor of the synchronous rmSync call.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2fc76141-d389-4c2e-88ba-76308543f6a0
📒 Files selected for processing (3)
- bin/lib/inference-config.js
- bin/lib/onboard.js
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml
- bin/lib/inference-config.js
♻️ Duplicate comments (2)
bin/lib/onboard.js (2)
685-700: ⚠️ Potential issue | 🟠 Major: Jetson default model selection still conflicts with sandbox provider serialization.

Good call passing gpu here, but this now more reliably picks a Jetson-specific Ollama default, which still collides with the model-name heuristic in buildSandboxConfigSyncScript() (local Ollama can be written as nvidia-nim).

Suggested fix at the root cause (provider derivation)
```diff
 function buildSandboxConfigSyncScript(selectionConfig) {
-  const providerType =
-    selectionConfig.profile === "inference-local"
-      ? selectionConfig.model === DEFAULT_OLLAMA_MODEL
-        ? "ollama-local"
-        : "nvidia-nim"
-      : selectionConfig.endpointType === "vllm"
-        ? "vllm-local"
-        : "nvidia-nim";
+  const localKind = String(
+    selectionConfig.provider || selectionConfig.endpointType || ""
+  ).toLowerCase();
+  const providerType =
+    selectionConfig.profile === "inference-local"
+      ? localKind.includes("ollama")
+        ? "ollama-local"
+        : localKind.includes("vllm")
+          ? "vllm-local"
+          : "nvidia-nim"
+      : "nvidia-nim";
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 685 - 700, The chosen Jetson-specific default model (via getDefaultOllamaModel / promptOllamaModel) can still conflict with the provider-detection heuristic in buildSandboxConfigSyncScript; fix this by normalizing model names or deriving provider from the explicit provider variable instead of the model string: update buildSandboxConfigSyncScript to use the provider variable (provider === "ollama-local") or add a normalization helper (e.g., normalizeOllamaModelForSerialization) that maps Jetson-specific names like "nvidia-nim" to the canonical form used by sandbox serialization, and ensure getDefaultOllamaModel and promptOllamaModel return/consult that normalized value so provider serialization remains consistent with the selected model.
279-285: ⚠️ Potential issue | 🟠 Major: Defer port 18789 process cleanup until recreate/create is confirmed.

Line 284 still unconditionally kills the dashboard-forward process before createSandbox() decides whether to recreate. If the user keeps an existing sandbox, onboarding can exit with the dashboard no longer forwarded.

Proposed control-flow fix
```diff
-run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
 sleep(2);

 async function createSandbox(gpu) {
   step(3, 7, "Creating sandbox");
@@
   if (existing) {
@@
     if (recreate.toLowerCase() !== "y") {
       console.log("  Keeping existing sandbox.");
       return sandboxName;
     }
@@
   }
+
+  // Only clean stale dashboard forwards/processes when we are actually creating/recreating.
+  run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 279 - 285, The unconditional kill of the dashboard-forward process (the run invocation that executes "kill $(lsof -ti :18789 -c openclaw) ...") should be deferred until after createSandbox() determines a recreate/create is required; change control flow so createSandbox() (or the caller that decides recreate) runs first and only when it indicates a new sandbox will be created/recreated do we execute the run("kill $(lsof -ti :18789 -c openclaw) 2>/dev/null || true", { ignoreError: true }) cleanup (and keep sleep(2) paired with that cleanup if still needed).
🧹 Nitpick comments (1)
bin/lib/onboard.js (1)
812-824: Remove unreachable duplicate config-write block.
buildSandboxConfigSyncScript() already writes ~/.nemoclaw/config.json and exits; the appended nemoClawConfigScript after ${script} never executes.

Cleanup diff
```diff
-// Also write ~/.nemoclaw/config.json inside the sandbox so the NemoClaw
-// plugin displays the correct endpoint/model in its banner instead of
-// falling back to the cloud defaults.
-const nemoClawConfigScript = `
-mkdir -p ~/.nemoclaw
-cat > ~/.nemoclaw/config.json <<'EOF_NEMOCLAW_CFG'
-${JSON.stringify(sandboxConfig, null, 2)}
-EOF_NEMOCLAW_CFG
-`;
 run(`cat <<'EOF_NEMOCLAW_SYNC' | openshell sandbox connect "${sandboxName}"
 ${script}
-${nemoClawConfigScript}
 exit
 EOF_NEMOCLAW_SYNC`, { stdio: ["ignore", "ignore", "inherit"] });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 812 - 824, The appended nemoClawConfigScript block is redundant because buildSandboxConfigSyncScript() already writes ~/.nemoclaw/config.json and exits, so remove the duplicate: delete the nemoClawConfigScript constant and stop injecting ${nemoClawConfigScript} into the run(...) payload (the run call that uses openshell sandbox connect "${sandboxName}"), leaving only the previously built ${script} so that the remote script exits as intended; reference buildSandboxConfigSyncScript(), nemoClawConfigScript, and the run(...) invocation to locate the code to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@bin/lib/onboard.js`:
- Around line 685-700: The chosen Jetson-specific default model (via
getDefaultOllamaModel / promptOllamaModel) can still conflict with the
provider-detection heuristic in buildSandboxConfigSyncScript; fix this by
normalizing model names or deriving provider from the explicit provider variable
instead of the model string: update buildSandboxConfigSyncScript to use the
provider variable (provider === "ollama-local") or add a normalization helper
(e.g., normalizeOllamaModelForSerialization) that maps Jetson-specific names
like "nvidia-nim" to the canonical form used by sandbox serialization, and
ensure getDefaultOllamaModel and promptOllamaModel return/consult that
normalized value so provider serialization remains consistent with the selected
model.
- Around line 279-285: The unconditional kill of the dashboard-forward process
(the run invocation that executes "kill $(lsof -ti :18789 -c openclaw) ...")
should be deferred until after createSandbox() determines a recreate/create is
required; change control flow so createSandbox() (or the caller that decides
recreate) runs first and only when it indicates a new sandbox will be
created/recreated do we execute the run("kill $(lsof -ti :18789 -c openclaw)
2>/dev/null || true", { ignoreError: true }) cleanup (and keep sleep(2) paired
with that cleanup if still needed).
---
Nitpick comments:
In `@bin/lib/onboard.js`:
- Around line 812-824: The appended nemoClawConfigScript block is redundant
because buildSandboxConfigSyncScript() already writes ~/.nemoclaw/config.json
and exits, so remove the duplicate: delete the nemoClawConfigScript constant and
stop injecting ${nemoClawConfigScript} into the run(...) payload (the run call
that uses openshell sandbox connect "${sandboxName}"), leaving only the
previously built ${script} so that the remote script exits as intended;
reference buildSandboxConfigSyncScript(), nemoClawConfigScript, and the run(...)
invocation to locate the code to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f6b98de2-873f-4e85-a20b-901f36fef869
📒 Files selected for processing (1)
bin/lib/onboard.js
Hey @realkim93 — we independently hit the same issue (#539) and arrived at the same iptables-legacy image-patching approach, so we're confident the core fix here is sound. We did a full review of the diff and everything looks clean.

One thing we noticed (also flagged by CodeRabbit): the port-18789 cleanup in …

Once that's addressed we're happy to approve from our side. Nice work getting this tested on real hardware.
Thanks @ericksoa and @wscurran for the thoughtful review. Really appreciate you taking the time to go through the diff carefully.

@ericksoa Thank you for raising the port-18789 timing issue. I've pushed a fix that defers the dashboard-forward cleanup into …

Other cleanup in this commit: …

Let me know if there's anything else that needs attention!
FYI, none of these work currently with nemoclaw 0.0.1.0 and openshell 0.0.0.16. I've been banging my head with my Jetson Orin Nano Super, same system as realkim93.
@ACCGAGTT Thanks for reporting this — just to clarify, this PR has not been merged yet, so the fixes here are not included in the current NemoClaw 0.0.1.0 / OpenShell 0.0.0.16 release. That's why you're seeing wrong images being pulled on your Jetson Orin Nano Super.

I'm working on rebasing this branch against the latest … I'll update this thread once the rebase is done — happy to have another Jetson Orin Nano Super user to help verify once it lands!
Hi @ericksoa @kjw3 @cv — gentle ping on this PR. I've addressed all the review feedback from March 21st:
The branch currently has merge conflicts against …

There's also a Jetson user (@ACCGAGTT) waiting on this — would appreciate a re-review once the rebase is up. Thank you!
Thank you, appreciate the update.
Force-pushed 77f2d6f to c4d41dd.
Thanks for pushing this — the Jetson-specific pieces look valuable, especially the unified-memory GPU detection, the smaller Ollama default, and the dedicated …

That said, I’m not comfortable approving this as-is, because I think it introduces a potentially serious onboarding regression in …

Main blocker: …
Thank you @cv for taking the time to review this so carefully. Your feedback is very helpful — you are right on all three points, and I appreciate you explaining the reasoning behind the existing design. I clearly did not study the …

I have pushed a fix that addresses the issues you raised:

1. Removed unconditional gateway destroy in …
Force-pushed 79af0ff to 33f0ead.
Rebase against latest …
Add GPU detection, iptables-legacy fix, and nemotron-3-nano:4b default for Jetson Orin Nano Super (8GB, JetPack 6.x). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…htening

- Guard runCapture().trim() against null in patchGatewayImageForJetson
- Apply same inference.local bypass to vllm-local (same DNS bug affects both local providers, not just Ollama)
- Use getLocalProviderBaseUrl() as single source of truth for direct URLs
- Add TODO to remove direct URLs when OpenShell fixes inference.local
- Remove overly broad /usr/local/bin/node from ollama_local network policy (openclaw binary alone is sufficient)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… and cleanup safety

- Defer port-18789 kill to createSandbox() after recreate decision so no-op reruns don't break a healthy dashboard forward
- Derive provider type from selectionConfig.provider metadata instead of comparing model names to DEFAULT_OLLAMA_MODEL (fixes Jetson misclassification)
- Wrap patchGatewayImageForJetson tmpDir in try/finally with fs.rmSync
- Remove unreachable duplicate nemoClawConfigScript in setupOpenclaw
- Extend Docker restart timeout to 30s for slower Jetson devices

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous commit deferred the port-18789 kill to createSandbox(), but left the port availability check in preflight. This caused a hard exit when re-running onboard with an existing dashboard forward still active. Port 18789 is now fully managed inside createSandbox(). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- inference-config.test: use getLocalProviderBaseUrl() for ollama-local endpoint URL (host-gateway bypass for OpenShell 0.0.10 DNS issue)
- local-inference.test: convert assert → expect (vitest) for jetson tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- inference-config.test.js: use INFERENCE_ROUTE_URL for ollama-local (PR NVIDIA#1037 fixed inference.local routing, host-gateway bypass removed)
- local-inference.test.js: getOllamaModelOptions no longer takes gpu param; Jetson fallback moved to getBootstrapOllamaModelOptions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extract detectJetson() and getUnifiedMemoryMB() helper functions to bring detectGpu() cyclomatic complexity under the lint threshold (20). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Address review feedback from cv and CodeRabbit:

1. Remove unconditional gateway destroy in preflight() — the existing getGatewayReuseState() logic already handles stale/unnamed cleanup while preserving healthy gateways. The unconditional destroy broke safe re-run behavior and could tear down a running session.
2. Restore port 18789 (dashboard) to requiredPorts — the existing healthy-gateway skip logic already handles the re-run case correctly. Removing it entirely masked conflicts from unrelated processes.
3. Add ollama-local and vllm-local cases to getSandboxInferenceConfig() so that Jetson's default model (nemotron-3-nano:4b) gets the correct direct endpoint URL instead of falling through to the nvidia-nim default path.
4. Add tests for ollama-local and vllm-local sandbox inference config to prevent future regressions in provider mapping.
Force-pushed 33f0ead to 9d933d6.
* feat: add Kubernetes testing infrastructure
  Add k8s-testing/ directory with scripts and manifests for testing NemoClaw on Kubernetes with Dynamo vLLM inference. Includes:
  - test-installer.sh: Public installer test (requires unattended install support)
  - setup.sh: Manual setup from source for development
  - Pod manifests for Docker-in-Docker execution
  Architecture: OpenShell runs k3s inside Docker, so we use DinD pods to provide the Docker daemon on Kubernetes.
  Signed-off-by: rwipfelnv
* fix: add socat proxy for K8s DNS isolation workaround
  OpenShell's nested k3s cluster cannot resolve Kubernetes DNS names, so inference requests fail with 502 Bad Gateway. This adds:
  - socat TCP proxy setup in setup.sh to forward localhost:8000 to the K8s vLLM service endpoint
  - Provider configuration using host.openshell.internal:8000 which resolves to the workspace container from inside k3s
  - Documentation explaining the network architecture and workaround
  - Updated env var names to match PR NVIDIA#318 (NEMOCLAW_NON_INTERACTIVE)
  - cgroup v2 compatibility fix for Docker daemon
  - Removed memory limits that caused OOM
  Tested: Inference requests from sandboxes now route correctly through the socat proxy to the Dynamo vLLM endpoint.
  Depends on: NVIDIA#318 (non-interactive mode), NVIDIA#365 (Dynamo provider)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* feat: NemoKlaw - NemoClaw on Kubernetes with Dynamo support
  Complete K8s deployment solution for NemoClaw:
  - nemoklaw.yaml: Pod manifest with DinD, init containers, hostPath storage
  - install.sh: Interactive installer with preflight checks
  - Rename k8s-testing -> k8s, move old files to dev/
  Key learnings:
  - hostPath storage (/mnt/k8s-disks) avoids ephemeral storage eviction
  - Init containers for docker config, openshell CLI, NemoClaw build
  - Workspace container installs apt packages at runtime (can't share via volumes)
  - socat proxy bridges K8s DNS to nested k3s (host.openshell.internal)
  Tested successfully with Dynamo vLLM backend on EKS.
  Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
* fix: rename NemoKlaw to NemoClaw and document known limitations
  Address PR feedback:
  - Rename NemoKlaw -> NemoClaw (avoid confusing naming)
  - Rename nemoklaw.yaml -> nemoclaw-k8s.yaml
  - Fix hardcoded endpoint to use generic example
  - Remove log file from repo
  - Document known limitations (HTTPS proxy issue)
  - Update README with accurate status of what works/doesn't work
  Signed-off-by: rwipfelnv
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: update DYNAMO_HOST to vllm-agg-frontend
  The aggregated frontend service is the correct endpoint for Dynamo vLLM inference.
  Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
* docs: add Using NemoClaw section with CLI commands
  - Add workspace shell access command
  - Add sandbox status/logs/list commands
  - Add chat completion test example
  - Rename section from "What Can You Do?" to "Using NemoClaw"
  Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
* refactor(k8s): simplify deployment to use official installer
  - Use official NemoClaw installer (`curl | bash`) instead of git clone/build
  - Switch to `custom` provider from PR NVIDIA#648 (supersedes dynamo-specific provider)
  - Remove k8s/dev/ directory (no longer needed for testing)
  - Use emptyDir volumes for portability across clusters
  - Add /etc/hosts workaround for endpoint validation during onboarding
  - Update README with verification steps for local inference
  Tested end-to-end with Dynamo vLLM backend.
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix(k8s): resolve lint errors in yaml and markdown
  - Remove multi-document YAML (move namespace creation to README)
  - Add language specifier to fenced code block (```text)
  - Add blank lines before lists per markdownlint rules
  Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
  Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* docs(k8s): add experimental warning and clarify requirements
  - Add explicit experimental warning at top of README
  - Clarify this is for trying NemoClaw on k8s, not production
  - Document privileged pod and DinD requirements upfront
  - Add resource requirements to prerequisites
  Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
  Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
---------
Signed-off-by: rwipfelnv
Signed-off-by: Robert Wipfel <rwipfel@nvidia.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Carlos Villela <cvillela@nvidia.com>
Co-authored-by: KJ <kejones@nvidia.com>
…version check

- Add ollama-local/vllm-local cases to getSandboxInferenceConfig() so local providers route through host.openshell.internal instead of falling through to the default inference.local path.
- Add onboarding lifecycle tests covering healthy gateway preservation on rerun, stale vs foreign gateway cleanup, and local provider routing.
- Add Node.js >= 22.16.0 version check to setup-jetson.sh since JetPack does not ship Node.js by default.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Latest changes (1f59809)

1. Onboarding lifecycle tests
2. Local provider sandbox config fix
3. Node.js version check in `setup-jetson.sh`

Verified on actual Jetson hardware

All changes tested on Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra). The 2 install-preflight failures in the full suite are pre-existing and unrelated to this PR (npm path detection on the Jetson's Node.js installation).
Closes #404
Also fixes #539, relates to #65, #300, #511, #1076
Summary
Adds end-to-end Jetson Orin Nano support: from GPU detection through to a working OpenClaw agent with local Ollama inference.
- GPU detection: `nvidia-smi` reports `[N/A]` for memory.total on Jetson (unified memory). Added fallback via GPU name + `/proc/device-tree/model`, reading system RAM with `free(1)`.
- Gateway: missing `nft_chain_filter` modules → k3s panic. Auto-rebuilds the gateway image with iptables-legacy (idempotent via Docker label).
- Model selection: defaults to `nemotron-3-nano:4b` for Jetson (the 30B default exceeds 8GB).
- Removed `inference.local` in CoreDNS ([Bug] inference.local returns HTTP 403 inside sandbox when using Ollama local inference on DGX Spark #314, Local Ollama inference routing fails from sandbox #385, Sandbox cannot reach local Ollama instance on host due to enforced internal proxy (403 Forbidden) #417). Local providers (Ollama, vLLM) now route directly via `host.openshell.internal` instead.
- Added `ollama_local` endpoint to the sandbox baseline policy.
- New `setup-jetson` command: host-level prep (NVIDIA runtime, kernel modules, Node.js version check).
- Added `ollama-local` and `vllm-local` cases to `getSandboxInferenceConfig()` so local providers get direct endpoint URLs instead of falling through to the default `inference.local` path.

Verified working
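The detection fallback and model default described above can be sketched in shell. All helper names here are hypothetical; the real logic lives in the TypeScript CLI (`nim.detectGpu()`, `getDefaultOllamaModel()`), and the 30B default model name is only a placeholder.

```shell
# Total memory in MB. On Jetson, nvidia-smi prints "[N/A]" for memory.total
# because the GPU shares system RAM, so fall back to free(1).
total_mem_mb() {
  local smi_total="$1"              # e.g. "7619 MiB" or "[N/A]"
  if [ "$smi_total" = "[N/A]" ]; then
    free -m | awk '/^Mem:/ {print $2}'
  else
    echo "${smi_total% MiB}"
  fi
}

# Jetson detection via the GPU name or /proc/device-tree/model contents.
is_jetson() {
  case "$1" in *Jetson*) echo true ;; *) echo false ;; esac
}

# Jetson-class boards (8 GB unified memory) cannot fit the 30B default,
# so prefer the smaller nemotron-3-nano:4b.
default_ollama_model() {
  if [ "$1" = "true" ]; then
    echo "nemotron-3-nano:4b"
  else
    echo "default-30b-model"        # placeholder: real default name not shown here
  fi
}
```

A design note: keying the fallback on the literal `[N/A]` string (rather than on parse failure) mirrors what `nvidia-smi` actually emits on unified-memory boards.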
Inference path: OpenClaw (sandbox) → host.openshell.internal:11434 → Ollama → nemotron-3-nano:4b
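The provider routing can be sketched as a hypothetical shell analog of `getSandboxInferenceConfig()`: local providers get a direct `host.openshell.internal` URL instead of the `inference.local` proxy path. The Ollama port matches the verified path above; the vLLM port (8000) is an assumption, not taken from this PR.

```shell
# Map an inference provider to the base URL the sandbox should use.
# Local providers bypass the inference.local proxy and reach the host
# directly via host.openshell.internal.
sandbox_inference_endpoint() {
  case "$1" in
    ollama-local) echo "http://host.openshell.internal:11434" ;;
    vllm-local)   echo "http://host.openshell.internal:8000" ;;   # assumed port
    *)            echo "http://inference.local" ;;                # default proxy path
  esac
}
```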
Test plan
"NVIDIA Jetson Orin Nano ... Super", 7619 MBopenclaw agentresponds via nemotron-3-nano:4bollama-localandvllm-localroute throughhost.openshell.internalsetup-jetson.sh(>=22.16.0)Tested on: Jetson Orin Nano Super (8GB, JetPack 6.x, kernel 5.15.148-tegra)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>