8 changes: 8 additions & 0 deletions .env.example
@@ -0,0 +1,8 @@
# NemoClaw port configuration — copy to .env and edit as needed.
# Ports must be integers in range 1024–65535.
# Run scripts/check-ports.sh to find port conflicts.

NEMOCLAW_DASHBOARD_PORT=18789
NEMOCLAW_GATEWAY_PORT=8080
NEMOCLAW_VLLM_PORT=8000
NEMOCLAW_OLLAMA_PORT=11434
5 changes: 5 additions & 0 deletions .gitignore
@@ -8,3 +8,8 @@ docs/_build/
coverage/
vdr-notes/
draft_newsletter_*
tmp/
.env
.env.local
.venv/
uv.lock
30 changes: 30 additions & 0 deletions README.md
@@ -160,6 +160,36 @@ curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uni

---

## Port Configuration

NemoClaw uses four network ports. All are configurable via environment variables or a `.env` file at the project root (copy `.env.example` to get started).

| Port | Default | Env var | Purpose | Conflict risk |
|------|---------|---------|---------|---------------|
| Dashboard | 18789 | `NEMOCLAW_DASHBOARD_PORT` | OpenClaw web UI, forwarded from sandbox to host | Low |
| Gateway | 8080 | `NEMOCLAW_GATEWAY_PORT` | OpenShell gateway signal channel | **High** — Jenkins, Tomcat, K8s dashboard |
| vLLM/NIM | 8000 | `NEMOCLAW_VLLM_PORT` | Local vLLM or NIM inference endpoint | **High** — Django, PHP dev server |
| Ollama | 11434 | `NEMOCLAW_OLLAMA_PORT` | Local Ollama inference endpoint | Low |

To use non-default ports, set the environment variables before running `nemoclaw onboard`:

```bash
export NEMOCLAW_GATEWAY_PORT=9080
export NEMOCLAW_VLLM_PORT=9000
nemoclaw onboard
```

Or create a `.env` file at the project root (see `.env.example`). For personal overrides that should never be committed, use `.env.local`; its values take precedence over those in `.env`.
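For example, a personal `.env.local` that moves the two conflict-prone ports (values here are illustrative, not recommendations):

```bash
# .env.local: personal overrides (gitignored, never committed)
NEMOCLAW_GATEWAY_PORT=9080
NEMOCLAW_VLLM_PORT=9000
```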

> **ℹ️ Note**
>
> Changing the dashboard port requires rebuilding the sandbox image because the CORS origin is baked in at build time. Re-run `nemoclaw onboard` after changing `NEMOCLAW_DASHBOARD_PORT`.

> **⚠️ Network exposure**
>
> When using local inference (Ollama or vLLM), the inference service binds to `0.0.0.0` so that containers can reach it via `host.openshell.internal`. This means the service is reachable from your local network, not just localhost. This is required for the sandbox architecture but should be considered in shared or untrusted network environments.

---

Comment on lines +188 to +192
⚠️ Potential issue | 🟡 Minor

Fix markdown lint warning: missing blank line after blockquote.

The static analysis tool flagged a blank line issue inside/after the blockquote. Add a blank line between the warning block (line 190) and the horizontal rule (line 191) to fix the MD028 violation.

📝 Proposed fix
 > When using local inference (Ollama or vLLM), the inference service binds to `0.0.0.0` so that containers can reach it via `host.openshell.internal`. This means the service is reachable from your local network, not just localhost. This is required for the sandbox architecture but should be considered in shared or untrusted network environments.
+
 ---

## How It Works

NemoClaw installs the NVIDIA OpenShell runtime and Nemotron models, then uses a versioned blueprint to create a sandboxed environment where every network request, file access, and inference call is governed by declarative policy. The `nemoclaw` CLI orchestrates the full stack: OpenShell gateway, sandbox, inference provider, and network policy.
95 changes: 95 additions & 0 deletions bin/lib/env.js
@@ -0,0 +1,95 @@
// SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//
// Lightweight .env loader — reads .env files from the project root and populates
// process.env. Existing environment variables are never overwritten, so shell
// exports always take precedence over file values.
//
// Supports:
// - Multiple files (loaded in order; first file's values win over later files)
// - Comments (#) and blank lines
// - KEY=VALUE, KEY="VALUE", KEY='VALUE'
// - Inline comments after unquoted values

const fs = require("fs");
const path = require("path");

const ROOT = path.resolve(__dirname, "..", "..");
const CWD = process.cwd();

// Walk up from a directory looking for a .git marker to find the repo root.
function findGitRoot(start) {
let dir = start;
while (true) {
try {
fs.statSync(path.join(dir, ".git"));
return dir;
} catch {
const parent = path.dirname(dir);
if (parent === dir) return null;
dir = parent;
}
}
}

const GIT_ROOT = findGitRoot(CWD);

function parseEnvFile(filePath) {
let content;
try {
content = fs.readFileSync(filePath, "utf-8");
} catch {
return; // file doesn't exist or isn't readable — skip silently
}

for (const raw of content.split("\n")) {
const line = raw.trim();
if (!line || line.startsWith("#")) continue;

const eqIndex = line.indexOf("=");
if (eqIndex === -1) continue;

const key = line.slice(0, eqIndex).trim();
if (!key) continue;

let value = line.slice(eqIndex + 1).trim();

// Remove inline comments for unquoted values first, then strip quotes.
// This handles cases like KEY='value' # comment correctly.
const hashIndex = value.indexOf(" #");
if (hashIndex !== -1) {
value = value.slice(0, hashIndex).trim();
}

// Strip surrounding quotes
if (
(value.startsWith('"') && value.endsWith('"')) ||
(value.startsWith("'") && value.endsWith("'"))
) {
value = value.slice(1, -1);
}
Comment on lines +57 to +70
⚠️ Potential issue | 🟡 Minor

Inline comment stripping may truncate quoted values containing #.

The current logic strips inline comments before stripping quotes. For KEY="value # inside", this results in "value (malformed). Most .env parsers preserve content within quotes.

This edge case may be acceptable if your .env files never contain # within quoted values, but it's worth documenting or handling.
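For illustration, a standalone reproduction of the described parsing order (this re-implements only the relevant lines; it is not the shipped module):

```javascript
// Hypothetical standalone copy of the value-parsing logic in env.js,
// reproduced solely to demonstrate the quoted-`#` edge case.
function parseValue(value) {
  value = value.trim();

  // Inline comment is stripped first...
  const hashIndex = value.indexOf(" #");
  if (hashIndex !== -1) {
    value = value.slice(0, hashIndex).trim();
  }

  // ...then surrounding quotes are removed.
  if (
    (value.startsWith('"') && value.endsWith('"')) ||
    (value.startsWith("'") && value.endsWith("'"))
  ) {
    value = value.slice(1, -1);
  }
  return value;
}

console.log(parseValue('"value # inside"')); // '"value' (truncated, quote left dangling)
console.log(parseValue("8080 # gateway"));   // "8080" (unquoted case is handled correctly)
```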

🔧 Proposed fix: Strip comments only for unquoted values
     let value = line.slice(eqIndex + 1).trim();
 
-    // Remove inline comments for unquoted values first, then strip quotes.
-    // This handles cases like KEY='value' # comment correctly.
-    const hashIndex = value.indexOf(" #");
-    if (hashIndex !== -1) {
-      value = value.slice(0, hashIndex).trim();
-    }
-
     // Strip surrounding quotes
     if (
       (value.startsWith('"') && value.endsWith('"')) ||
       (value.startsWith("'") && value.endsWith("'"))
     ) {
       value = value.slice(1, -1);
+    } else {
+      // Remove inline comments only for unquoted values
+      const hashIndex = value.indexOf(" #");
+      if (hashIndex !== -1) {
+        value = value.slice(0, hashIndex).trim();
+      }
     }


// Never overwrite existing env vars
if (process.env[key] === undefined) {
process.env[key] = value;
}
}
}

// Collect unique directories to search for .env files. The git repo root and
// CWD are checked in addition to the __dirname-relative ROOT so that a user's
// .env.local (which is gitignored and therefore not synced into the sandbox
// source directory) is still picked up on a fresh install.
const SEARCH_DIRS = [...new Set([ROOT, GIT_ROOT, CWD].filter(Boolean))];

// Load .env files in priority order — first file wins for any given key
// because we never overwrite once set.
const ENV_FILES = [".env.local", ".env"];

for (const file of ENV_FILES) {
for (const dir of SEARCH_DIRS) {
parseEnvFile(path.join(dir, file));
}
}

module.exports = { parseEnvFile, findGitRoot };
25 changes: 13 additions & 12 deletions bin/lib/local-inference.js
@@ -1,6 +1,7 @@
// SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

const { VLLM_PORT, OLLAMA_PORT } = require("./ports");
const { shellQuote } = require("./runner");

const HOST_GATEWAY_URL = "http://host.openshell.internal";
@@ -10,9 +11,9 @@ const DEFAULT_OLLAMA_MODEL = "nemotron-3-nano:30b";
function getLocalProviderBaseUrl(provider) {
switch (provider) {
case "vllm-local":
return `${HOST_GATEWAY_URL}:8000/v1`;
return `${HOST_GATEWAY_URL}:${VLLM_PORT}/v1`;
case "ollama-local":
return `${HOST_GATEWAY_URL}:11434/v1`;
return `${HOST_GATEWAY_URL}:${OLLAMA_PORT}/v1`;
default:
return null;
}
@@ -21,9 +22,9 @@ function getLocalProviderBaseUrl(provider) {
function getLocalProviderHealthCheck(provider) {
switch (provider) {
case "vllm-local":
return "curl -sf http://localhost:8000/v1/models 2>/dev/null";
return `curl -sf http://localhost:${VLLM_PORT}/v1/models 2>/dev/null`;
case "ollama-local":
return "curl -sf http://localhost:11434/api/tags 2>/dev/null";
return `curl -sf http://localhost:${OLLAMA_PORT}/api/tags 2>/dev/null`;
default:
return null;
}
@@ -32,9 +33,9 @@ function getLocalProviderHealthCheck(provider) {
function getLocalProviderContainerReachabilityCheck(provider) {
switch (provider) {
case "vllm-local":
return `docker run --rm --add-host host.openshell.internal:host-gateway ${CONTAINER_REACHABILITY_IMAGE} -sf http://host.openshell.internal:8000/v1/models 2>/dev/null`;
return `docker run --rm --add-host host.openshell.internal:host-gateway ${CONTAINER_REACHABILITY_IMAGE} -sf http://host.openshell.internal:${VLLM_PORT}/v1/models 2>/dev/null`;
case "ollama-local":
return `docker run --rm --add-host host.openshell.internal:host-gateway ${CONTAINER_REACHABILITY_IMAGE} -sf http://host.openshell.internal:11434/api/tags 2>/dev/null`;
return `docker run --rm --add-host host.openshell.internal:host-gateway ${CONTAINER_REACHABILITY_IMAGE} -sf http://host.openshell.internal:${OLLAMA_PORT}/api/tags 2>/dev/null`;
default:
return null;
}
@@ -52,12 +53,12 @@ function validateLocalProvider(provider, runCapture) {
case "vllm-local":
return {
ok: false,
message: "Local vLLM was selected, but nothing is responding on http://localhost:8000.",
message: `Local vLLM was selected, but nothing is responding on http://localhost:${VLLM_PORT}.`,
};
case "ollama-local":
return {
ok: false,
message: "Local Ollama was selected, but nothing is responding on http://localhost:11434.",
message: `Local Ollama was selected, but nothing is responding on http://localhost:${OLLAMA_PORT}.`,
};
default:
return { ok: false, message: "The selected local inference provider is unavailable." };
@@ -79,13 +80,13 @@ function validateLocalProvider(provider, runCapture) {
return {
ok: false,
message:
"Local vLLM is responding on localhost, but containers cannot reach http://host.openshell.internal:8000. Ensure the server is reachable from containers, not only from the host shell.",
`Local vLLM is responding on localhost, but containers cannot reach http://host.openshell.internal:${VLLM_PORT}. Ensure the server is reachable from containers, not only from the host shell.`,
};
case "ollama-local":
return {
ok: false,
message:
"Local Ollama is responding on localhost, but containers cannot reach http://host.openshell.internal:11434. Ensure Ollama listens on 0.0.0.0:11434 instead of 127.0.0.1 so sandboxes can reach it.",
`Local Ollama is responding on localhost, but containers cannot reach http://host.openshell.internal:${OLLAMA_PORT}. Ensure Ollama listens on 0.0.0.0:${OLLAMA_PORT} instead of 127.0.0.1 so sandboxes can reach it.`,
};
default:
return { ok: false, message: "The selected local inference provider is unavailable from containers." };
@@ -123,7 +124,7 @@ function getOllamaWarmupCommand(model, keepAlive = "15m") {
stream: false,
keep_alive: keepAlive,
});
return `nohup curl -s http://localhost:11434/api/generate -H 'Content-Type: application/json' -d ${shellQuote(payload)} >/dev/null 2>&1 &`;
return `nohup curl -s http://localhost:${OLLAMA_PORT}/api/generate -H 'Content-Type: application/json' -d ${shellQuote(payload)} >/dev/null 2>&1 &`;
}

function getOllamaProbeCommand(model, timeoutSeconds = 120, keepAlive = "15m") {
@@ -133,7 +134,7 @@ function getOllamaProbeCommand(model, timeoutSeconds = 120, keepAlive = "15m") {
stream: false,
keep_alive: keepAlive,
});
return `curl -sS --max-time ${timeoutSeconds} http://localhost:11434/api/generate -H 'Content-Type: application/json' -d ${shellQuote(payload)} 2>/dev/null`;
return `curl -sS --max-time ${timeoutSeconds} http://localhost:${OLLAMA_PORT}/api/generate -H 'Content-Type: application/json' -d ${shellQuote(payload)} 2>/dev/null`;
}

function validateOllamaModel(model, runCapture) {
10 changes: 6 additions & 4 deletions bin/lib/nim.js
@@ -4,6 +4,7 @@
// NIM container management — pull, start, stop, health-check NIM images.

const { run, runCapture, shellQuote } = require("./runner");
const { VLLM_PORT } = require("./ports");
const nimImages = require("./nim-images.json");

function containerName(sandboxName) {
@@ -125,7 +126,7 @@ function pullNimImage(model) {
return image;
}

function startNimContainer(sandboxName, model, port = 8000) {
function startNimContainer(sandboxName, model, port = VLLM_PORT) {
const name = containerName(sandboxName);
const image = getImageForModel(model);
if (!image) {
@@ -139,12 +140,13 @@

console.log(` Starting NIM container: ${name}`);
run(
// Right-hand :8000 is the NIM image's internal port — fixed by the image, not configurable.
`docker run -d --gpus all -p ${Number(port)}:8000 --name ${qn} --shm-size 16g ${shellQuote(image)}`
);
return name;
}

function waitForNimHealth(port = 8000, timeout = 300) {
function waitForNimHealth(port = VLLM_PORT, timeout = 300) {
const start = Date.now();
const interval = 5000;
const safePort = Number(port);
@@ -175,7 +177,7 @@ function stopNimContainer(sandboxName) {
run(`docker rm ${qn} 2>/dev/null || true`, { ignoreError: true });
}

function nimStatus(sandboxName) {
function nimStatus(sandboxName, port = VLLM_PORT) {
const name = containerName(sandboxName);
try {
const state = runCapture(
@@ -186,7 +188,7 @@

let healthy = false;
if (state === "running") {
const health = runCapture(`curl -sf http://localhost:8000/v1/models 2>/dev/null`, {
const health = runCapture(`curl -sf http://localhost:${port}/v1/models 2>/dev/null`, {
ignoreError: true,
});
healthy = !!health;