feat(policies): add Ollama local inference preset #1047
base: main
`nemoclaw-blueprint/policies/presets/ollama.yaml` (new file, `@@ -0,0 +1,62 @@`):

```yaml
# nemoclaw-blueprint/policies/presets/ollama.yaml
#
# Policy preset — Local Ollama inference endpoint
#
# Allows the sandboxed agent to reach a locally running Ollama instance
# for self-hosted LLM inference, as an alternative to the NVIDIA Endpoint API.
#
# Usage:
#   openshell policy set ollama.yaml
#
# Prerequisites:
#   Ollama must be running on the host machine:
#     ollama serve
#
# Supported models (examples):
#   ollama pull llama3.2:3b
#   ollama pull mistral
#   ollama pull gemma3
#
# Notes:
#   - Ollama runs on host port 11434 by default
#   - The sandbox reaches the host via host-gateway
#   - Change the port if you run Ollama on a non-default port
#   - This preset enables local/offline inference with no API key required
#
# Security:
#   - Scoped to openclaw binary only (binaries field)
#   - Only GET and POST methods allowed (list models + generate)
#   - No other host access granted by this preset

version: "1"
name: ollama
description: >
  Allows the sandboxed OpenClaw agent to reach a locally running Ollama
  instance for self-hosted LLM inference. Use this as an alternative to
  the NVIDIA Endpoint API when running models locally.

egress:
  # Ollama REST API — local inference endpoint
  - host: host-gateway  # resolves to the Docker host from inside sandbox
    ports: [11434]
    methods: [GET, POST]
    binaries: [openclaw]
    description: >
      Ollama local inference API. GET /api/tags lists available models.
      POST /api/generate and POST /api/chat run inference.

  # Optional: Ollama model registry (only needed if agent pulls models)
  # Uncomment if you want the agent to pull models autonomously.
  # Recommendation: pull models manually before sandboxing the agent.
  #
  # - host: registry.ollama.ai
  #   ports: [443]
  #   methods: [GET]
  #   binaries: [openclaw]
  #   description: Ollama model registry for pulling models inside sandbox.
  #
  # - host: ollama.com
  #   ports: [443]
  #   methods: [GET]
  #   binaries: [openclaw]
  #   description: Ollama website and model library.
```
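The preset's structure can be sanity-checked with a short script. This is a hypothetical sketch, not the project's actual validator: the field names (`version`, `name`, `egress`, `host`, `ports`, `methods`, `binaries`) are taken from the diff above, and the `validate` helper is illustrative.

```python
# Hypothetical structural check for a parsed preset document.
# Field names come from the diff above, not from an official schema.

PRESET = {
    "version": "1",
    "name": "ollama",
    "egress": [
        {
            "host": "host-gateway",
            "ports": [11434],
            "methods": ["GET", "POST"],
            "binaries": ["openclaw"],
        }
    ],
}

def validate(doc: dict) -> list[str]:
    """Return a list of structural problems found in a parsed preset."""
    problems = [f"missing top-level key: {k}"
                for k in ("version", "name", "egress") if k not in doc]
    for i, rule in enumerate(doc.get("egress", [])):
        problems += [f"egress[{i}] missing {f}"
                     for f in ("host", "ports", "methods", "binaries")
                     if f not in rule]
    return problems

print(validate(PRESET))  # prints []
```

Running `validate` on a document that omits a rule field or a top-level key returns a human-readable list of problems instead of raising, which makes it easy to report all issues at once.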
Contributor comment on lines +38 to +62:
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Description: Check the structure of other preset files to confirm the expected schema
echo "=== Checking for network_policies in existing presets ==="
rg -n "^network_policies:" nemoclaw-blueprint/policies/presets/ --type yaml
echo -e "\n=== Sample structure from discord preset ==="
head -n 50 nemoclaw-blueprint/policies/presets/discord.yaml | rg -A 10 "^network_policies:"
echo -e "\n=== Sample structure from pypi preset ==="
head -n 50 nemoclaw-blueprint/policies/presets/pypi.yaml | rg -A 10 "^network_policies:"
```

Repository: NVIDIA/NemoClaw
Length of output: 1270

🏁 Script executed:

```shell
#!/bin/bash
# Check actual ollama.yaml content
echo "=== Actual ollama.yaml content ==="
cat nemoclaw-blueprint/policies/presets/ollama.yaml
echo -e "\n=== Full discord.yaml for structure comparison ==="
cat nemoclaw-blueprint/policies/presets/discord.yaml
echo -e "\n=== Full pypi.yaml for structure comparison ==="
cat nemoclaw-blueprint/policies/presets/pypi.yaml
```

Repository: NVIDIA/NemoClaw
Length of output: 4494

🏁 Script executed:

```shell
#!/bin/bash
# Check the test file to see what validations are performed
echo "=== Checking test validations ==="
rg -n "network_policies" nemoclaw-blueprint/test/ --type js -A 5 -B 5
echo -e "\n=== Looking for schema validation or parsing in tests ==="
rg -n "network_policies|preset|egress" nemoclaw-blueprint/test/policies.test.js -A 2 -B 2
```

Repository: NVIDIA/NemoClaw
Length of output: 314

🏁 Script executed:

```shell
#!/bin/bash
# Find test files
echo "=== Searching for test files ==="
find . -name "*.test.js" -o -name "*.test.ts" -o -name "*test*" -type f | grep -i test | head -20
echo -e "\n=== Searching for policies test ==="
find . -name "*policies*test*" -o -name "*test*policies*" | head -10
echo -e "\n=== Checking repository structure ==="
ls -la | head -20
```

Repository: NVIDIA/NemoClaw
Length of output: 2158

🏁 Script executed:

```shell
#!/bin/bash
# Read the test file to understand validations
echo "=== Content of test/policies.test.js ==="
cat test/policies.test.js
```

Repository: NVIDIA/NemoClaw
Length of output: 8295

🏁 Script executed:

```shell
#!/bin/bash
# Find and examine the policies implementation
echo "=== Finding policies implementation ==="
find . -name "policies.js" -o -name "policies.ts" | grep -v test
echo -e "\n=== Reading policies implementation ==="
cat bin/lib/policies.js 2>/dev/null || cat bin/lib/policies.ts 2>/dev/null || find . -path "*/bin/*" -name "policies*" -exec cat {} \;
```

Repository: NVIDIA/NemoClaw
Length of output: 7053

**Critical: Missing `network_policies:` section.**

The preset structure is missing the required `network_policies:` top-level key that the repository's schema test checks for.

Impact: the new preset will fail the existing schema test:
```javascript
it("every preset has network_policies section", () => {
  for (const p of policies.listPresets()) {
    const content = policies.loadPreset(p.name);
    expect(content.includes("network_policies:")).toBeTruthy();
  }
});
```
Fix the schema by replacing the top-level `egress:` key with `network_policies:`.
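A corrected top-level layout might look like the following. The nesting under `network_policies:` is an assumption modeled on the review's comparison with the other presets, not confirmed against the repository:

```yaml
version: "1"
name: ollama
network_policies:
  - host: host-gateway
    ports: [11434]
    methods: [GET, POST]
    binaries: [openclaw]
```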
🧩 Analysis chain

🏁 Script executed:

Repository: NVIDIA/NemoClaw
Length of output: 428

🏁 Script executed:

Repository: NVIDIA/NemoClaw
Length of output: 877
**Update test assertions for new preset.**

The test file `test/policies.test.js` has hard-coded assertions that will fail with the new ollama preset: `expect(presets.length).toBe(9);` must be updated to `10`.