A multi-user agent runtime for teams that deploy AI agents on their own infrastructure. OS-level user isolation. Per-session sandboxing. Segmented networking. Agents get a real shell and work freely - inside boundaries enforced by the operating system, not the model.
YOLO-style agent harnesses proved that giving a model a shell and letting it work is genuinely useful.
Their architecture reflects their origin: a single user on a personal machine. The agent runs as you, sees your files, and reaches the open internet - because on your own machine, the blast radius is just you.
Teams need agents that reach internal systems - codebases, tickets, APIs behind your perimeter. But when an agent with internal access also has an open path to the internet, any compromise - prompt injection, malicious input, poisoned web content - becomes a data exfiltration path. The blast radius is no longer one person's laptop; it's everything the agent can reach on your network.
OpenRig separates access from exposure. Each user maps to a dedicated Unix account with their own credentials and service tokens - the agent operates with exactly that user's permissions, not a shared service account. Each session runs in a firejail sandbox with a private filesystem, isolated network namespace, and hard resource limits. The public internet is unreachable by default - agents reach internal systems but cannot exfiltrate to the outside. A compromised session is contained to one sandbox, one user, one network segment.
The isolation is what makes the access tenable.
OpenRig runs on your infrastructure - on-premise or in a private cloud. Pair it with a self-hosted inference backend like vLLM or llama.cpp and nothing leaves your network: user data, model traffic, and agent activity all stay inside your boundary. The runtime has no telemetry and makes no calls beyond the inference endpoint you configure.
Sandbox networking controls what each agent session can reach:
| Policy | Behavior |
|---|---|
| `intranet` (default) | Agents reach private RFC 1918 subnets. Public internet is blocked. |
| `none` | Total network isolation. No bridge, no DNS. |
| `internet` | Full outbound internet via NAT. |
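The `intranet` policy's split between private and public destinations follows RFC 1918. A minimal sketch of that decision using Python's standard `ipaddress` module (the function name is illustrative, not OpenRig's API; the actual enforcement is done by iptables, not in Python):

```python
import ipaddress

# The three RFC 1918 private ranges from the default intranet allowlist.
PRIVATE_NETS = (
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
)

def intranet_policy_allows(dest_ip: str) -> bool:
    """Allow only RFC 1918 private destinations (sketch of the default policy)."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in PRIVATE_NETS)

print(intranet_policy_allows("10.20.0.5"))      # internal service -> True
print(intranet_policy_allows("93.184.216.34"))  # public internet  -> False
```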
DNS filtering adds a second layer on top of network policy. Network policy blocks direct internet access, but data can still be exfiltrated by encoding it in DNS queries that the resolver forwards upstream. DNS filtering closes this channel: only allowlisted domains resolve; everything else returns NXDOMAIN and never leaves the host. It is off by default because it requires an allowlist, and recommended for production deployments.
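The filtering itself lives in the resolver configuration (dnsmasq, per the architecture notes below), but the decision rule is simple enough to sketch. The function and domain names here are hypothetical illustrations, not OpenRig's implementation:

```python
def dns_filter(query_name: str, allowlist: set[str]) -> str:
    """Decide whether a DNS query is forwarded upstream or answered
    locally with NXDOMAIN. Sketch of the allowlist rule only."""
    name = query_name.rstrip(".").lower()
    # Match an allowlisted domain itself or any subdomain of it.
    if any(name == d or name.endswith("." + d) for d in allowlist):
        return "FORWARD"   # forwarded to the upstream resolver
    return "NXDOMAIN"      # answered locally; nothing leaves the host

allow = {"gitlab.internal.example"}
print(dns_filter("gitlab.internal.example", allow))         # FORWARD
# An exfiltration attempt encodes data in the query name itself;
# with filtering on, the query never reaches the upstream resolver.
print(dns_filter("aGVsbG8.exfil.attacker.example", allow))  # NXDOMAIN
```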
OpenRig uses three isolation layers, each for a different purpose.
Docker packages and deploys the runtime. It makes OpenRig reproducible and portable across Linux hosts. Docker is not the isolation boundary between users.
Per-user Unix accounts give each application user a dedicated OS identity.
Home directories, file ownership, and process ownership are separated at the kernel
level. Each account is non-login (nologin shell) - it exists for ownership and
isolation, not interactive access.
Firejail sandboxes isolate each agent session. Every session gets its own filesystem view, network namespace, seccomp filtering, dropped capabilities, and resource limits. The agent runs real shell commands with full autonomy inside the sandbox. The sandbox boundary - not content filtering - is the security control. If a session is compromised, the blast radius is one sandbox, not the system.
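The sandbox properties above map onto firejail command-line flags. A hedged sketch of how such an invocation might be assembled; the flag selection is illustrative and the function is not OpenRig's actual session wrapper:

```python
def build_firejail_cmd(bridge: str, cpu_seconds: int, argv: list[str]) -> list[str]:
    """Assemble an illustrative firejail invocation with the isolation
    properties described above. Real deployments would use a tuned profile."""
    return [
        "firejail",
        "--private",                    # fresh, private filesystem view
        f"--net={bridge}",              # own network namespace on the sandbox bridge
        "--seccomp",                    # default seccomp syscall filter
        "--caps.drop=all",              # drop all Linux capabilities
        f"--rlimit-cpu={cpu_seconds}",  # hard CPU-time limit
        "--",
        *argv,                          # the agent's actual shell command
    ]

cmd = build_firejail_cmd("br-intranet", 600, ["bash", "-lc", "make test"])
print(" ".join(cmd))
```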
Templates define what each agent can do. Instructions, available skills, and inference settings are all set per template. A three-layer hierarchy - system, role, user - gives operators granular control over agent behavior without per-session configuration.
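The three-layer hierarchy resolves bottom-up: role settings override system defaults, and user settings override both. A minimal sketch of the layering idea (OpenRig's real resolver also discovers skills and instructions, which are omitted here):

```python
def resolve_template(system: dict, role: dict, user: dict) -> dict:
    """Later layers override earlier ones: system < role < user."""
    resolved: dict = {}
    for layer in (system, role, user):
        resolved.update(layer)
    return resolved

settings = resolve_template(
    {"temperature": 0.2, "max_tokens": 1024},  # system-wide defaults
    {"max_tokens": 4096},                      # role-level override
    {"temperature": 0.7},                      # user-level override
)
print(settings)  # {'temperature': 0.7, 'max_tokens': 4096}
```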
Persistent storage gives each user a directory that carries across sessions. Contents are copied into the sandbox at session start and synced back when commands complete. Agents retain state - files, notes, intermediate work products - without breaking the sandbox boundary. Managed through the web UI file manager.
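The copy-in/sync-back round trip can be sketched with standard-library tools. Function names and the copy strategy are illustrative only; OpenRig's actual sync mechanism is not shown here:

```python
import pathlib
import shutil
import tempfile

def copy_in(persistent: pathlib.Path, sandbox_work: pathlib.Path) -> None:
    """Session start: materialize the user's persistent directory inside the sandbox."""
    shutil.copytree(persistent, sandbox_work, dirs_exist_ok=True)

def sync_back(sandbox_work: pathlib.Path, persistent: pathlib.Path) -> None:
    """After commands complete: copy results back; the sandbox is then discarded."""
    shutil.copytree(sandbox_work, persistent, dirs_exist_ok=True)

# Illustrative round trip in temporary directories.
with tempfile.TemporaryDirectory() as tmp:
    persistent = pathlib.Path(tmp, "persistent")
    sandbox = pathlib.Path(tmp, "sandbox")
    persistent.mkdir()
    (persistent / "notes.md").write_text("carried across sessions\n")

    copy_in(persistent, sandbox)                   # agent sees notes.md in the sandbox
    (sandbox / "result.txt").write_text("done\n")  # agent produces new work
    sync_back(sandbox, persistent)                 # result.txt persists for next session

    print(sorted(p.name for p in persistent.iterdir()))  # ['notes.md', 'result.txt']
```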
Cronjobs run agents on a schedule - every 30 minutes, hourly, daily, or weekly - for unattended recurring tasks. Admin-controlled and feature-toggled.
Secret management stores user credentials (API keys, service tokens) with encryption. Admin-set defaults can be overridden per user. Credentials are scoped to each user's sessions, backing the per-user permission model described above.
Admin panel provides session monitoring, user management, inference configuration, network policy, feature toggles, and an action log. Operators manage the runtime from a web UI without touching config files.
File manager lets users upload and manage files within their isolated workspace, including their persistent storage.
Inference scheduling distributes model access fairly across users with round-robin dispatch and concurrency limits. One user's workload does not starve the rest.
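The fairness property can be sketched as a round-robin dispatcher that takes one request per user per pass, subject to a per-user concurrency cap. Class and parameter names are hypothetical, not OpenRig's API:

```python
from collections import deque

class RoundRobinDispatcher:
    """Illustrative fair dispatch: cycle through users with queued requests,
    respecting a per-user cap on in-flight work."""

    def __init__(self, max_concurrent_per_user: int = 2):
        self.cap = max_concurrent_per_user
        self.queues: dict[str, deque] = {}
        self.in_flight: dict[str, int] = {}
        self.order: deque = deque()   # round-robin cursor over users

    def submit(self, user: str, request: str) -> None:
        if user not in self.queues:
            self.queues[user] = deque()
            self.in_flight[user] = 0
            self.order.append(user)
        self.queues[user].append(request)

    def next_request(self):
        """Pick the next runnable request, one user at a time, in rotation."""
        for _ in range(len(self.order)):
            user = self.order[0]
            self.order.rotate(-1)
            if self.queues[user] and self.in_flight[user] < self.cap:
                self.in_flight[user] += 1
                return user, self.queues[user].popleft()
        return None   # everyone is idle or at their concurrency cap

    def done(self, user: str) -> None:
        self.in_flight[user] -= 1

d = RoundRobinDispatcher(max_concurrent_per_user=2)
for r in ("r1", "r2", "r3"):
    d.submit("alice", r)
d.submit("bob", "r4")
print(d.next_request())  # ('alice', 'r1')
print(d.next_request())  # ('bob', 'r4') -- bob is not starved by alice's backlog
```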
Authentication supports local accounts and LDAP/Active Directory with auto-provisioning and group-to-admin mapping. LDAP support is still maturing. OAuth2 is planned.
Matrix/Element integration lets users interact with agents via chat threads, each backed by an isolated sandbox session. Work in progress. Slack integration is planned.
The OpenRig container runs privileged because the runtime creates its own isolation boundaries inside the container:
- creating and managing Unix users and home directories
- configuring the sandbox bridge network and dnsmasq
- applying iptables network policy
- managing firejail sandboxes and cgroup resource limits
Docker is the deployment vehicle. The trust model is:
- trust the OpenRig container as the runtime appliance
- do not trust one user with another user's data
- do not trust one agent session with access to the rest of the system
Requirements:

- Linux host with cgroup v2 and iptables (Ubuntu recommended)
- Docker Engine with Compose support
- Permission to run privileged containers
```shell
git clone https://github.com/EliasOenal/OpenRig.git
cd OpenRig
cp .env.example .env
```

For a local demo with the bundled mock inference backend:

```shell
OPENRIG_USE_MOCK_INFERENCE=1 ./scripts/docker-deploy-up.sh
```

For a real inference backend:

```shell
export OPENRIG_BASE_URL=http://your-inference-host:8000/v1
export OPENRIG_API_KEY=your-api-key
export OPENRIG_MODEL=your-model-name
./scripts/docker-deploy-up.sh
```

Open http://127.0.0.1:8080 and sign in with the bootstrap admin credentials from your environment file.
By default:

- username: `admin@example.com`
- password: `ChangeMeNow!123`
Change these before any real deployment.
Inference can also be configured after sign-in from the Admin UI if you did not set a backend at startup.
The runtime also runs as a single container backed by named volumes:
```shell
docker run --name openrig \
  --privileged \
  -p 8080:8080 \
  -e OPENRIG_BOOTSTRAP_ADMIN=admin@example.com \
  -e OPENRIG_BOOTSTRAP_ADMIN_PASSWORD=ChangeMeNow!123 \
  -v openrig_data:/var/lib/openrig \
  -v openrig_run:/run/openrig \
  -v openrig_homes:/home \
  openrig/openrig:latest
```

After first launch, inference can be configured from the Admin UI.
```shell
./scripts/docker-deploy-up.sh                        # Start
./scripts/docker-deploy-down.sh                      # Stop
./scripts/docker-logs.sh                             # View logs
OPENRIG_DEPLOY_BUILD=0 ./scripts/docker-deploy-up.sh # Skip local build
```

OpenRig talks to OpenAI-compatible inference APIs. Tested with vLLM and llama.cpp.
Configuration can be set through environment variables at startup or changed at runtime from the Admin UI.
OpenRig ships with a bridge-oriented sandbox network:
- Bridge: `br-intranet`
- Sandbox DNS: `10.20.0.1`
- Intranet allowlist: `10.0.0.0/8`, `172.16.0.0/12`, `192.168.0.0/16`
These are product defaults for the isolated bridge, not environment-specific host addresses. All are configurable.
- `openrig-web` - web UI, user-facing API, admin interface
- `openrig-supervisor` - privileged daemon for user provisioning, sandbox lifecycle, network policy
- `SessionManager` - session lifecycle, scheduling, coordination
- `SandboxSession` - firejail execution wrapper
- `TemplateResolver` - template and skill discovery with layered resolution
- `LLMClient` - OpenAI-compatible inference with streaming and token tracking
- `src/openrig/` - application code
- `tests/` - unit, integration, and Playwright E2E coverage
- `docker/` - Dockerfiles and container entrypoints
- `server/` - host-side network setup helpers
- `scripts/` - build, deploy, smoke, and test workflow helpers
Docker-based validation at several levels:
| Lane | Command | Scope |
|---|---|---|
| Smoke | `./scripts/test-smoke.sh` | Deploy artifact smoke test |
| Fast | `./scripts/test-fast.sh` | Smoke + focused core regression |
| Full | `./scripts/test-full.sh` | Smoke + complete Dockerized suite |
| Targeted | `./scripts/docker-test-targeted.sh tests/test_session.py -v` | Single file or pattern |
| All | `./scripts/docker-test-all.sh` | Full Docker test container |
Lane scripts take a repo-local lock and are not parallel-safe: they run one at a time to avoid interference over shared Docker resources.
```shell
python -m venv .venv
. .venv/bin/activate
pip install -e ".[dev]"
ruff check src tests
mypy src/openrig --ignore-missing-imports
```

BSD-3-Clause-Clear. See LICENSE for details.
