void-box runs each agent stage inside its own micro-VM — Claude, Codex, Ollama, or any tool you put on PATH — with hardware isolation, OTLP telemetry, and sub-second snapshot/restore.
Docs · Examples · Getting Started · Architecture
⭐ Star us on GitHub — it helps the project a lot! ⭐
Real workflows you can run today. Every stage executes inside its own KVM (Linux) or Virtualization.framework (macOS) micro-VM — no shared kernel, no shared blast radius.
- HackerNews researcher — autonomous research agent that fetches, ranks, and summarizes top stories. Skills declared as files; one VM per run. (examples/hackernews/)
- Quant trading pipeline — four sequential stages (data → technical analysis → sentiment → portfolio strategy), each in its own micro-VM with its own skill set. (examples/trading_pipeline.rs)
- Parallel fan-out — `.fan_out()` runs branches in parallel, each in its own micro-VM (the example splits into a quant and a sentiment branch), then `.pipe()` merges the outputs into a downstream reducer stage. (examples/parallel_pipeline.rs)
- Two-stage review pipeline — analyzer stage clones the repo and proposes fixes; proposer stage opens the GitHub PR. Each runs in its own micro-VM; the `GITHUB_TOKEN` is scoped to the proposer alone, so a prompt injection that compromises the analyzer can't reach the PR-opening machinery. (examples/code_review/)
- OpenClaw Telegram bot — long-running gateway running as a service-mode workflow step that accepts commands over Telegram. Companion specs (`openclaw_telegram.yaml`, `_ollama.yaml`, `_lmstudio.yaml`) demonstrate Claude, Ollama, and LM Studio backends. (examples/openclaw/openclaw_telegram.yaml)
- Ollama / LM Studio backends — reach a local model through the SLIRP gateway (`10.0.2.2:<port>`). No API key, no SaaS round-trip: your prompts and the model's responses never leave the host. (examples/ollama_local.rs, examples/lm_studio_local.rs)
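The fan-out/merge shape in the parallel example can be sketched in plain Rust (standard library threads only; no void-box APIs, and none of the micro-VM isolation the real pipeline adds):

```rust
use std::thread;

// Conceptual sketch only: two branches run concurrently ("fan out"), then a
// reducer merges both outputs ("pipe"). In void-box each branch would run in
// its own micro-VM instead of a thread.
fn reducer(a: &str, b: &str) -> String {
    format!("merge[{a} + {b}]")
}

fn main() {
    let input = "AAPL";

    // Fan out: each branch computes independently.
    let quant = thread::spawn(move || format!("quant({input})"));
    let sentiment = thread::spawn(move || format!("sentiment({input})"));

    // Pipe: the downstream reducer stage receives both branch outputs.
    let merged = reducer(&quant.join().unwrap(), &sentiment.join().unwrap());
    assert_eq!(merged, "merge[quant(AAPL) + sentiment(AAPL)]");
    println!("{merged}");
}
```

The real API adds per-branch failure domains on top of this shape; see examples/parallel_pipeline.rs for the actual `.fan_out()` / `.pipe()` calls.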
| Feature | Why it matters |
| --- | --- |
| 🛡 Hardware-isolated stages | KVM (Linux) / Virtualization.framework (macOS) boundary per stage — not shared-process containers, not advisory namespaces. |
| ⚡ Sub-second snapshot & restore | Warm restore in ~138 ms, cold in ~252 ms. Fork agents from a snapshot instead of cold-booting per task. |
| 🔌 Vendor-neutral providers | Claude, OpenAI Codex, Ollama, LM Studio, OpenRouter, or any Anthropic-compatible endpoint — selected via one config field. |
| 📦 OCI-native | Auto-pulls guest images from GHCR; mount container images as base rootfs or as skill providers via overlay. |
| 📊 OTLP-native observability | Traces, metrics, structured logs, and stage-level telemetry emitted by design — not bolted on. |
| 🔓 No root required | Usermode SLIRP networking via smoltcp — no TAP devices, no elevated privileges, no host network reach beyond what you allow. |
Claude Code · OpenAI Codex · Ollama · LM Studio · OpenRouter · Together AI · any Anthropic-compatible endpoint · MCP servers · OCI base images (GHCR) · OpenTelemetry · Grafana Tempo · Prometheus · 9p / virtiofs host mounts · …and any CLI you can put on PATH.
- Hardware boundary per stage — KVM/VZ isolation enforced by the CPU, not by the kernel or by process controls.
- Defense-in-depth — seccomp-BPF on the VMM thread, session-secret auth on the vsock control channel, uid:1000 privilege drop, and SLIRP NAT isolation.
- Credentials never persist — host OAuth tokens are mounted read-only; API keys are injected as session-scoped env vars and never written to disk inside the guest.
- Fully auditable — every stage emits OTLP traces, metrics, and structured logs. Nothing in the run is a black box.
- Open source · self-hostable — Apache-2.0. This repo. Inspect, fork, run on your own metal.
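The credential-scoping idea above can be illustrated with nothing but the standard library: clear the inherited environment and inject exactly one variable, so a compromised stage has nothing else to exfiltrate. (This is a conceptual sketch, not void-box's implementation.)

```rust
use std::process::Command;

// Conceptual sketch (std only, no void-box APIs): scope a credential to exactly
// one child process by clearing the inherited environment and injecting a
// single session-scoped variable.
fn main() {
    let out = Command::new("/bin/sh")
        .arg("-c")
        .arg("printf '%s|%s' \"$STAGE_TOKEN\" \"$OTHER_SECRET\"")
        .env_clear()                       // child inherits nothing from the host
        .env("STAGE_TOKEN", "scoped-123")  // inject only this stage's credential
        .output()
        .expect("failed to spawn child");

    // The child sees its scoped token; any other host variable is simply absent.
    let seen = String::from_utf8_lossy(&out.stdout).into_owned();
    assert_eq!(seen, "scoped-123|");
    println!("child saw: {seen}");
}
```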
Read the Security overview.
```sh
curl -fsSL https://raw.githubusercontent.com/the-void-ia/void-box/main/scripts/install.sh | sh
voidbox run --file examples/hackernews/hackernews_agent.yaml
```

Other ways to install:
- Homebrew (macOS): `brew install the-void-ia/tap/voidbox`
- Rust (CLI only): `cargo install void-box`
- Debian / Fedora / tarballs: voidplatform.ai/docs/installation
First run, env vars, and provider auth → Getting Started.
| Doc | Covers |
| --- | --- |
| Architecture | Component diagram, data flow, security model |
| Runtime Model | LLM providers, skill types, agent binaries |
| CLI + TUI | Command reference, daemon API |
| YAML Specs | Declarative agent and pipeline definitions |
| Pipeline Composition | .pipe(), .fan_out(), failure domains |
| OCI Containers | Guest images, base images, OCI skills |
| Snapshots | Sub-second VM restore, snapshot types |
| Host Mounts | 9p / virtiofs host directory sharing |
| Events + Observability | OTLP traces, metrics, event types |
| Security Model | Defense-in-depth, seccomp, session auth |
| Wire Protocol | vsock framing, message types |
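For flavor only, a declarative agent spec might look something like the sketch below. Every field name here is invented for illustration; the real schema lives in the YAML Specs doc and in shipped specs such as examples/hackernews/hackernews_agent.yaml.

```yaml
# Illustrative sketch only -- field names are hypothetical, not the real schema.
name: hn-researcher
provider: claude          # or: codex, ollama, lmstudio, openrouter
skills:
  - ./skills/fetch_stories.md
  - ./skills/summarize.md
prompt: |
  Fetch today's top HackerNews stories, rank them, and summarize the top five.
```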
Platform setup: Linux · macOS · Local LLMs · Observability stack
Where we're headed. Current focus is hardening the security boundary and squeezing more out of the snapshot/restore path. We'll be sharing the work as it lands — follow along on voidplatform.ai/updates.
Up next, after the security and performance push:
- Session persistence — Durable run/session state with pluggable backends (filesystem, SQLite, Valkey).
- Terminal-native interactive experience — Panel-based, live-streaming TUI powered by the event API.
- Language bindings — Python and Node.js SDKs for daemon-level integration.
Apache-2.0 · The Void Platform

