Void-Box

Hardware-isolated micro-VMs for AI agents — bring any model, run any pipeline, audit every step.

CI License Rust 1.88+ Docs

void-box runs each agent stage inside its own micro-VM — Claude, Codex, Ollama, or any tool you put on PATH — with hardware isolation, OTLP telemetry, and sub-second snapshot/restore.

Docs · Examples · Getting Started · Architecture

Star us on GitHub — it helps the project a lot!


hn_demo — two-stage stock analysis pipeline

What you build with void-box

Real workflows you can run today. Every stage executes inside its own KVM (Linux) or Virtualization.framework (macOS) micro-VM — no shared kernel, no shared blast radius.

🔬 Multi-stage research pipelines

  • HackerNews researcher — autonomous research agent that fetches, ranks, and summarizes top stories. Skills declared as files; one VM per run. (examples/hackernews/)
  • Quant trading pipeline — four sequential stages (data → technical analysis → sentiment → portfolio strategy), each in its own micro-VM with its own skill set. (examples/trading_pipeline.rs)
  • Parallel fan-out — .fan_out() runs branches in parallel, each in its own micro-VM (the example splits into a quant and a sentiment branch), then .pipe() merges the outputs into a downstream reducer stage. (examples/parallel_pipeline.rs)
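The fan-out/merge shape above can be sketched in plain Rust. This is an illustration of the composition pattern only, not the void-box API: the stage functions and names (quant_branch, sentiment_branch, reducer) are hypothetical, and each closure here is an ordinary thread rather than a micro-VM.

```rust
// Pattern sketch: fan-out two branches in parallel, then pipe both outputs
// into a reducer. In void-box each stage would run in its own micro-VM;
// here they are plain functions on std::thread for illustration.
use std::thread;

// Hypothetical branch stages — each analyzes the same input independently.
fn quant_branch(input: &str) -> String {
    format!("quant({input})")
}

fn sentiment_branch(input: &str) -> String {
    format!("sentiment({input})")
}

// Hypothetical reducer stage: merges branch outputs, like a `.pipe()` target.
fn reducer(branches: Vec<String>) -> String {
    branches.join(" + ")
}

fn main() {
    let input = "AAPL".to_string();

    // Fan-out: both branches start in parallel, mirroring `.fan_out()`.
    let a = {
        let i = input.clone();
        thread::spawn(move || quant_branch(&i))
    };
    let b = {
        let i = input.clone();
        thread::spawn(move || sentiment_branch(&i))
    };

    // Merge: collect both results and feed them to the downstream reducer.
    let merged = reducer(vec![a.join().unwrap(), b.join().unwrap()]);
    println!("{merged}"); // → quant(AAPL) + sentiment(AAPL)
}
```

The key property the real pipeline adds on top of this shape is the failure domain: a crash or compromise in one branch's VM cannot touch the other branch or the reducer.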

🤖 Code review & PR automation

  • Two-stage review pipeline — analyzer stage clones the repo and proposes fixes; proposer stage opens the GitHub PR. Each runs in its own micro-VM; the GITHUB_TOKEN is scoped to the proposer alone, so a prompt injection that compromises the analyzer can't reach the PR-opening machinery. (examples/code_review/)
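The scoping idea can be made concrete with a small sketch. This is an illustration of the principle, not the void-box configuration API: stage_env and the token value are hypothetical. Each stage gets its own environment map, and the token appears only in the proposer's map, so the stage that handles untrusted repo content never holds it.

```rust
// Principle sketch: per-stage credential scoping. Hypothetical helper —
// void-box's real spec format for this is not shown here.
use std::collections::HashMap;

fn stage_env(stage: &str) -> HashMap<String, String> {
    let mut env = HashMap::new();
    env.insert("STAGE".into(), stage.to_string());
    if stage == "proposer" {
        // Injected only into the PR-opening stage (placeholder value).
        env.insert("GITHUB_TOKEN".into(), "ghp_example".into());
    }
    env
}

fn main() {
    let analyzer = stage_env("analyzer");
    let proposer = stage_env("proposer");

    // The analyzer, which parses untrusted repo content, never sees the token.
    assert!(!analyzer.contains_key("GITHUB_TOKEN"));
    assert!(proposer.contains_key("GITHUB_TOKEN"));
    println!("GITHUB_TOKEN scoped to proposer only");
}
```

Combined with the per-stage VM boundary, this means a prompt injection in the analyzer has neither the credential nor a shared kernel through which to steal it.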

📡 Long-running agent gateways

  • OpenClaw Telegram bot — a persistent gateway that runs as a service-mode workflow step and accepts commands over Telegram. Companion specs (openclaw_telegram.yaml, _ollama.yaml, _lmstudio.yaml) demonstrate Claude, Ollama, and LM Studio backends. (examples/openclaw/openclaw_telegram.yaml)

🏠 Local-first model experimentation

👉 Browse all examples →


Why void-box is different

  • 🛡 Hardware-isolated stages — KVM (Linux) / Virtualization.framework (macOS) boundary per stage; not shared-process containers, not advisory namespaces.
  • ⚡ Sub-second snapshot & restore — warm restore in ~138 ms, cold in ~252 ms. Fork agents from a snapshot instead of cold-booting per task.
  • 🔌 Vendor-neutral providers — Claude, OpenAI Codex, Ollama, LM Studio, OpenRouter, or any Anthropic-compatible endpoint, selected via one config field.
  • 📦 OCI-native — auto-pulls guest images from GHCR; mount container images as base rootfs or as skill providers via overlay.
  • 📊 OTLP-native observability — traces, metrics, structured logs, and stage-level telemetry emitted by design, not bolted on.
  • 🔓 No root required — usermode SLIRP networking via smoltcp; no TAP devices, no elevated privileges, no host network reach beyond what you allow.

Works with the agents and tools you already use

Claude Code · OpenAI Codex · Ollama · LM Studio · OpenRouter · Together AI · any Anthropic-compatible endpoint · MCP servers · OCI base images (GHCR) · OpenTelemetry · Grafana Tempo · Prometheus · 9p / virtiofs host mounts · …and any CLI you can put on PATH.


Your data stays yours

  • Hardware boundary per stage — KVM/VZ isolation enforced by the CPU, not by the kernel or by process controls.
  • Defense-in-depth — seccomp-BPF on the VMM thread, session-secret auth on the vsock control channel, uid:1000 privilege drop, and SLIRP NAT isolation.
  • Credentials never persist — host OAuth tokens are mounted read-only; API keys are injected as session-scoped env vars and never written to disk inside the guest.
  • Fully auditable — every stage emits OTLP traces, metrics, and structured logs. Nothing in the run is a black box.
  • Open source · self-hostable — Apache-2.0. This repo. Inspect, fork, run on your own metal.

Read the Security overview.


Get started

curl -fsSL https://raw.githubusercontent.com/the-void-ia/void-box/main/scripts/install.sh | sh
voidbox run --file examples/hackernews/hackernews_agent.yaml

Other install options, first-run setup, env vars, and provider auth → Getting Started.


Documentation

  • Architecture — component diagram, data flow, security model
  • Runtime Model — LLM providers, skill types, agent binaries
  • CLI + TUI — command reference, daemon API
  • YAML Specs — declarative agent and pipeline definitions
  • Pipeline Composition — .pipe(), .fan_out(), failure domains
  • OCI Containers — guest images, base images, OCI skills
  • Snapshots — sub-second VM restore, snapshot types
  • Host Mounts — 9p / virtiofs host directory sharing
  • Events + Observability — OTLP traces, metrics, event types
  • Security Model — defense-in-depth, seccomp, session auth
  • Wire Protocol — vsock framing, message types

Platform setup: Linux · macOS · Local LLMs · Observability stack


Roadmap

Where we're headed. Current focus is hardening the security boundary and squeezing more out of the snapshot/restore path. We'll be sharing the work as it lands — follow along on voidplatform.ai/updates.

Up next, after the security and performance push:

  • Session persistence — Durable run/session state with pluggable backends (filesystem, SQLite, Valkey).
  • Terminal-native interactive experience — Panel-based, live-streaming TUI powered by the event API.
  • Language bindings — Python and Node.js SDKs for daemon-level integration.

License

Apache-2.0 · The Void Platform