ZeroOS is a modular library OS for zkVM environments. It implements the Linux
userspace syscall interface (syscall ABI + calling convention for the target
ISA) at the syscall trap boundary (e.g., ecall on RISC-V), enabling standard
toolchains and std-based programs to run without toolchain forks or runtime
patches.
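To make the trap-boundary idea concrete, here is a minimal sketch of what handling a RISC-V `ecall` under the Linux syscall convention looks like: `a7` (x17) carries the syscall number, `a0`..`a5` (x10..x15) carry the arguments, and the result is written back to `a0`. The syscall number (`write` = 64 on riscv64 Linux) is real; the handler name and register-file type are illustrative, not ZeroOS's actual API.

```rust
const SYS_WRITE: u64 = 64; // riscv64 Linux syscall number for write

struct TrapFrame {
    regs: [u64; 32], // x0..x31, saved on trap entry
}

fn handle_ecall(tf: &mut TrapFrame) {
    let nr = tf.regs[17]; // a7 = x17 carries the syscall number
    let args = [
        tf.regs[10], tf.regs[11], tf.regs[12],
        tf.regs[13], tf.regs[14], tf.regs[15],
    ]; // a0..a5 = x10..x15 carry the arguments
    let ret: i64 = match nr {
        SYS_WRITE => args[2] as i64, // stub: report all `len` bytes written
        _ => -38, // -ENOSYS: reject unsupported syscalls, never fake success
    };
    tf.regs[10] = ret as u64; // result returns to the guest in a0
}
```

Because this contract is exactly what musl-linked binaries already emit, no toolchain changes are needed on the guest side.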
Forking the toolchain is an alternative, but forks are a long-term maintenance burden (frequent rebases), they expand the TCB (bespoke runtime code), and they fragment the ecosystem. ZeroOS avoids this by operating at the syscall boundary instead.
Currently, ZeroOS has been integrated with Jolt (see the signature recovery
example). The architecture is designed to be zkVM-agnostic: any zkVM that can
expose a syscall/trap boundary (e.g., trap on RISC-V ecall) can integrate
ZeroOS by implementing the syscall contract.
"Compatible" means a binary compiled for a standard target (like
riscv64imac-unknown-linux-musl) can execute correctly inside the zkVM with
ZeroOS. Specifically:
- ISA compatibility: The zkVM executes the RISC-V instructions correctly
- ABI compatibility: The binary's calling conventions and data layouts match what libc and the syscall interface expect
ZeroOS has a modular architecture designed to support multiple ISAs and OS ABIs.
Currently, RISC-V (ISA) and Linux (OS ABI) are supported, targeting binaries
compiled for riscv64imac-unknown-linux-musl. In principle, other ISAs (like
x86-64, ARM) and OS ABIs could be added by implementing the corresponding
architecture-specific and OS-specific modules.
In the current integration, each syscall adds ~128 instructions to the trace: 57 to save registers, 33 in the trap handler, and 38 to restore registers. This is largely a fixed per-syscall cost, but the exact count depends on the zkVM integration/configuration.
Yes. ZeroOS includes a cooperative scheduler with threading primitives (mutexes, condition variables, thread spawning). Rayon can run on top of this (execution remains single-core in the zkVM, but threads are deterministically interleaved).
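The deterministic-interleaving property can be sketched with a toy round-robin run queue: each task runs one slice, cooperatively yields, and is re-enqueued in a fixed order, so every run produces the identical trace. Names and structure here are illustrative, not ZeroOS's scheduler API.

```rust
use std::collections::VecDeque;

struct Task {
    id: usize,
    steps_left: u32,
}

fn run(mut ready: VecDeque<Task>) -> Vec<usize> {
    let mut trace = Vec::new();
    while let Some(mut t) = ready.pop_front() {
        trace.push(t.id); // the task runs one slice, then cooperatively yields
        t.steps_left -= 1;
        if t.steps_left > 0 {
            ready.push_back(t); // fixed re-enqueue order keeps runs identical
        }
    }
    trace
}
```

With no preemption and no timer interrupts, the interleaving is a pure function of the program, which is what lets the zkVM trace capture all thread behavior.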
ZeroOS maintains determinism through single-core execution with no interrupts and no preemption. Thread scheduling is cooperative and deterministic, so all behavior is captured in the zkVM trace.
ZeroOS supports core syscalls across three main categories: memory management, scheduler (threading and synchronization), and I/O. The signature recovery trace shows these syscalls in action during execution.
ZeroOS's modular design architecturally allows for supporting multi-process
semantics like fork, execve, and wait4. However, these are not currently
implemented, and there may be implementation challenges in adapting full process
semantics to the deterministic, single-core zkVM environment. The current focus
is on an in-process runtime model (memory, threads, basic I/O), which covers
most zkVM workloads today.
zkVM execution must be deterministic, so wall-clock time and entropy must be
virtualized or provided as explicit inputs. In the current design, the developer
supplies a seed when initializing the randomness subsystem during
__platform_bootstrap, and all randomness is derived deterministically from
that seed. If/when APIs like getrandom and clock_gettime are supported, they
must be backed by deterministic, trace-committed sources rather than the host
OS.
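A minimal sketch of seed-derived randomness, as described above: all guest entropy comes from one explicit seed, so two runs with the same seed observe the same stream. The PRNG here is SplitMix64, chosen only for brevity; ZeroOS's actual construction may differ.

```rust
// All guest randomness is derived from a single explicit seed (an
// assumption-level sketch; the real subsystem may use a different PRNG).
struct DeterministicRng {
    state: u64,
}

impl DeterministicRng {
    fn from_seed(seed: u64) -> Self {
        Self { state: seed }
    }

    // SplitMix64 step: a well-known mixing function, deterministic by design.
    fn next_u64(&mut self) -> u64 {
        self.state = self.state.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.state;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }
}
```

Because the seed is an explicit input, it can be committed in the trace like any other program input.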
Yes, through a virtual filesystem (VFS). ZeroOS provides device abstractions like console output, and you only link what you need.
The VFS is an abstraction layer, so semantics depend on which filesystem/device modules you link (e.g., console, in-memory, or host-provided). For proofs, any file content that influences execution should be treated as committed input.
ZeroOS follows a fail-fast principle: unsupported syscalls are rejected immediately with a clear error, rather than silently stubbing or returning fake success. This ensures your trace only contains intentional, fully-supported operations.
Start from the syscall log/trace: identify the syscall and arguments, then confirm (1) it is implemented and (2) the relevant module/device is linked and configured. For missing syscalls, the most direct fix is usually to implement the syscall (or a compatibility shim) rather than patch application code.
See the zkVM integration guide for detailed instructions; the Jolt integration serves as a reference implementation.
You need to:
- Memory layout: Declare the guest memory layout (heap, stack regions, etc.) via linker scripts
- Platform bootstrap: Implement
__platform_bootstrap and a few other platform-specific initialization functions
See the zkVM integration guide for complete details on these requirements.
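As a rough sketch of the bootstrap step, the integration wires the linker-script memory layout and the randomness seed into platform initialization. The struct, field names, and signature below are illustrative assumptions, not the real ZeroOS contract; in a real integration the heap bounds would come from linker-provided symbols rather than function arguments.

```rust
// Hypothetical shape of what a __platform_bootstrap implementation sets up.
struct Platform {
    heap_start: u64, // from the linker script's heap region
    heap_end: u64,
    rng_seed: u64,   // explicit input so randomness stays deterministic
}

fn platform_bootstrap(heap_start: u64, heap_end: u64, rng_seed: u64) -> Platform {
    assert!(heap_start < heap_end, "linker script must reserve a heap region");
    assert!(heap_start % 8 == 0, "heap base should be aligned for the target ABI");
    Platform { heap_start, heap_end, rng_seed }
}
```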
Each syscall is an independent compatibility unit: define the Linux-visible semantics, implement them using deterministic primitives, and wire it into the dispatcher. Some are mostly bookkeeping; others require additional devices/subsystems (filesystem, clocks, randomness).
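The "independent compatibility unit" idea can be sketched as a handler trait plus a registry that fails fast on anything unregistered. The trait and type names are assumptions for illustration; the syscall number (`getpid` = 172 on riscv64 Linux) is real.

```rust
use std::collections::BTreeMap;

const ENOSYS: i64 = 38;

// One unit per syscall: Linux-visible semantics behind a common interface.
trait Syscall {
    fn call(&self, args: [u64; 6]) -> i64;
}

struct Getpid;
impl Syscall for Getpid {
    fn call(&self, _args: [u64; 6]) -> i64 {
        1 // single deterministic pid in an in-process runtime model
    }
}

struct Dispatcher {
    table: BTreeMap<u64, Box<dyn Syscall>>, // only linked units are registered
}

impl Dispatcher {
    fn dispatch(&self, nr: u64, args: [u64; 6]) -> i64 {
        match self.table.get(&nr) {
            Some(handler) => handler.call(args),
            None => -ENOSYS, // fail fast: no silent stubs, no fake success
        }
    }
}
```

Wiring a new syscall is then just registering another unit; units that need devices or subsystems (clocks, filesystem, randomness) take those as dependencies.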
No. If your code compiles with standard targets like
riscv64imac-unknown-linux-musl, it can run on ZeroOS without application-level
modifications (see compatibility notes above). The signature recovery example
uses upstream Reth and Rayon crates without maintaining forks.
Yes. That’s the key advantage: you use standard toolchains and std-based
crates, and you inherit upstream security fixes and updates. Compatibility
depends on whether the crate’s runtime stays within the syscalls/devices you’ve
enabled.
ZeroOS is language-agnostic. Any language that compiles to RISC-V and links against a libc can use ZeroOS, as long as the resulting program stays within the supported syscall/device surface.
Yes, with more engineering work planned in the roadmap.
Go has its own runtime (at a similar layer to libc), and that runtime still
requests kernel services via the Linux userspace syscall interface at the trap
boundary (e.g., RISC-V ecall). With ZeroOS providing that syscall layer, Go
programs can run without forking the Go toolchain.
In practice, this works to the extent that the Go runtime’s required syscalls and devices are implemented and enabled in your ZeroOS configuration.
You link only the subsystems your program needs. For example, if you don't spawn threads, you don't link the scheduler. This minimizes both TCB and trace length. The dependency tree slide shows how Jolt features map to ZeroOS modules.
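One way to picture the link-what-you-need model is feature-gated subsystems: code that is not enabled is never compiled in, so it contributes nothing to the TCB or the trace. The feature and subsystem names below are assumptions, not ZeroOS's real configuration surface.

```rust
// Sketch: subsystems present in the build depend on enabled features.
fn enabled_subsystems() -> Vec<&'static str> {
    let mut v = vec!["memory", "io"]; // always linked in this sketch
    if cfg!(feature = "scheduler") {
        v.push("scheduler"); // only present if the program spawns threads
    }
    v
}
```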
ZeroOS includes a memory management subsystem that handles brk and mmap
syscalls. The allocator implementation is pluggable; the example uses a
linked-list allocator.
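To illustrate pluggability, here is a trivial bump allocator behind a hypothetical allocator trait; a linked-list allocator (as in the example) would implement the same interface. Both the trait name and the struct are assumptions for illustration only.

```rust
// Hypothetical interface a pluggable guest allocator might satisfy.
trait GuestAllocator {
    fn alloc(&mut self, size: u64, align: u64) -> Option<u64>;
}

// Simplest possible implementation: bump a pointer through a fixed heap.
struct Bump {
    next: u64, // next free address
    end: u64,  // end of the heap region
}

impl GuestAllocator for Bump {
    fn alloc(&mut self, size: u64, align: u64) -> Option<u64> {
        debug_assert!(align.is_power_of_two());
        let start = self.next.checked_add(align - 1)? & !(align - 1); // round up
        let new_next = start.checked_add(size)?;
        if new_next > self.end {
            return None; // out of heap: report failure, don't fake success
        }
        self.next = new_next;
        Some(start)
    }
}
```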
The TCB is modular and depends on which subsystems you link. The fail-fast design and shared codebase across zkVMs help consolidate security audits compared to maintaining separate toolchain forks.
ZeroOS successfully runs the Jolt signature recovery workload and produces verifiable proofs. The project demonstrates feasibility. Check the repository for current status and limitations.
Compute-heavy workloads that fit a single-process model but want standard
toolchains/std (e.g., parsers, cryptography, signature verification). If you
need extensive OS services (networking, complex filesystems, multi-process
orchestration), expect additional integration work or a larger syscall/device
surface.
- Integrating ZeroOS in a zkVM: github.com/LayerZero-Research/jolt/tree/gx/integrate-zeroos
- Example application: github.com/zouguangxian/jolt-on-zeroos
Multiple zkVM projects can share one OS implementation instead of each maintaining separate toolchain forks. This consolidates effort and reduces the total TCB across the ecosystem. Reach out to discuss integration.
Implement the syscall contract once in your zkVM, integrate ZeroOS, and stop carrying bespoke runtime forks. This shifts the burden from "every language × every zkVM needs patches" to "each zkVM integrates once, all languages work."