Bolt-on incremental execution for the shell. Incr wraps shell commands to track their file dependencies and memoize their results, so that unchanged commands are skipped on re-execution and their outputs are replayed from cache.
The quickest path is Ubuntu 22.04 with the bootstrap script:

```sh
curl -fsSL https://raw.githubusercontent.com/atlas-brown/incr/main/scripts/up.sh | sh
cd ~/incr
```

The bootstrap script installs:
- Rust via `rustup`, if needed
- Ubuntu packages: `git`, `mergerfs`, `strace`, `python3-pip`, `curl`, `ca-certificates`, `build-essential`, `pkg-config`, `libssl-dev`, and `libtool`
- Python dependencies from `requirements.txt`
- the release binary via `cargo build --release`
Ubuntu 22.04 is the supported environment for these setup steps. Newer Ubuntu releases may require extra adjustments due to newer Python packaging and toolchain behavior.
If you prefer to install manually on Ubuntu 22.04:
- Update packages:

  ```sh
  sudo apt update && sudo apt upgrade -y
  ```

- Install system dependencies:

  ```sh
  sudo apt install -y git mergerfs strace python3-pip curl ca-certificates build-essential pkg-config libssl-dev libtool
  ```

- Install Rust via `rustup`:

  ```sh
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  ```

- Install Python dependencies:

  ```sh
  pip3 install --no-cache-dir -r requirements.txt
  ```

- Build the release binary:

  ```sh
  cargo build --release
  ```

See INSTRUCTIONS.md for full evaluation instructions.
To build and run inside Docker:

```sh
docker build -t incr .
docker run -it --rm --privileged incr
```

Toggle `DEBUG` and `DEBUG_LOGS` in `src/config.rs` for debug output.
incr intercepts shell command execution to memoize results. On re-execution, it replays cached stdout/stderr and file outputs when the command's inputs, environment, and file dependencies are unchanged, using strace and an OverlayFS sandbox to track side effects.
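To make the memoization idea concrete, here is a minimal toy sketch. It is illustrative only: Incr's real engine uses strace and an OverlayFS sandbox to track all side effects, whereas this version caches stdout alone, keyed on the command text plus the contents of a single explicitly declared input file. The cache directory and helper name are invented for this example.

```sh
#!/bin/sh
# Toy memoization sketch (NOT Incr's implementation): cache a command's
# stdout, keyed on the command string plus one declared input file.
CACHE_DIR="${TMPDIR:-/tmp}/memo-cache-demo"
mkdir -p "$CACHE_DIR"

memo() {
  cmd="$1"; input="$2"
  # Cache key: hash of the command text concatenated with the input file.
  key=$(printf '%s' "$cmd" | cat - "$input" | sha256sum | cut -d' ' -f1)
  entry="$CACHE_DIR/$key"
  if [ -f "$entry" ]; then
    cat "$entry"                            # cache hit: replay stored stdout
  else
    sh -c "$cmd" < "$input" | tee "$entry"  # cache miss: run and record
  fi
}

printf 'hello hello world\n' > /tmp/memo-demo-input.txt
memo "tr ' ' '\n' | sort | uniq -c" /tmp/memo-demo-input.txt
```

The second invocation with the same command and unchanged input replays the recorded stdout instead of re-running the command; if the input file changes, the key changes and the command re-executes. Incr generalizes this to arbitrary file dependencies discovered via tracing, rather than a single declared input.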
- `src/main.rs` - CLI entrypoint that selects an execution strategy for each command.
- `src/command.rs` - Represents a command invocation and handles spawning child processes.
- `src/execution/` - Execution engines that manage tracing, caching, and replaying command results.
- `src/cache/` - Stores and retrieves memoized outputs and file dependency information.
- `src/config.rs` - Runtime and compile-time configuration constants.
- `src/scripts/` - Helper scripts for parsing trace output and rewriting shell scripts to use incr.
To sanity-check the install with a minimal example:
```sh
./incr.sh ./evaluation/hello-world.sh
```

This should print the same `Hello, world!`-style output as the underlying shell script, while exercising the `incr.sh` entrypoint.
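A stand-in for such a minimal script could look like the following (hypothetical contents; the actual `evaluation/hello-world.sh` in the repository may differ):

```sh
#!/bin/sh
# Hypothetical stand-in for evaluation/hello-world.sh; the real script's
# exact contents and output may differ.
echo "Hello, world!"
```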
The evaluation/war-and-peace pipeline counts word frequencies. Run the combined harness:
```sh
./evaluation/war-and-peace/test.sh
```

This runs:
- the baseline Bash pipeline,
- a cold Incr run, and
- a warm Incr run that should reuse cached results.
It checks that both Incr outputs match the baseline. Clean up with `bash ./evaluation/war-and-peace/clean.sh`.
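For intuition, a word-frequency pipeline of the general shape this benchmark exercises can be written as a classic shell one-liner (illustrative; the benchmark's actual script and options may differ, and the input path here is invented):

```sh
# Classic word-frequency pipeline: split on non-letters, lowercase,
# count occurrences, and sort by frequency (illustrative only).
printf 'War and peace and war\n' > /tmp/wf-demo-input.txt
tr -cs '[:alpha:]' '\n' < /tmp/wf-demo-input.txt \
  | tr '[:upper:]' '[:lower:]' \
  | sort | uniq -c | sort -rn
```

Pipelines like this are a good fit for memoization: each stage reads only its stdin and writes only its stdout, so a warm run can replay cached intermediate results when the input text is unchanged.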
Each benchmark under `evaluation/benchmarks/` has its own setup and execution scripts. The main suite driver is `run_all.sh`. To run the benchmarks with minimal inputs, verifying that their dependencies are installed correctly:

```sh
cd evaluation/benchmarks && ./run_all.sh --mode=easy --size=min --run-mode=both
```

To run the benchmarks with full-sized inputs:

```sh
cd evaluation/benchmarks && ./run_all.sh --mode=easy --size=small --run-mode=both
```

Results are written under `evaluation/run_results/`. For a specific input size, use `python3 ./show_results.py --size=[SIZE]` to print a summary and `bash ./verify_outputs.sh --mode=easy --size=[SIZE]` to check Bash/Incr output agreement.
See INSTRUCTIONS.md for full benchmark setup and the behavioral-equivalence harness.
If you use Incr or build on any component in this repository, please cite the following paper:
```bibtex
@inproceedings{incr:osdi:2026,
  title     = {Incr: Faster Re-execution via Bolt-on Incrementalization},
  author    = {Xie, Yizheng and Lamprou, Evangelos and Xia, Jerry and Vasilakis, Nikos},
  booktitle = {20th USENIX Symposium on Operating Systems Design and Implementation (OSDI 26)},
  year      = {2026},
  publisher = {USENIX Association},
  tags      = {performance}
}
```

Incr is an open-source, collaborative, MIT-licensed project developed by the ATLAS group at Brown University. If you'd like to contribute, please see CONTRIBUTING.md — contributions, bug reports, and reproducibility feedback are welcome.