# anapao

anapao is a deterministic Rust testing utility for simulation and stochastic workflows.
This README is a linear tutorial for new users: you will build one scenario, run it deterministically, add expectations, run Monte Carlo batches, and persist CI-friendly artifacts.
By the end, you will have a repeatable testing flow that can:
- compile a `ScenarioSpec` into a validated executable model,
- execute seeded deterministic single runs,
- execute deterministic Monte Carlo batches,
- evaluate typed assertions with evidence,
- persist artifact packs (`manifest.json`, `events.jsonl`, `series.csv`, and more).
## Prerequisites

- Rust 1.70+
- Cargo
- A Rust test project where you want deterministic simulation checks
## Install

Add the dependency:

```toml
[dependencies]
anapao = "0.1.0"
```

## Step 1: Define a scenario

`ScenarioSpec` is your declarative model: nodes, edges, end conditions, and tracked metrics.
```rust
use anapao::types::{EndConditionSpec, MetricKey, ScenarioSpec, TransferSpec};

let mut scenario = ScenarioSpec::source_sink(TransferSpec::Fixed { amount: 1.0 })
    .with_end_condition(EndConditionSpec::MaxSteps { steps: 3 });
scenario.tracked_metrics.insert(MetricKey::fixture("sink"));

assert_eq!(scenario.nodes.len(), 2);
assert_eq!(scenario.edges.len(), 1);
```

What you learned:
- how to bootstrap a minimal source->sink scenario with a convenience constructor,
- how end conditions and tracked metrics are attached.
## Step 2: Compile the scenario

Compilation validates and transforms your scenario into deterministic execution indexes.
```rust
use anapao::types::{EndConditionSpec, ScenarioSpec, TransferSpec};
use anapao::Simulator;

let scenario = ScenarioSpec::source_sink(TransferSpec::Fixed { amount: 1.0 })
    .with_end_condition(EndConditionSpec::MaxSteps { steps: 3 });
let compiled = Simulator::compile(scenario).unwrap();

assert_eq!(compiled.scenario.id.as_str(), "scenario-source-sink");
```

What you learned:
- compilation is explicit and deterministic,
- you should compile once and reuse the compiled form for runs.
## Step 3: Configure a run

`RunConfig` controls deterministic single-run execution (seed, `max_steps`, capture policy).
```rust
use anapao::types::{CaptureConfig, RunConfig};

let run = RunConfig::for_seed(42).with_max_steps(250).with_capture(CaptureConfig {
    every_n_steps: 5,
    include_step_zero: true,
    include_final_state: true,
    ..CaptureConfig::default()
});

assert_eq!(run.seed, 42);
assert_eq!(run.max_steps, 250);
assert_eq!(run.capture.every_n_steps, 5);
```

What you learned:
- seeds pin determinism,
- capture configuration controls trace granularity.
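As a mental model for those capture knobs, here is a small standalone sketch of a snapshot-selection rule; `captured_steps` is a hypothetical helper written for this README, not part of the anapao API, and the crate's actual selection logic may differ:

```rust
// Hypothetical capture schedule: which step indices would be snapshotted
// for a run of `total_steps`, given the three knobs shown above.
// This is an illustration, NOT anapao's implementation.
fn captured_steps(
    total_steps: u64,
    every_n_steps: u64,
    include_step_zero: bool,
    include_final_state: bool,
) -> Vec<u64> {
    // Periodic snapshots at every multiple of `every_n_steps`.
    let mut steps: Vec<u64> = (1..=total_steps)
        .filter(|step| step % every_n_steps == 0)
        .collect();
    if include_step_zero {
        steps.insert(0, 0);
    }
    // Append the final step if the periodic schedule missed it.
    if include_final_state && steps.last() != Some(&total_steps) {
        steps.push(total_steps);
    }
    steps
}

fn main() {
    // A 12-step run with every_n_steps = 5, plus step zero and the final state.
    assert_eq!(captured_steps(12, 5, true, true), vec![0, 5, 10, 12]);
}
```

Tightening `every_n_steps` trades artifact size for trace granularity; the step-zero and final-state flags guarantee the boundary snapshots regardless of the period.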
## Step 4: Execute a deterministic run

Now run one deterministic simulation and assert expected outputs.
```rust
use anapao::{testkit, Simulator};
use anapao::types::MetricKey;

let compiled = Simulator::compile(testkit::fixture_scenario()).unwrap();
let report = Simulator::run(&compiled, &testkit::deterministic_run_config()).unwrap();

assert!(report.completed);
assert_eq!(report.steps_executed, 3);
assert_eq!(report.final_metrics.get(&MetricKey::fixture("sink")), Some(&3.0));
```

What you learned:
- deterministic single-run output can be asserted directly in tests.
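Why can these outputs be asserted exactly? Because a pinned seed fully determines the pseudo-random stream. The toy generator below (a plain linear congruential generator, unrelated to anapao's internal `rng` module) illustrates the principle:

```rust
// Toy LCG to illustrate seed-pinned determinism: the same seed always
// yields the same sequence, so downstream results are byte-for-byte stable.
// Illustration only — anapao's RNG is not specified here.
fn lcg_sequence(seed: u64, len: usize) -> Vec<u64> {
    let mut state = seed;
    (0..len)
        .map(|_| {
            // Knuth's MMIX multiplier/increment constants.
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            state
        })
        .collect()
}

fn main() {
    // Same seed: identical streams. Different seed: a different stream.
    assert_eq!(lcg_sequence(42, 5), lcg_sequence(42, 5));
    assert_ne!(lcg_sequence(42, 5), lcg_sequence(43, 5));
}
```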
## Step 5: Declare typed expectations

`Expectation` provides typed assertion semantics for run and batch reports.
```rust
use anapao::assertions::{Expectation, MetricSelector};
use anapao::types::MetricKey;

let metric = MetricKey::fixture("sink");
let expectations = vec![
    Expectation::Equals {
        metric: metric.clone(),
        selector: MetricSelector::Final,
        expected: 3.0,
    },
    Expectation::Approx {
        metric: metric.clone(),
        selector: MetricSelector::Final,
        expected: 3.0,
        abs_tol: 0.0001,
        rel_tol: 0.0,
    },
    Expectation::Between {
        metric,
        selector: MetricSelector::Final,
        min: 0.0,
        max: 10.0,
    },
];

assert_eq!(expectations.len(), 3);
```

What you learned:
- expectations are data, not ad-hoc assertion code,
- the selector controls whether you validate the final value or a value at a specific step.
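For `Approx`, it helps to know how absolute and relative tolerances usually combine. The sketch below shows a common scheme (pass if the difference is within the larger of `abs_tol` and `rel_tol * |expected|`); anapao's exact comparison rule is not documented here, so treat this as an illustration only:

```rust
// Common abs/rel tolerance check. This is a sketch of the usual scheme,
// NOT necessarily anapao's exact formula.
fn approx_passes(observed: f64, expected: f64, abs_tol: f64, rel_tol: f64) -> bool {
    let diff = (observed - expected).abs();
    diff <= abs_tol.max(rel_tol * expected.abs())
}

fn main() {
    // 3.00005 is within abs_tol = 0.0001 of 3.0, matching the Approx example above.
    assert!(approx_passes(3.00005, 3.0, 0.0001, 0.0));
    // 3.1 is far outside that tolerance.
    assert!(!approx_passes(3.1, 3.0, 0.0001, 0.0));
}
```

Relative tolerance matters when metrics span orders of magnitude; absolute tolerance is the safer choice near zero.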
## Step 6: Run with assertions and an event sink

Use the integrated assertion path and capture ordered events for diagnostics.
```rust
use anapao::assertions::{Expectation, MetricSelector};
use anapao::events::VecEventSink;
use anapao::types::MetricKey;
use anapao::{testkit, Simulator};

let compiled = Simulator::compile(testkit::fixture_scenario()).unwrap();
let expectations = vec![Expectation::Equals {
    metric: MetricKey::fixture("sink"),
    selector: MetricSelector::Final,
    expected: 3.0,
}];
let mut sink = VecEventSink::new();
let (_report, assertion_report) = Simulator::run_with_assertions_and_sink(
    &compiled,
    &testkit::deterministic_run_config(),
    &expectations,
    &mut sink,
)
.unwrap();

assert!(assertion_report.is_success());
assert!(sink
    .events()
    .iter()
    .any(|event| event.event_name() == "assertion_checkpoint"));
```

What you learned:
- assertions and execution can be done in one call,
- event streams provide structured debugging context.
## Step 7: Configure a batch

`BatchConfig` controls deterministic Monte Carlo execution.
```rust
use anapao::types::{BatchConfig, BatchRunTemplate, ExecutionMode};

let batch = BatchConfig::for_runs(64)
    .with_execution_mode(ExecutionMode::SingleThread)
    .with_base_seed(7)
    .with_run_template(BatchRunTemplate::default())
    .with_max_steps(50);

assert_eq!(batch.runs, 64);
assert_eq!(batch.base_seed, 7);
assert_eq!(batch.run_template.max_steps, 50);
```

What you learned:
- `runs` scales the Monte Carlo sample size,
- `base_seed` + run-index derivation preserves reproducibility.
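That reproducibility rule can be pictured with a tiny standalone sketch; `derive_run_seed` is a hypothetical helper mirroring the documented "`base_seed` + run index" scheme, not the crate's actual function:

```rust
// Hypothetical per-run seed derivation, mirroring "base_seed + run index".
// anapao may use a different mixing function internally.
fn derive_run_seed(base_seed: u64, run_index: u64) -> u64 {
    base_seed.wrapping_add(run_index)
}

fn main() {
    // With base_seed = 7 and 64 runs, every run gets a stable, distinct seed,
    // so re-running the batch reproduces every run exactly.
    let seeds: Vec<u64> = (0..64).map(|i| derive_run_seed(7, i)).collect();
    assert_eq!(seeds.first(), Some(&7));
    assert_eq!(seeds.last(), Some(&70));
}
```

The key property is that per-run seeds depend only on `base_seed` and the run index, never on thread scheduling, which is why parallel execution modes can stay deterministic.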
## Step 8: Run a Monte Carlo batch

Run many deterministic simulations and check aggregate outputs.
```rust
use anapao::{testkit, Simulator};
use anapao::types::MetricKey;

let compiled = Simulator::compile(testkit::fixture_scenario()).unwrap();
let batch = Simulator::run_batch(&compiled, &testkit::deterministic_batch_config()).unwrap();

assert_eq!(batch.completed_runs, batch.requested_runs);
assert!(batch.runs.windows(2).all(|window| window[0].run_index < window[1].run_index));
assert!(batch.aggregate_series.contains_key(&MetricKey::fixture("sink")));
```

What you learned:
- batch summaries are deterministic and index-ordered.
- `completed_runs` counts reported run summaries; inspect each `run.completed` for semantic completion.
## Step 9: Persist artifacts

Persist reports for CI diffing and post-run diagnostics.
```rust
use anapao::artifact::write_run_artifacts_with_assertions;
use anapao::assertions::{Expectation, MetricSelector};
use anapao::events::VecEventSink;
use anapao::types::MetricKey;
use anapao::{testkit, Simulator};

let compiled = Simulator::compile(testkit::fixture_scenario()).unwrap();
let expectations = vec![Expectation::Equals {
    metric: MetricKey::fixture("sink"),
    selector: MetricSelector::Final,
    expected: 3.0,
}];
let mut sink = VecEventSink::new();
let (run_report, assertion_report) = Simulator::run_with_assertions_and_sink(
    &compiled,
    &testkit::deterministic_run_config(),
    &expectations,
    &mut sink,
)
.unwrap();

assert!(run_report.completed);
assert!(assertion_report.is_success());

let output_dir = std::env::temp_dir().join("anapao-readme-playbook");
let manifest = write_run_artifacts_with_assertions(
    &output_dir,
    &run_report,
    sink.events(),
    Some(&assertion_report),
)
.unwrap();

assert!(manifest.artifacts.contains_key("manifest"));
assert!(manifest.artifacts.contains_key("events"));
assert!(manifest.artifacts.contains_key("assertions"));
```

What you learned:
- persisted artifacts become your CI and debugging contract,
- manifest keys are stable assertions for artifact expectations.
## Step 10: Reuse testkit helpers

Use testkit helpers to avoid duplicating setup across tests.
```rust
use anapao::{testkit, Simulator};
use anapao::types::MetricKey;

fn deterministic_fixture_smoke() {
    let compiled = Simulator::compile(testkit::fixture_scenario()).unwrap();
    let report = Simulator::run(&compiled, &testkit::deterministic_run_config()).unwrap();
    assert_eq!(report.final_metrics.get(&MetricKey::fixture("sink")), Some(&3.0));
}

deterministic_fixture_smoke();
```

What you learned:
- fixture helpers keep tests concise and deterministic,
- you can wrap these helpers in your own `rstest` fixture macros for larger matrices.
## Troubleshooting

- Missing tracked metric:
  - symptom: expectation fails with a missing observed value.
  - fix: ensure the metric key is in `scenario.tracked_metrics`.
- Non-terminating scenarios:
  - symptom: run ends at `max_steps` unexpectedly.
  - fix: verify `end_conditions` are configured and reachable.
- Seed confusion:
  - symptom: output differs between runs.
  - fix: pin `RunConfig.seed` for single runs and keep the batch `base_seed` stable (batch seeds derive from `base_seed` + run index).
- Sparse traces:
  - symptom: insufficient snapshots for diagnostics.
  - fix: adjust `RunConfig.capture` (`every_n_steps`, step-zero/final flags).
## Feature flags

- `parallel`: enables the Rayon-backed batch execution mode (`ExecutionMode::Rayon`).
- `analysis-polars`: enables Polars DataFrame shaping helpers.
- `assertions-extended`: enables extra assertion/snapshot/property helper crates.
## Module overview

anapao exports:

- `types`, `error`, `rng`, `validation`, `engine`, `stochastic`, `events`, `batch`, `stats`, `artifact`, `assertions`, `testkit`,
- `analysis` (only with `analysis-polars`),
- `Simulator` (the compile/run/batch facade).
## Testing

```shell
cargo test --doc
cargo test
cargo test --features parallel
cargo test --features analysis-polars
cargo bench --no-run
```

## Benchmarks

```shell
# capture baseline matrix
./scripts/bench-criterion save --bench simulation --baseline hotspots-20260224-default
./scripts/bench-criterion save --bench simulation --features parallel --baseline hotspots-20260224-parallel

# compare matrix
./scripts/bench-criterion compare --bench simulation --baseline hotspots-20260224-default
./scripts/bench-criterion compare --bench simulation --features parallel --baseline hotspots-20260224-parallel

# manual non-failing regression summary (+7% threshold)
./scripts/bench-criterion summary --bench simulation --baseline hotspots-20260224-default --threshold 0.07
./scripts/bench-criterion summary --bench simulation --features parallel --baseline hotspots-20260224-parallel --threshold 0.07

# flamegraphs and csv summaries
./benchmarks/run_profiles.sh
BENCH_FEATURES=parallel ./benchmarks/run_profiles.sh
```

## Pre-commit hooks

This repo ships a native `prek.toml` for fast local commit gates.
```shell
prek validate-config
prek run --all-files
prek install
```

The hooks intentionally stay lightweight: `cargo fmt --all -- --check` and `cargo clippy --all-targets --all-features -- -D warnings`.