Deferred from PR #85. Finding showed 500 steps in 14.6s for a mock policy at control_frequency=50Hz. 500 steps / 50Hz = 10s real-time target, so ~4.6s of Python overhead per rollout.
DoD
- Profile PolicyRunner.run with a mock policy (e.g. py-spy or cProfile).
- Identify the hotspot: observation build? action mapping? logger/hook dispatch? sim lock contention?
- Goal: 500 steps at 500Hz in <2s with fast_mode=True (batch-eval / data-collection path, no real-time throttling).
- Add a perf regression test under tests_integ/ that pins the wall-clock ceiling.
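The profiling step above could be sketched with stdlib cProfile as follows. This is a minimal sketch, not the project's harness: it assumes a `PolicyRunner` with a `run(policy, steps=...)` entry point, which may not match the real signature.

```python
import cProfile
import io
import pstats


def profile_run(runner, policy, steps=500):
    """Profile runner.run(policy, steps=...) and return a report of the
    top cumulative-time hotspots (observation build, action mapping,
    logger/hook dispatch, lock contention should all show up here)."""
    prof = cProfile.Profile()
    prof.enable()
    runner.run(policy, steps=steps)  # assumed entry point; adjust to the real API
    prof.disable()
    buf = io.StringIO()
    pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(15)
    return buf.getvalue()
```

py-spy (`py-spy record -o profile.svg -- python rollout.py`) gives a flame graph without code changes, which may be preferable if the overhead is in C-extension or lock-wait time that cProfile under-reports.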
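The wall-clock-ceiling test could be built on a small self-contained timing helper like the one below. The 2s ceiling is the DoD budget; `step_fn` stands in for one fast_mode rollout step, and in the real tests_integ/ test it would be replaced by the actual PolicyRunner rollout.

```python
import time

# DoD budget: 500 steps with fast_mode=True must finish under this ceiling.
WALL_CLOCK_CEILING_S = 2.0


def run_and_time(step_fn, steps=500):
    """Call step_fn `steps` times and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(steps):
        step_fn()
    return time.perf_counter() - start


def assert_under_ceiling(elapsed, ceiling=WALL_CLOCK_CEILING_S):
    """Fail with a readable message if the rollout blew the budget."""
    assert elapsed < ceiling, f"rollout took {elapsed:.2f}s, ceiling is {ceiling}s"
```

Pinning wall-clock time in CI is inherently flaky on shared runners; a generous ceiling (2s for a ~sub-second expected runtime) and `time.perf_counter` (monotonic, high resolution) keep false failures rare.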
Notes from PR #85
For now, the run_policy / start_policy docstrings have been updated
(T25) to point callers at fast_mode=True for non-real-time usage. The
performance budget DoD (< 2s) isn't met yet but isn't a correctness bug —
it's a throughput optimisation opportunity.
Tracked per AGENTS.md rule: the project board is the source of truth for
follow-ups.