This folder collects everything needed to capture, validate, and analyse metrics produced by the trading simulator.
Follow docs/metrics_quickstart.md for the full command-by-command walkthrough. The highlights:

- Generate a stub or real run (`tools/mock_stub_run.py` or `tools/run_with_metrics.py`).
- Summarise logs into `marketsimulatorresults.md` (`tools/summarize_results.py`).
- Export the summaries to CSV (`tools/metrics_to_csv.py`).
| Script | Purpose |
|---|---|
| `tools/mock_stub_run.py` | Creates synthetic log/summary pairs for fast smoke tests. |
| `tools/run_with_metrics.py` | Wraps `python -m marketsimulator.run_trade_loop …` and captures both log and summary JSON. |
| `tools/summarize_results.py` | Sweeps matching logs and regenerates `marketsimulatorresults.md`. |
| `tools/metrics_to_csv.py` | Builds a CSV table from JSON summaries for downstream analysis. |
| `tools/check_metrics.py` | Validates summaries against `schema/metrics_summary.schema.json`. |
| `scripts/metrics_smoke.sh` | End-to-end CI smoke test (mock → summary → CSV). |
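To make the CSV export step concrete, here is a minimal sketch of what `tools/metrics_to_csv.py` might do under the hood. The `FIELDS` list is an assumption for illustration; the authoritative field set is defined by `schema/metrics_summary.schema.json`.

```python
import csv
import json
from pathlib import Path

# Hypothetical column list for illustration only; the real field set
# lives in schema/metrics_summary.schema.json.
FIELDS = ["return", "sharpe", "pnl", "balance"]

def summaries_to_csv(summary_paths, csv_path):
    """Flatten one JSON summary per row into a CSV table."""
    with open(csv_path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        for path in summary_paths:
            # Extra keys in a summary are silently dropped (extrasaction="ignore").
            writer.writerow(json.loads(Path(path).read_text()))
```

`DictWriter` keeps the column order stable across runs, which makes the resulting CSV easy to diff and load into downstream analysis tools.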
To ensure every summary file is well-formed:

```bash
python tools/check_metrics.py --glob 'runs/*_summary.json'
```

The underlying schema lives at `schema/metrics_summary.schema.json`.
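For context, a hand-rolled sketch of the field-presence portion of such a check. The `REQUIRED` tuple is an assumption taken from the troubleshooting notes in this document; the real validator checks the full JSON Schema, not just field presence.

```python
import glob
import json
from pathlib import Path

# Assumed required metrics; the authoritative definition is the JSON
# Schema at schema/metrics_summary.schema.json.
REQUIRED = ("return", "sharpe", "pnl", "balance")

def check_summaries(pattern):
    """Return a list of problems across all summaries matching the glob."""
    problems = []
    for path in sorted(glob.glob(pattern)):
        try:
            data = json.loads(Path(path).read_text())
        except json.JSONDecodeError as exc:
            problems.append(f"{path}: invalid JSON ({exc.msg})")
            continue  # cannot check fields of an unparseable file
        problems += [f"{path}: missing {name!r}"
                     for name in REQUIRED if name not in data]
    return problems
```

An empty return value means every matched summary parsed and carried the assumed required metrics.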
- No logs found – Verify the run wrote `*.log` files to the directory you pass to the summariser. For mock runs, re-run `tools/mock_stub_run.py`.
- Invalid JSON – Run `tools/check_metrics.py` to pinpoint the field. Regenerate the summary with `run_with_metrics.py` if necessary.
- CSV missing fields – Ensure the summaries include the metrics you expect (`return`, `sharpe`, `pnl`, `balance`). The validator will warn if any required fields are absent.
- CI smoke test failures – Run `scripts/metrics_smoke.sh runs/local-smoke` locally to reproduce.
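The "Invalid JSON" case above can be located precisely with Python's built-in decoder, which reports the line and column of the first error. This is only a sketch of the kind of diagnostic `check_metrics.py` could surface; its actual output format is defined by the script itself.

```python
import json

def pinpoint_json_error(text):
    """Return 'line L, column C: message' for broken JSON, or None if it parses."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as exc:
        # JSONDecodeError carries the 1-based position of the failure.
        return f"line {exc.lineno}, column {exc.colno}: {exc.msg}"
```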
The in-process stub mode inside marketsimulator/run_trade_loop.py is
still pending until we can safely short-circuit the simulator’s
configuration loading. All tooling above will continue to work with the
stub generator or with real simulator runs once available.
The repository now provides convenience targets:

```bash
make stub-run       # generate a stub log/summary
make summarize      # rebuild marketsimulatorresults.md
make metrics-csv    # export CSV from summaries
make metrics-check  # validate summaries
make smoke          # run the mock-based smoke test
```

Use the `RUN_DIR`/`SUMMARY_GLOB`/`LOG_GLOB` variables to customise locations, e.g. `make RUN_DIR=runs/experiment summarize`.
Most scripts honour these Make variables. Override them on demand:

```bash
make RUN_DIR=runs/my-test summarize
make SUMMARY_GLOB='runs/my-test/*_summary.json' metrics-check
```

For more detailed failure scenarios, see docs/metrics_troubleshooting.md.