# Tutorial: Benchmarking Performance
This guide explains how to use the benchmark_perf.py script to run
automated performance-scaling studies on the cellfoundry simulation.
- Overview
- Prerequisites
- How It Works
- Quick Start
- Command-Line Reference
- Parameter Grid
- What Gets Patched
- Output Format
- Step-by-Step Examples
- Understanding the Results CSV
- Plotting Results
- Interpreting Status Codes
- Architecture & Safety Guarantees
- Excluded Directories
- Troubleshooting
- Advanced: Adding New Sweep Axes
benchmark_perf.py automates performance-scaling experiments by sweeping
over combinations of:
| Axis | CLI flag | Default values | What it controls |
|---|---|---|---|
| N (ECM grid) | `--n` | 21, 41, 81 | ECM agents = N³ (9 261 → 531 441) |
| N_CELLS | `--n-cells` | 100, 1 000, 1 000 000 | Number of cell agents |
| INIT_N_FOCAD_PER_CELL | `--focad` | 5, 25, 50 | Focal adhesions per cell |
| CELL_RADIUS | `--cell-radius` | 8.412 | Cell radius (µm); sets `MAX_SEARCH_RADIUS_CELL_CELL_INTERACTION = 3 × CELL_RADIUS` |
| Network file | `--network` | `network_low_density.pkl`, `network_medium_density.pkl`, `network_high_density.pkl` | Fibre network geometry |
With the defaults, the full Cartesian product is 3 × 3 × 3 × 1 × 3 = 81 runs
(add more --cell-radius values to grow the grid).
For each configuration the script records initialization time (CPU-side agent/variable setup), simulation time (GPU step loop), wall-clock execution time, and time-per-step, then saves everything to a CSV file for later analysis.
Why CELL_RADIUS matters for performance: the cell–cell interaction search radius (`MAX_SEARCH_RADIUS_CELL_CELL_INTERACTION = 3 × CELL_RADIUS`) determines how many spatial neighbours each cell must inspect every step. At high cell densities the number of pairwise interactions grows super-linearly with the search radius, so even a modest increase in `CELL_RADIUS` can cause a disproportionately large slowdown. Including this axis in the benchmark lets you quantify and visualise that effect.
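To see why the growth is super-linear, consider a back-of-the-envelope estimate under a uniform-density assumption. The domain size used below is purely illustrative, not a value from the simulation:

```python
from math import pi

def expected_neighbours(n_cells: int, domain_volume: float,
                        cell_radius: float) -> float:
    """Mean number of cells inside one interaction neighbourhood,
    assuming a uniform spatial distribution (an idealisation)."""
    search_radius = 3.0 * cell_radius      # MAX_SEARCH_RADIUS rule from above
    density = n_cells / domain_volume      # cells per unit volume
    return density * (4.0 / 3.0) * pi * search_radius ** 3

# Doubling CELL_RADIUS multiplies the neighbourhood volume (and hence
# the candidate-pair count per cell) by 2**3 = 8.
base = expected_neighbours(1_000_000, 500.0 ** 3, 8.412)
doubled = expected_neighbours(1_000_000, 500.0 ** 3, 2 * 8.412)
```

Since every cell pays this cost every step, total pair checks scale roughly with the cube of the search radius at fixed density.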
- Python 3.10+ with the cellfoundry dependencies installed (`pip install -r requirements.txt`).
- FLAMEGPU2 Python bindings (`pyflamegpu`) available in the active environment.
- A working simulation — you should be able to run `python model.py` from the project root without errors before benchmarking.

Tip: If you use conda, make sure the correct environment is activated before running the script, or use the `--conda-env` flag (see below).
```text
Original project (NEVER touched)
│
├── model.py                     ← stays untouched
├── *.cpp                        ← stays untouched
├── network_low_density.pkl
├── network_medium_density.pkl
├── network_high_density.pkl
└── ...

┌──────────────────────────────────┐
│ 1. COPY entire project into      │
│    tools/_benchmark_workdir/     │
│    <timestamp>/                  │
└──────────┬───────────────────────┘
           │
┌──────────▼───────────────────────┐
│ 2. PATCH the copies:             │
│    • N = <val> in model.py       │
│    • ECM_POPULATION_SIZE = N³    │
│      in 5 .cpp files             │
│    • Write JSON overrides for    │
│      N_CELLS, FOCAD, CELL_RADIUS,│
│      STEPS, etc.                 │
└──────────┬───────────────────────┘
           │
┌──────────▼───────────────────────┐
│ 3. RUN python model.py           │
│        --overrides <json>        │
│    from the working copy dir     │
└──────────┬───────────────────────┘
           │
┌──────────▼───────────────────────┐
│ 4. PARSE [BENCHMARK] line from   │
│    stdout → extract timings      │
└──────────┬───────────────────────┘
           │
┌──────────▼───────────────────────┐
│ 5. REPEAT for every config in    │
│    the parameter grid            │
└──────────┬───────────────────────┘
           │
┌──────────▼───────────────────────┐
│ 6. CLEANUP: delete working copy  │
│    (or keep with --keep-workdir) │
└──────────┬───────────────────────┘
           │
┌──────────▼───────────────────────┐
│ 7. SAVE results CSV to           │
│    tools/benchmark_results.csv   │
└──────────────────────────────────┘
```
Key safety property: the original project directory is never modified. All patching happens inside the disposable working copy. If the script crashes, gets interrupted (Ctrl+C), or encounters an error, the original files remain intact.
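The safety property can be demonstrated in miniature with the same copy-then-patch pattern. The paths and file contents below are illustrative, not the script's internals:

```python
import shutil
import tempfile
from pathlib import Path

# Toy demonstration: patch a *copy*, then verify the original is unchanged.
src = Path(tempfile.mkdtemp(prefix="bench_src_"))
(src / "model.py").write_text("N = 21\n")

work = Path(tempfile.mkdtemp(prefix="bench_work_")) / "copy"
shutil.copytree(src, work)

# "Patch" only the working copy, as benchmark_perf.py does.
model = work / "model.py"
model.write_text(model.read_text().replace("N = 21", "N = 41"))

original_text = (src / "model.py").read_text()
patched_text = model.read_text()

# The disposable working copy is deleted afterwards; src is never touched.
shutil.rmtree(src)
shutil.rmtree(work.parent)
```

Because the patch targets the copy's path, no code path in the loop can write into the original tree.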
```shell
# Preview what will be run (no files copied, no simulations launched)
python tools/benchmark_perf.py --steps 5 --dry-run

# Run a small test: 1 ECM size, 2 cell counts, 1 FOCAD count, 3 networks = 6 runs
python tools/benchmark_perf.py --steps 5 --n 21 --n-cells 100 500 --focad 10

# Full default sweep (81 runs)
python tools/benchmark_perf.py --steps 10
```

```text
usage: benchmark_perf.py [-h] --steps STEPS
                         [--n N [N ...]]
                         [--n-cells N_CELLS [N_CELLS ...]]
                         [--focad FOCAD [FOCAD ...]]
                         [--cell-radius CELL_RADIUS [CELL_RADIUS ...]]
                         [--network NETWORK [NETWORK ...]]
                         [--output OUTPUT]
                         [--conda-env CONDA_ENV]
                         [--keep-workdir]
                         [--dry-run]
```
| Flag | Required | Description |
|---|---|---|
| `--steps` | Yes | Number of simulation steps per run. |
| `--n` | No | ECM grid sizes. Each value produces N³ ECM agents. Default: `21 41 81`. |
| `--n-cells` | No | Cell agent counts. Default: `100 1000 1000000`. |
| `--focad` | No | Initial focal adhesions per cell. Default: `5 25 50`. |
| `--cell-radius` | No | Cell radius (µm). Each value also sets `MAX_SEARCH_RADIUS_CELL_CELL_INTERACTION = 3 × CELL_RADIUS`. Default: `8.412`. |
| `--network` | No | Network `.pkl` file name(s). Default: `network_low_density.pkl network_medium_density.pkl network_high_density.pkl`. |
| `--output` | No | Custom path for the results CSV. Default: `tools/benchmark_results.csv`. |
| `--conda-env` | No | Name of a conda environment to activate for each subprocess run. |
| `--keep-workdir` | No | Do not delete the working copy after completion. Useful for inspecting patched files. |
| `--dry-run` | No | Print the run matrix and exit. No files are copied or modified. |
The script builds the Cartesian product of all axes:
total_runs = len(N_values) × len(N_CELLS_values) × len(FOCAD_values) × len(CELL_RADIUS_values) × len(network_files)
Default grid (showing a subset — network axis omitted for brevity):
| N | ECM agents (N³) | N_CELLS | FOCAD | CELL_RADIUS | Search radius | Total FOCAD agents (initial) |
|---|---|---|---|---|---|---|
| 21 | 9 261 | 100 | 5 | 8.412 | 25.24 | 500 |
| 21 | 9 261 | 100 | 25 | 8.412 | 25.24 | 2 500 |
| 21 | 9 261 | 100 | 50 | 8.412 | 25.24 | 5 000 |
| 21 | 9 261 | 1 000 | 5 | 8.412 | 25.24 | 5 000 |
| … | … | … | … | … | … | … |
| 81 | 531 441 | 1 000 000 | 50 | 8.412 | 25.24 | 50 000 000 |
Each row is run 3× (once per network file), giving 81 total runs.
Warning: Large configurations (N=81 with 1 000 000 cells) will require significant GPU memory and may take a very long time. Start small and scale up gradually.
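The grid is a plain Cartesian product; the default run count can be reproduced directly (axis values copied from the defaults above):

```python
import itertools

# Default axis values (matching the table above).
N_VALUES = [21, 41, 81]
N_CELLS_VALUES = [100, 1_000, 1_000_000]
FOCAD_VALUES = [5, 25, 50]
CELL_RADIUS_VALUES = [8.412]
NETWORKS = ["network_low_density.pkl",
            "network_medium_density.pkl",
            "network_high_density.pkl"]

# 3 x 3 x 3 x 1 x 3 = 81 configurations.
grid = list(itertools.product(N_VALUES, N_CELLS_VALUES, FOCAD_VALUES,
                              CELL_RADIUS_VALUES, NETWORKS))
total_runs = len(grid)
```

Adding a second `--cell-radius` value would double `total_runs` to 162, which is why the grid should be grown deliberately.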
For each run, the script modifies only the working copy:
| File(s) | What changes | Purpose |
|---|---|---|
| `model.py` (line ~56) | `N = 21` → `N = 41` | ECM grid density |
| `ecm_Csp_update.cpp` | `ECM_POPULATION_SIZE = 9261` → `= 68921` | C++ template param |
| `ecm_ecm_interaction.cpp` | (same) | |
| `ecm_boundary_concentration_conditions.cpp` | (same) | |
| `cell_ecm_interaction_metabolism.cpp` | (same) | |
| `cell_move.cpp` | (same) | |
Why patch C++ files? `ECM_POPULATION_SIZE` is used as a template parameter for `getMacroProperty<float, N_SPECIES, ECM_POPULATION_SIZE>`. FLAMEGPU2 RTC compilation requires this value at compile time, so it cannot be passed as a runtime environment property.
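The patching step therefore amounts to a text substitution on the copied source. The regex and the `constexpr int` declaration below are assumptions for illustration; `benchmark_perf.py`'s actual pattern may differ:

```python
import re

# Rewrite the compile-time constant in the *copied* C++ source text.
RE_ECM = re.compile(r"(ECM_POPULATION_SIZE\s*=\s*)\d+")

def patch_ecm_population(source: str, n: int) -> str:
    """Replace the hard-coded agent count with n**3 for the new grid size."""
    return RE_ECM.sub(rf"\g<1>{n ** 3}", source)

cpp = "constexpr int ECM_POPULATION_SIZE = 9261;"
patched = patch_ecm_population(cpp, 41)  # 41**3 = 68921
```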
A temporary `_benchmark_overrides.json` is written inside the working copy and passed to `model.py --overrides`:

```json
{
    "N_CELLS": 1000,
    "INIT_N_FOCAD_PER_CELL": 25,
    "CELL_RADIUS": 8.412,
    "STEPS": 10,
    "SAVE_DATA_TO_FILE": false,
    "SAVE_PICKLE": false,
    "SHOW_PLOTS": false,
    "VISUALISATION": false,
    "NETWORK_FILE": "network_low_density.pkl"
}
```

These settings suppress all file I/O and visualisation so that only compute time is measured.
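On the consumer side, `model.py` only needs to merge the JSON file over its in-code defaults. A minimal sketch (the defaults and function name here are illustrative; the real implementation may differ):

```python
import json
import os
import tempfile

DEFAULTS = {"N_CELLS": 100, "STEPS": 100, "VISUALISATION": True}

def apply_overrides(params: dict, overrides_path: str) -> dict:
    """Merge the JSON override file over the in-code defaults."""
    with open(overrides_path) as fh:
        overrides = json.load(fh)
    merged = dict(params)
    merged.update(overrides)   # overrides win over defaults
    return merged

# Write a miniature override file like the one shown above.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as fh:
    json.dump({"N_CELLS": 1000, "VISUALISATION": False}, fh)

params = apply_overrides(DEFAULTS, path)
os.remove(path)
```

Keys absent from the override file keep their defaults, so the benchmark only has to write the handful of values it sweeps.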
Results are saved to tools/benchmark_results.csv (or the path given
by --output).
| Column | Type | Description |
|---|---|---|
| `run` | int | Sequential run number (1-based). |
| `N` | int | ECM grid dimension used for this run. |
| `ECM_POPULATION_SIZE` | int | N³ — total ECM agent count. |
| `N_CELLS` | int | Number of cell agents. |
| `INIT_N_FOCAD_PER_CELL` | int | Focal adhesions seeded per cell. |
| `FOCAD_count_init` | int | N_CELLS × INIT_N_FOCAD_PER_CELL. |
| `N_FNODES` | int | Number of fibre-network nodes (from the `.pkl` file). |
| `CELL_RADIUS` | float | Cell radius (µm) used for this run. |
| `MAX_SEARCH_RADIUS` | float | 3 × CELL_RADIUS — cell–cell interaction search radius. |
| `steps` | int | Number of simulation steps requested. |
| `init_time_s` | float | Total initialization time (seconds) = Python setup + RTC compilation + FLAMEGPU init functions (agent creation on GPU). |
| `simulation_time_s` | float | Pure stepping time (seconds) — sum of per-step durations from `CUDASimulation.getElapsedTimeSteps()`. |
| `rtc_time_s` | float | RTC (runtime-compiled CUDA) kernel compilation time from `CUDASimulation.getElapsedTimeRTCInitialisation()`. |
| `init_functions_time_s` | float | FLAMEGPU init-function execution time (agent population creation on GPU) from `CUDASimulation.getElapsedTimeInitFunctions()`. |
| `exit_functions_time_s` | float | FLAMEGPU exit-function execution time from `CUDASimulation.getElapsedTimeExitFunctions()`. |
| `total_time_s` | float | Total wall-clock execution time (seconds). |
| `time_per_step_s` | float | `simulation_time_s / steps` — pure per-step cost excluding initialization. |
| `status` | str | Run outcome (see Status Codes). |
| `timestamp` | str | When the benchmark batch was started. |
Note: `init_time_s` includes three components: (1) Python-level setup (model definition, layer configuration, etc.), (2) RTC kernel compilation, and (3) FLAMEGPU init functions that create agent populations on the GPU. The granular columns `rtc_time_s` and `init_functions_time_s` let you see where initialization cost is concentrated.
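Given that decomposition, the Python-level setup share can be recovered from the granular columns by subtraction. A small sketch using the illustrative timing values from the sample output in this guide:

```python
# One illustrative results row (values as in the sample CSV output).
row = {
    "init_time_s": 5.123456,
    "rtc_time_s": 2.100000,
    "init_functions_time_s": 1.800000,
}

# init_time_s = Python setup + RTC compilation + init functions, so the
# Python-level component is what remains after subtracting the other two.
python_setup_s = (row["init_time_s"]
                  - row["rtc_time_s"]
                  - row["init_functions_time_s"])
```

If `python_setup_s` dominates, the bottleneck is model construction on the CPU rather than RTC compilation or GPU-side population creation.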
```text
run,N,ECM_POPULATION_SIZE,N_CELLS,INIT_N_FOCAD_PER_CELL,FOCAD_count_init,N_FNODES,CELL_RADIUS,MAX_SEARCH_RADIUS,steps,init_time_s,simulation_time_s,rtc_time_s,init_functions_time_s,exit_functions_time_s,total_time_s,time_per_step_s,status,timestamp
1,21,9261,100,5,500,1234,8.412,25.236,10,5.123456,12.345678,2.100000,1.800000,0.010000,17.469134,1.234568,OK,2026-03-03 14:30:00
2,21,9261,100,5,500,2468,8.412,25.236,10,5.234567,13.456789,2.200000,1.900000,0.012000,18.691356,1.345679,OK,2026-03-03 14:30:00
```

Run a single configuration with very few steps to verify the pipeline works end-to-end:

```shell
python tools/benchmark_perf.py --steps 2 --n 21 --n-cells 100 --focad 5
```

This produces 3 runs (1 × 1 × 1 × 1 × 3 network files). Expect it to finish in a few minutes.
Before committing to a long batch, inspect the planned configurations:
```shell
python tools/benchmark_perf.py --steps 10 --n 21 41 --n-cells 100 1000 --focad 5 25 --dry-run
```

Output:

```text
Performance benchmark: 24 configurations, 10 steps each
N: [21, 41]
N_CELLS: [100, 1000]
FOCAD: [5, 25]
CELL_RADIUS: [8.412]
Network: ['network_low_density.pkl', 'network_medium_density.pkl', 'network_high_density.pkl']
(DRY RUN — nothing will be copied or executed)
============================================================
Run 1/24: N=21 N_CELLS=100 FOCAD=5 CELL_RADIUS=8.412 N_FNODES=1234
ECM_POPULATION_SIZE=9261 SEARCH_RADIUS=25.24 STEPS=10
============================================================
...
```
No files are created or modified.
To study how time scales with ECM agent count (holding cells and FOCAD constant):

```shell
python tools/benchmark_perf.py --steps 20 --n 11 21 31 41 51 --n-cells 500 --focad 10
```

This gives 15 runs (5 ECM sizes × 3 network files). Plot `ECM_POPULATION_SIZE` vs `time_per_step_s` from the CSV.
```shell
python tools/benchmark_perf.py --steps 10
```

Uses the default grid: N ∈ {21, 41, 81}, N_CELLS ∈ {100, 1000, 1000000}, FOCAD ∈ {5, 25, 50}, CELL_RADIUS ∈ {8.412}, network ∈ {low_density, medium_density, high_density} → 81 runs.

Estimated time: hours to days depending on the GPU and the largest configurations. Consider starting with `--steps 3` for a rough estimate, then scale up.
Three fibre-network geometries are included by default:
| File | Description |
|---|---|
| `network_low_density.pkl` | Sparse / low-density fibre network |
| `network_medium_density.pkl` | Medium-density fibre network |
| `network_high_density.pkl` | Dense / high-density fibre network |
All three are swept automatically. To benchmark with only one:

```shell
python tools/benchmark_perf.py --steps 10 --network network_medium_density.pkl
```

Or supply your own files:

```shell
python tools/benchmark_perf.py --steps 10 \
    --network network_medium_density.pkl my_custom_network.pkl
```

Network files must exist in the project root (they are copied into the working directory).
```shell
python tools/benchmark_perf.py --steps 10 --output results/perf_study_march2026.csv
```

To verify that source patching works correctly, use `--keep-workdir`:

```shell
python tools/benchmark_perf.py --steps 2 --n 41 --n-cells 100 --focad 5 --keep-workdir
```

After completion the script prints:

```text
[keep] Working copy retained at C:\...\tools\_benchmark_workdir\20260303_143000
```

You can then open the working copy and inspect the patched `model.py` and `.cpp` files. Delete the folder manually when done.
If you launch the benchmark from a different environment than the one where FLAMEGPU2 is installed:

```shell
python tools/benchmark_perf.py --steps 10 --conda-env flamegpu_py310
```

Each subprocess will be launched via `conda run -n flamegpu_py310`.
```python
import pandas as pd

df = pd.read_csv("tools/benchmark_results.csv")

# Filter successful runs (.copy() so later column assignments are safe)
ok = df[df["status"].str.startswith("OK")].copy()

# Pivot: rows=N_CELLS, columns=N, values=time_per_step_s
pivot = ok.pivot_table(
    index="N_CELLS",
    columns="N",
    values="time_per_step_s",
    aggfunc="mean",
)
print(pivot)
```

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for n_val, group in ok.groupby("N"):
    group = group.sort_values("N_CELLS")
    ax.plot(group["N_CELLS"], group["time_per_step_s"],
            marker="o", label=f"N={n_val} (ECM={n_val**3})")
ax.set_xlabel("N_CELLS")
ax.set_ylabel("Time per step (s)")
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend()
ax.set_title("Cellfoundry scaling: time per step vs cell count")
plt.tight_layout()
plt.savefig("tools/benchmark_scaling.png", dpi=150)
plt.show()
```

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(12, 5))

# Left: init time vs total agent count
ok["total_agents"] = ok["ECM_POPULATION_SIZE"] + ok["N_CELLS"] + ok["FOCAD_count_init"]
axes[0].scatter(ok["total_agents"], ok["init_time_s"], alpha=0.6)
axes[0].set_xlabel("Total agents (ECM + Cells + FOCAD)")
axes[0].set_ylabel("Initialization time (s)")
axes[0].set_xscale("log")
axes[0].set_title("Init time vs agent count")

# Right: simulation time as stacked bar
axes[1].bar(ok["run"], ok["init_time_s"], label="Initialization")
axes[1].bar(ok["run"], ok["simulation_time_s"], bottom=ok["init_time_s"],
            label="Simulation")
axes[1].set_xlabel("Run")
axes[1].set_ylabel("Time (s)")
axes[1].set_title("Time breakdown per run")
axes[1].legend()
plt.tight_layout()
plt.savefig("tools/benchmark_init_vs_sim.png", dpi=150)
plt.show()
```

When multiple `--cell-radius` values are used, the impact of the interaction search radius on performance can be visualised:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for cr, group in ok.groupby("CELL_RADIUS"):
    group = group.sort_values("N_CELLS")
    sr = 3.0 * cr
    ax.plot(group["N_CELLS"], group["time_per_step_s"],
            marker="o", label=f"CELL_RADIUS={cr} (search={sr:.1f})")
ax.set_xlabel("N_CELLS")
ax.set_ylabel("Time per step (s)")
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend()
ax.set_title("Effect of cell radius / search radius on scaling")
plt.tight_layout()
plt.savefig("tools/benchmark_cell_radius.png", dpi=150)
plt.show()
```

Tip: To produce the cell-radius plot, run the benchmark with several radius values, e.g.:

```shell
python tools/benchmark_perf.py --steps 10 --cell-radius 5.0 8.412 15.0 --n 21 --focad 10
```
A companion script, tools/plot_benchmark_results.py, generates a
complete set of publication-quality figures from the benchmark CSV.
```shell
# Generate all plots (saved to tools/benchmark_plots/ by default)
python tools/plot_benchmark_results.py

# Also display figures interactively
python tools/plot_benchmark_results.py --show

# Use a different CSV file
python tools/plot_benchmark_results.py --csv tools/benchmark_results_20260315.csv

# Custom output directory and resolution
python tools/plot_benchmark_results.py --csv results.csv --outdir my_figs/ --dpi 150
```

| Flag | Default | Description |
|---|---|---|
| `--csv` | `tools/benchmark_results.csv` | Path to the benchmark CSV file |
| `--outdir` | `tools/benchmark_plots/` | Output directory for saved PNG figures |
| `--show` | off | Display figures interactively after saving |
| `--dpi` | 300 | Resolution for saved PNGs |
The script generates up to 11 figure types (some are skipped automatically when the data does not contain enough variation):
| # | Figure | File name pattern | Notes |
|---|---|---|---|
| 1 | Pairwise heatmaps | `heatmap_*.png` | Mean time/step for each pair of swept variables |
| 2 | Scaling curves (log-log) | `scaling_*.png` | One per swept variable, grouped by secondary variable |
| 3 | Total-agent scatter | `total_agent_scatter.png` | Power-law fit included |
| 4 | Scaling exponents | `scaling_exponents.png` | Bar chart of log-log slopes |
| 5 | Cost breakdown | `cost_breakdown.png` | Pie chart of marginal cost attribution |
| 6 | Box-plots | `boxplot_*.png` | Distribution by each swept variable |
| 7 | Total time bars | `total_time_bars.png` | Sorted wall-clock time per configuration |
| 8 | Contourf panel | `contourf_panel.png` | Combined filled-contour figure |
| 9 | Summary panel | `summary_panel.png` | One-page overview |
| 10 | Cell-radius scaling | `cell_radius_scaling.png` | x = N_CELLS, y = time/step, curves per CELL_RADIUS |
| 11 | Init vs simulation time | `init_vs_sim_time.png` | Scatter + stacked-bar breakdown |
Figure #10 is particularly useful when benchmarking with multiple
--cell-radius values, since the MAX_SEARCH_RADIUS (= 3 × CELL_RADIUS)
directly controls the size of the cell–cell interaction neighbourhood and
has a large impact on performance.
```shell
# 1. Run a benchmark sweeping cell radius and cell count
python tools/benchmark_perf.py --steps 10 \
    --cell-radius 5.0 8.412 15.0 \
    --n-cells 100 500 1000 \
    --n 21 --focad 10

# 2. Generate all plots
python tools/plot_benchmark_results.py --show

# 3. Or point at a specific CSV
python tools/plot_benchmark_results.py --csv tools/benchmark_results.csv --outdir figures/
```

| Status | Meaning |
|---|---|
| `OK` | Run completed successfully; timing parsed from the `[BENCHMARK]` output line. |
| `dry-run` | Dry-run mode — no simulation was launched. |
| `ERROR(<code>)` | The subprocess exited with a non-zero return code. The last 20 lines of output are printed to the console. |
| `TIMEOUT` | The run exceeded the 1-hour timeout. |
| `CRITICAL_ERROR` | `model.py` hit a critical-error check and quit before completing. |
| `NO_TIMING` | The subprocess exited normally (code 0) but no `[BENCHMARK]` line was found in stdout. |
Some parameters — notably `ECM_POPULATION_SIZE` — are hard-coded as C++ template arguments (`getMacroProperty<float, N_SPECIES, ECM_POPULATION_SIZE>`). FLAMEGPU2's RTC compilation reads these values from the source text at build time, so they must be changed in the `.cpp` files themselves.
Rather than modifying the original files (even with backup/restore), the benchmark script copies the entire project into a temporary directory and patches only the copies. This means:
- Your source files are never touched — not even temporarily.
- If the script crashes, is killed, or the machine loses power, nothing is lost.
- You can continue developing in the main project while a benchmark runs.
- Multiple benchmark runs could (in principle) run in parallel from separate working copies.
Everything under the project root except heavy or irrelevant directories:
| Excluded directory | Reason |
|---|---|
| `.git` | Large; not needed for simulation |
| `__pycache__` | Regenerated automatically |
| `result_files`, `results` | Potentially very large output data |
| `_benchmark_workdir` | Avoids recursive nesting |
| `.vscode` | Editor settings |
| `manual_tests`, `docs`, `assets` | Not needed at runtime |
| `optimizer`, `postprocessing` | Not needed for the simulation run |
| `network_generator` | Not needed at runtime |
| `optuna_results`, `node_modules` | Not needed at runtime |

Files with the suffix `.db` are also excluded.
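This exclusion list maps naturally onto `shutil.copytree`'s `ignore` callback. A sketch of the pattern (the set below is a subset, and the script's real callback may differ):

```python
import shutil

# Subset of the exclusion set, for illustration.
COPY_EXCLUDE_DIRS = {".git", "__pycache__", "results", "_benchmark_workdir"}

def ignore_excluded(directory: str, names: list[str]) -> set[str]:
    """copytree `ignore` callback: skip excluded directories and .db files."""
    return {n for n in names if n in COPY_EXCLUDE_DIRS or n.endswith(".db")}

# Would be used as: shutil.copytree(project_root, workdir, ignore=ignore_excluded)
ignored = ignore_excluded("/project", [".git", "model.py", "study.db", "results"])
```

`copytree` calls the callback once per directory it visits, so nested excluded names are skipped wherever they appear.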
model.py emits a structured line at the end of each run:
```text
[BENCHMARK] EXECUTION_TIME=17.469134 STEPS=10 TIME_PER_STEP=1.234568 INIT_TIME=5.123456 SIMULATION_TIME=12.345678 RTC_TIME=2.100000 INIT_FUNCTIONS_TIME=1.800000 EXIT_FUNCTIONS_TIME=0.010000
```
The benchmark script parses this with a regex. The first five fields (`EXECUTION_TIME`, `STEPS`, `TIME_PER_STEP`, `INIT_TIME`, `SIMULATION_TIME`) are required. The three additional fields (`RTC_TIME`, `INIT_FUNCTIONS_TIME`, `EXIT_FUNCTIONS_TIME`) are optional and come from CUDASimulation's internal high-resolution timers.
Timing definitions:
| Field | Source | What it measures |
|---|---|---|
EXECUTION_TIME |
time.time() wall-clock |
Total Python process time |
INIT_TIME |
Python setup + RTC + init functions | Everything before the first simulation step |
SIMULATION_TIME |
sum(CUDASimulation.getElapsedTimeSteps()) |
Pure stepping time (excludes init/exit) |
RTC_TIME |
CUDASimulation.getElapsedTimeRTCInitialisation() |
CUDA kernel compilation |
INIT_FUNCTIONS_TIME |
CUDASimulation.getElapsedTimeInitFunctions() |
Agent population creation on GPU |
EXIT_FUNCTIONS_TIME |
CUDASimulation.getElapsedTimeExitFunctions() |
Exit function execution |
You can customise the set of excluded directories by editing the `COPY_EXCLUDE_DIRS` set near the top of `benchmark_perf.py`:

```python
COPY_EXCLUDE_DIRS = {
    ".git",
    "__pycache__",
    "result_files",
    "results",
    "manual_tests",
    "network_generator",
    "docs",
    "assets",
    "optimizer",
    "postprocessing",
    "_benchmark_workdir",
    "_benchmark_backups",
    "optuna_results",
    ".vscode",
    "node_modules",
}
```

If your simulation requires files from one of these directories at runtime, remove it from the set so it gets copied into the working directory.
The script expects a line matching `^N = <digits>` (at the start of a line) in `model.py`. If `N` has been renamed or the formatting changed, update `RE_N_PY` in `benchmark_perf.py`.
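For reference, a pattern of this shape matches the expected line (illustrative; check `RE_N_PY` in `benchmark_perf.py` for the authoritative version):

```python
import re

# Matches `N = <digits>` only when it occupies a whole line.
RE_N_PY = re.compile(r"^N\s*=\s*\d+\s*$", re.MULTILINE)

source = "import math\nN = 21\nN_CELLS = 100\n"
match = RE_N_PY.search(source)
patched = RE_N_PY.sub("N = 41", source)
```

Note the `$` anchor: it prevents the pattern from accidentally matching assignments such as `N_CELLS = 100`.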
The working copy may be missing a file or directory needed at import time.
Check whether a required module lives in one of the excluded directories
and remove that directory from COPY_EXCLUDE_DIRS.
Re-run with --keep-workdir and try running python model.py manually
from the working copy to diagnose.
model.py must print the [BENCHMARK] line. Verify it is present by
running python model.py --overrides ... | findstr BENCHMARK (Windows) or
grep BENCHMARK (Linux/macOS).
Each working copy contains all .cpp, .py, .pkl, and .vtk files.
The .pkl network files can be tens of MB. If disk space is tight:
- Use `--keep-workdir` only when debugging.
- The working copy is deleted automatically after a normal run.
- Consider reducing the number of `.vtk`/`.pkl` files in the project root before benchmarking.
The default per-run timeout is 3600 seconds (1 hour). For very large
configurations you may need to increase this by editing timeout=3600 in
the subprocess.run() call inside _run_single().
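The timeout behaviour can be sketched with `subprocess.run`'s `timeout` argument, which raises `subprocess.TimeoutExpired` when the deadline passes. A hedged sketch of how that might map onto the status codes (the function name is illustrative):

```python
import subprocess
import sys

def run_with_timeout(cmd: list[str], cwd: str, timeout_s: float = 3600) -> str:
    """Map a subprocess outcome onto the benchmark's status codes."""
    try:
        proc = subprocess.run(cmd, cwd=cwd, capture_output=True,
                              text=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return "TIMEOUT"
    return "OK" if proc.returncode == 0 else f"ERROR({proc.returncode})"

# A child that sleeps past the deadline is reported as TIMEOUT.
status = run_with_timeout(
    [sys.executable, "-c", "import time; time.sleep(2)"],
    cwd=".", timeout_s=1)
```

`subprocess.run` kills the child when the timeout expires, so a hung configuration cannot stall the whole batch.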
To add a new parameter axis (e.g., `TIME_STEP`):

1. Add a CLI argument in `main()`:

   ```python
   parser.add_argument(
       "--time-step", type=float, nargs="+", default=[0.1],
       help="TIME_STEP values. Default: 0.1")
   ```

2. Include it in the grid product:

   ```python
   grid = list(itertools.product(
       args.n, args.n_cells, args.focad,
       args.cell_radius, args.network, args.time_step))
   ```

3. Unpack it in the loop and pass it to `_run_single()`.

4. Add it to the overrides dict inside `_run_single()`:

   ```python
   overrides["TIME_STEP"] = time_step
   ```

5. Add a column to `FIELDNAMES` and `_make_result()`.
If the new parameter requires patching C++ source files (like
ECM_POPULATION_SIZE), add the corresponding regex and patching logic
following the existing _patch_ecm_pop_in_cpp() pattern.
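Such patching logic generalises to any `<name> = <number>` constant. An illustrative helper in the spirit of `_patch_ecm_pop_in_cpp()` (the declaration style in the sample source is an assumption):

```python
import re

def patch_constant(source: str, name: str, value: str) -> str:
    """Rewrite `<name> = <number>` in C++ source text. Illustrative only."""
    pattern = re.compile(rf"({re.escape(name)}\s*=\s*)[0-9.]+")
    new_source, count = pattern.subn(rf"\g<1>{value}", source)
    if count == 0:
        # Fail loudly rather than silently benchmarking the wrong value.
        raise ValueError(f"{name} not found in source")
    return new_source

cpp = "const float TIME_STEP = 0.1f; const int ECM_POPULATION_SIZE = 9261;"
patched = patch_constant(cpp, "TIME_STEP", "0.05")
```

Raising on zero matches mirrors the fail-fast behaviour you want here: a silent no-op patch would produce timings for the wrong configuration.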
Last updated: March 2026