Overview
The workspace modeling scripts (`imaging/modeling.py` and `interferometer/modeling.py`) serve as smoke tests when run with `PYAUTOFIT_TEST_MODE=2`, but each takes ~50s — totaling ~100s for just two scripts. Profiling reveals that most time is spent on VRAM estimation (23s), plotting (33s), `model.info` string formatting (12s), and result deserialization (9s) rather than on the actual likelihood evaluation. Reducing these overheads would make the smoke-test feedback loop significantly faster.
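To reproduce the per-script timings, a small runner that sets the environment variable and times one script at a time might look like the sketch below (assumptions: it is invoked from the `autolens_workspace` root, and the script paths are the two workspace scripts named above):

```python
import os
import subprocess
import sys
import time


def timed_run(script_args, test_mode="2"):
    """Run a Python script under PYAUTOFIT_TEST_MODE and return elapsed seconds.

    Sketch only. From the autolens_workspace root this would be called as:
        timed_run(["scripts/imaging/modeling.py"])
        timed_run(["scripts/interferometer/modeling.py"])
    """
    # Copy the current environment and override only the test-mode flag.
    env = {**os.environ, "PYAUTOFIT_TEST_MODE": test_mode}
    start = time.perf_counter()
    subprocess.run([sys.executable, *script_args], env=env, check=True)
    return time.perf_counter() - start
```

Wrapping the call in `time.perf_counter()` gives the whole-script wall time; the finer phase breakdown in the table below requires instrumenting inside the scripts themselves.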
Plan
- Profile both modeling scripts to identify where runtime is spent under `PYAUTOFIT_TEST_MODE=2`
- Fix bug in `interferometer/modeling.py` line 477 (undefined variable `fit`)
- Reduce or skip VRAM estimation in test mode (~23s saved)
- Reduce or skip plotting calls in test mode (~33s saved)
- Speed up MGE `model.info` string formatting (~12s saved)
- Speed up result access / deserialization (~9s saved)
- Optimize `search.fit` pre-fit output overhead (~18s saved)
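The line-477 fix in the second bullet is a one-token change, but the pattern matters: the plotter should receive the fit reconstructed from the search result, not a local name that was never defined. A minimal sketch with stand-in objects (`Result` and `plot_fit` are illustrative; the real objects are PyAutoFit's result and the `aplt.FitInterferometerPlotter` used in the workspace script):

```python
class Result:
    # Stand-in for the PyAutoFit result; the real object exposes
    # max_log_likelihood_fit as a property computed from the samples.
    def __init__(self, max_log_likelihood_fit):
        self.max_log_likelihood_fit = max_log_likelihood_fit


def plot_fit(fit):
    # Illustrative stand-in for aplt.FitInterferometerPlotter(fit=...).
    return f"plotting {fit!r}"


result = Result(max_log_likelihood_fit="max-LH fit")

# Before (line 477): plot_fit(fit=fit) raised NameError, since `fit`
# was never defined in the script.
# After: reference the best fit off the result object.
plot_fit(fit=result.max_log_likelihood_fit)
```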
Detailed implementation plan
Affected Repositories
- autolens_workspace (primary)
- PyAutoFit (model.info, search.fit, result access)
- PyAutoArray (potential plotting optimizations)
- PyAutoGalaxy (potential plotting optimizations)
- PyAutoLens (VRAM estimation, analysis, plotting)
Work Classification
Both (library + workspace)
Branch Survey
| Repository | Current Branch | Dirty? |
| --- | --- | --- |
| autolens_workspace | main | minor dataset changes |
| PyAutoFit | main_build | clean |
| PyAutoArray | main | untracked test files |
| PyAutoGalaxy | main | minor changes |
| PyAutoLens | main | clean |

Suggested branch: `feature/smoke-test-fast`
Profiling Results (PYAUTOFIT_TEST_MODE=2)
| Phase | Imaging | Interferometer |
| --- | --- | --- |
| Imports | 2.37s | (shared) |
| Data load | 0.13s | 0.00s |
| Plotting | 9.08s | 24.00s |
| Model composition | 1.64s | 0.55s |
| Model.info | 6.92s | 5.38s |
| VRAM estimation | 16.17s | 6.96s |
| search.fit | 10.77s | 7.81s |
| Result access | 4.75s | 4.14s |
| TOTAL | 49.47s | 48.83s |
Implementation Steps
- Fix interferometer bug — `scripts/interferometer/modeling.py` line 477: `fit=fit` → `fit=result.max_log_likelihood_fit`
- Skip VRAM estimation in test mode — in PyAutoLens analysis classes, make `print_vram_use()` a no-op when `PYAUTOFIT_TEST_MODE >= 2`
- Skip or reduce plotting in test mode — either skip `aplt.*` calls in test mode or use a non-rendering backend
- Speed up `model.info` for MGE models — profile and optimize the string formatting in PyAutoFit for large model trees
- Speed up result access — profile why `max_log_likelihood_instance` takes ~5s and optimize
- Reduce `search.fit` pre-fit output overhead — the pre-fit file output (model.info, visualization) takes several seconds
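The VRAM and plotting steps above share one mechanism: read the test-mode level from the environment and short-circuit expensive work when it is ≥ 2. A hedged sketch (the helper and both guarded functions are stand-ins; the real parsing would live alongside `PyAutoFit/autofit/non_linear/test_mode.py`, and the real methods are PyAutoLens's `print_vram_use()` and the `aplt.*` plotters):

```python
import os


def test_mode_level() -> int:
    # Hypothetical helper: parse PYAUTOFIT_TEST_MODE, treating unset or
    # malformed values as 0 (normal operation).
    try:
        return int(os.environ.get("PYAUTOFIT_TEST_MODE", "0"))
    except ValueError:
        return 0


def print_vram_use():
    # Stand-in for the PyAutoLens analysis method: becomes a no-op at
    # test mode >= 2, skipping the ~23s VRAM estimation.
    if test_mode_level() >= 2:
        return None
    return "vram estimate"  # placeholder for the real estimation work


def subplot_fit(fit):
    # The same guard applied to a plotting call (stand-in for aplt.*).
    if test_mode_level() >= 2:
        return None
    return f"rendered subplot for {fit}"
```

Centralizing the guard in one helper keeps the skip condition consistent across repositories, rather than each call site re-reading and re-parsing the environment variable its own way.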
Key Files
- `autolens_workspace/scripts/imaging/modeling.py`
- `autolens_workspace/scripts/interferometer/modeling.py`
- `PyAutoFit/autofit/non_linear/test_mode.py`
- `PyAutoFit/autofit/non_linear/result.py`
- `PyAutoLens/autolens/analysis/analysis.py`
Original Prompt
Click to expand starting prompt
The workspace scripts are used as integration tests and smoke tests, with
good examples being @autolens_workspace/scripts/imaging/modeling.py
and @autolens_workspace/scripts/interferometer/modeling.py.
PYAUTOFIT_TEST_MODE=2
These run with PYAUTOFIT_TEST_MODE=2, which skips sampling but still performs
a likelihood evaluation to test the model.
However, we would benefit from these smoke tests running a lot faster as they
are a core part of ensuring the software from top to bottom is working. We ultimately
want all scripts, if possible, to run really fast.
Can you run the above two modeling.py scripts using PYAUTOFIT_TEST_MODE=2 and give a
breakdown of where the run time is? We can then work towards making them run a lot faster
and thus having quick smoke tests and integration tests.