diff --git a/CLAUDE.md b/CLAUDE.md index 6252bfd9..91179cce 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -22,21 +22,31 @@ python scripts/interferometer/modeling.py Jupyter notebooks mirror the scripts in `notebooks/` and use `# %%` markers to separate cells. -**Integration testing / fast mode**: Set `PYAUTOFIT_TEST_MODE=1` to run any example script without performing full non-linear search sampling. The search is skipped and a mock result is returned immediately, making this ideal for high-level integration testing of scripts end-to-end: - -```bash -PYAUTOFIT_TEST_MODE=1 python scripts/imaging/modeling.py -``` - -**Codex / sandboxed runs**: when running from Codex or any restricted environment, set writable cache directories so `numba` and `matplotlib` do not fail on unwritable home or source-tree paths: - -```bash -NUMBA_CACHE_DIR=/tmp/numba_cache MPLCONFIGDIR=/tmp/matplotlib python scripts/imaging/modeling.py -``` - -This workspace is often imported from `/mnt/c/...` and Codex may not be able to write to module `__pycache__` directories or `/home/jammy/.cache`, which can cause import-time `numba` caching failures without this override. - -**Parallelization bug fix**: On Linux/Windows, model fits using JAX + multiprocessing may error unless the fit is wrapped in `if __name__ == "__main__": fit()`. See `scripts/guides/modeling/bug_fix.py` for the pattern. +**Integration testing / fast mode**: Set `PYAUTOFIT_TEST_MODE=1` to run any example script without performing full non-linear search sampling. 
The search is skipped and a mock result is returned immediately, making this ideal for high-level integration testing of scripts end-to-end: + +```bash +PYAUTOFIT_TEST_MODE=1 python scripts/imaging/modeling.py +``` + +**Fast smoke tests**: For maximum speed, combine test mode with small datasets and disabled critical curves: + +```bash +PYAUTOFIT_TEST_MODE=2 PYAUTO_WORKSPACE_SMALL_DATASETS=1 PYAUTO_DISABLE_CRITICAL_CAUSTICS=1 PYAUTO_FAST_PLOTS=1 python scripts/imaging/modeling.py +``` + +- `PYAUTO_WORKSPACE_SMALL_DATASETS=1` — caps all grids/masks to 15x15 pixels at 0.6"/px, making simulators and all downstream computations dramatically faster. Delete `dataset/` when toggling this variable, so that datasets are re-simulated at the new size. +- `PYAUTO_DISABLE_CRITICAL_CAUSTICS=1` — skips critical curve and caustic overlay computation in plots. +- `PYAUTO_FAST_PLOTS=1` — skips `plt.tight_layout()` in subplot functions, avoiding expensive matplotlib font/text metrics computation. + +**Codex / sandboxed runs**: when running from Codex or any restricted environment, set writable cache directories so `numba` and `matplotlib` do not fail on unwritable home or source-tree paths: + +```bash +NUMBA_CACHE_DIR=/tmp/numba_cache MPLCONFIGDIR=/tmp/matplotlib python scripts/imaging/modeling.py +``` + +This workspace is often imported from `/mnt/c/...` and Codex may not be able to write to module `__pycache__` directories or `/home/jammy/.cache`, which can cause import-time `numba` caching failures without this override. + +**Parallelization bug fix**: On Linux/Windows, model fits using JAX + multiprocessing may error unless the fit is wrapped in `if __name__ == "__main__": fit()`. See `scripts/guides/modeling/bug_fix.py` for the pattern.
## Testing All Scripts diff --git a/scripts/imaging/features/pixelization/modeling.py b/scripts/imaging/features/pixelization/modeling.py index 6edbbce7..94159f60 100644 --- a/scripts/imaging/features/pixelization/modeling.py +++ b/scripts/imaging/features/pixelization/modeling.py @@ -288,8 +288,11 @@ This is why the `batch_size` above is 20, lower than other examples, because reducing the batch size ensures a more modest amount of VRAM is used. If you have a GPU with more VRAM, increasing the batch size will lead to faster run times. -Given VRAM use is an important consideration, we print out the estimated VRAM required for this +Given VRAM use is an important consideration, we print out the estimated VRAM required for this model-fit and advise you do this for your own pixelization model-fits. + +The method below prints the VRAM usage estimate for the analysis and model with the specified batch size. +It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits. """ analysis.print_vram_use(model=model, batch_size=search.batch_size) diff --git a/scripts/interferometer/features/pixelization/modeling.py b/scripts/interferometer/features/pixelization/modeling.py index 1f45e177..1a426d94 100644 --- a/scripts/interferometer/features/pixelization/modeling.py +++ b/scripts/interferometer/features/pixelization/modeling.py @@ -337,6 +337,9 @@ VRAM does scale with batch size though, and for high resolution datasets may require you to reduce from the value of 20 set above if your GPU does not have much VRAM (e.g. < 4GB). + +The method below prints the VRAM usage estimate for the analysis and model with the specified batch size. +It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits.
""" analysis.print_vram_use(model=model, batch_size=search.batch_size) diff --git a/scripts/multi/modeling.py b/scripts/multi/modeling.py index b2f8b9b1..657dc356 100644 --- a/scripts/multi/modeling.py +++ b/scripts/multi/modeling.py @@ -272,8 +272,11 @@ When multiple datasets are fitted simultaneously, as in this example, VRAM usage increases with each dataset, as their data structures must all be stored in VRAM. -Given VRAM use is an important consideration, we print out the estimated VRAM required for this +Given VRAM use is an important consideration, we print out the estimated VRAM required for this model-fit and advise you do this for your own pixelization model-fits. + +The method below prints the VRAM usage estimate for the analysis and model with the specified batch size, +it takes about 20-30 seconds to run so you may want to comment it out once you are familiar with your GPU's VRAM limits. """ factor_graph.print_vram_use( model=factor_graph.global_prior_model, batch_size=search.batch_size diff --git a/smoke_tests.txt b/smoke_tests.txt index 40bccfdd..e3e03a94 100644 --- a/smoke_tests.txt +++ b/smoke_tests.txt @@ -3,6 +3,6 @@ imaging/fit.py imaging/modeling.py interferometer/start_here.py multi/start_here.py -ellipse/modeling.py +# ellipse/modeling.py # disabled: bypass mode tuple-path KeyError on ellipse models (rhayes777/PyAutoFit#1179) guides/galaxies.py guides/modeling/cookbook.py