Merged
40 changes: 25 additions & 15 deletions CLAUDE.md
@@ -22,21 +22,31 @@ python scripts/interferometer/modeling.py

Jupyter notebooks mirror the scripts in `notebooks/` and use `# %%` markers to separate cells.

**Integration testing / fast mode**: Set `PYAUTOFIT_TEST_MODE=1` to run any example script without performing full non-linear search sampling. The search is skipped and a mock result is returned immediately, making this ideal for high-level integration testing of scripts end-to-end:

```bash
PYAUTOFIT_TEST_MODE=1 python scripts/imaging/modeling.py
```

**Fast smoke tests**: For maximum speed, combine test mode with small datasets and disabled critical curves:

```bash
PYAUTOFIT_TEST_MODE=2 PYAUTO_WORKSPACE_SMALL_DATASETS=1 PYAUTO_DISABLE_CRITICAL_CAUSTICS=1 PYAUTO_FAST_PLOTS=1 python scripts/imaging/modeling.py
```

- `PYAUTO_WORKSPACE_SMALL_DATASETS=1` — caps all grids/masks to 15x15 pixels at 0.6"/px, making simulators and all downstream computations dramatically faster. Delete `dataset/` when toggling this variable.
- `PYAUTO_DISABLE_CRITICAL_CAUSTICS=1` — skips critical curve and caustic overlay computation in plots.
- `PYAUTO_FAST_PLOTS=1` — skips `plt.tight_layout()` in subplot functions, avoiding expensive matplotlib font/text metrics computation.
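
The note about deleting `dataset/` when toggling `PYAUTO_WORKSPACE_SMALL_DATASETS` can be sketched as a small helper — a hypothetical convenience, not part of the workspace:

```python
import shutil
from pathlib import Path

def clear_dataset_dir(path="dataset"):
    # Remove cached simulated datasets so stale data at the other
    # resolution is not reused after toggling the variable.
    p = Path(path)
    if p.exists():
        shutil.rmtree(p)
    return not p.exists()
```

Running `clear_dataset_dir()` before re-executing a simulator script ensures the 15x15 (or full-size) data is regenerated from scratch.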

**Codex / sandboxed runs**: when running from Codex or any restricted environment, set writable cache directories so `numba` and `matplotlib` do not fail on unwritable home or source-tree paths:

```bash
NUMBA_CACHE_DIR=/tmp/numba_cache MPLCONFIGDIR=/tmp/matplotlib python scripts/imaging/modeling.py
```

This workspace is often imported from `/mnt/c/...` and Codex may not be able to write to module `__pycache__` directories or `/home/jammy/.cache`, which can cause import-time `numba` caching failures without this override.
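
If you cannot set the variables in the shell, the same effect can be achieved from Python, provided it runs before the first `numba` or `matplotlib` import — a minimal sketch, assuming neither library has been imported yet:

```python
import os
import tempfile

# Point both caches at a writable temp location before numba or
# matplotlib are imported, mirroring the shell invocation above.
cache_root = tempfile.gettempdir()
os.environ["NUMBA_CACHE_DIR"] = os.path.join(cache_root, "numba_cache")
os.environ["MPLCONFIGDIR"] = os.path.join(cache_root, "matplotlib")
```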

**Parallelization bug fix**: On Linux/Windows, model fits using JAX + multiprocessing may error unless the fit is wrapped in `if __name__ == "__main__": fit()`. See `scripts/guides/modeling/bug_fix.py` for the pattern.
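
A minimal, self-contained sketch of the guard pattern (the multiprocessing work below is a stand-in; the actual workspace example is in `scripts/guides/modeling/bug_fix.py`):

```python
import multiprocessing as mp

def fit():
    # Stand-in for the model-fit; a real script would build the dataset,
    # model and search here and run the fit inside this function.
    with mp.Pool(processes=2) as pool:
        return sum(pool.map(abs, [-1, -2, -3]))

if __name__ == "__main__":
    # Without this guard, spawned worker processes re-import the module
    # and re-execute its top-level code, which errors on Linux/Windows.
    fit()
```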

## Testing All Scripts

5 changes: 4 additions & 1 deletion scripts/imaging/features/pixelization/modeling.py
@@ -288,8 +288,11 @@
This is why the `batch_size` above is 20, lower than in other examples: reducing the batch size keeps the amount of VRAM used modest. If your GPU has more VRAM, increasing the batch size will lead to faster run times.

Given VRAM use is an important consideration, we print out the estimated VRAM required for this
model-fit and advise you to do the same for your own pixelization model-fits.

The method below prints the VRAM usage estimate for the analysis and model with the specified batch size.
It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits.
"""
analysis.print_vram_use(model=model, batch_size=search.batch_size)
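
Since the estimate is slow, one alternative to commenting the call out is gating it behind an environment variable — a hypothetical sketch (the `PRINT_VRAM_USE` variable and the stub are illustrative, not part of the workspace):

```python
import os

def should_print_vram(env=os.environ):
    # Only run the ~20-30 second estimate when explicitly requested.
    return env.get("PRINT_VRAM_USE") == "1"

if should_print_vram():
    # Stand-in for analysis.print_vram_use(model=model, batch_size=...).
    print("estimated VRAM use: ...")
```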

3 changes: 3 additions & 0 deletions scripts/interferometer/features/pixelization/modeling.py
@@ -337,6 +337,9 @@

VRAM does scale with batch size though, and for high resolution datasets you may need to reduce it from the value of
20 set above if your GPU does not have much VRAM (e.g. < 4GB).

The method below prints the VRAM usage estimate for the analysis and model with the specified batch size.
It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits.
"""
analysis.print_vram_use(model=model, batch_size=search.batch_size)

5 changes: 4 additions & 1 deletion scripts/multi/modeling.py
@@ -272,8 +272,11 @@
When multiple datasets are fitted simultaneously, as in this example, VRAM usage increases with each
dataset, as their data structures must all be stored in VRAM.

Given VRAM use is an important consideration, we print out the estimated VRAM required for this
model-fit and advise you to do the same for your own model-fits.

The method below prints the VRAM usage estimate for the analysis and model with the specified batch size.
It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits.
"""
factor_graph.print_vram_use(
model=factor_graph.global_prior_model, batch_size=search.batch_size
2 changes: 1 addition & 1 deletion smoke_tests.txt
@@ -3,6 +3,6 @@ imaging/fit.py
imaging/modeling.py
interferometer/start_here.py
multi/start_here.py
ellipse/modeling.py
# ellipse/modeling.py # disabled: bypass mode tuple-path KeyError on ellipse models (rhayes777/PyAutoFit#1179)
guides/galaxies.py
guides/modeling/cookbook.py