Merged
5 changes: 4 additions & 1 deletion scripts/imaging/features/pixelization/modeling.py
@@ -288,8 +288,11 @@
This is why the `batch_size` above is 20, lower than other examples, because reducing the batch size ensures a more
modest amount of VRAM is used. If you have a GPU with more VRAM, increasing the batch size will lead to faster run times.

Given VRAM use is an important consideration, we print out the estimated VRAM required for this
model-fit and advise you to do this for your own pixelization model-fits.

The method below prints the VRAM usage estimate for the analysis and model with the specified batch size.
It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits.
"""
analysis.print_vram_use(model=model, batch_size=search.batch_size)
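Because VRAM usage grows roughly linearly with batch size, a simple back-of-the-envelope estimate can help pick a batch size before running the full `print_vram_use` check. The sketch below is hypothetical: the per-fit cost and fixed overhead figures are illustrative assumptions, not values from the library, and the helper functions are not part of its API.

```python
# Hypothetical sketch: VRAM usage is modeled as a fixed overhead plus a
# per-fit cost that scales linearly with batch size. The numbers used in
# the examples are illustrative assumptions, not measured values.


def estimate_vram_gb(per_fit_vram_gb: float, batch_size: int, overhead_gb: float = 0.5) -> float:
    """Rough VRAM estimate: fixed overhead plus per-fit cost times batch size."""
    return overhead_gb + per_fit_vram_gb * batch_size


def max_batch_size(per_fit_vram_gb: float, gpu_vram_gb: float, overhead_gb: float = 0.5) -> int:
    """Largest batch size whose estimated VRAM fits within the GPU's capacity."""
    return int((gpu_vram_gb - overhead_gb) / per_fit_vram_gb)


# Assuming each fit costs ~0.15 GB, a batch size of 20 needs ~3.5 GB,
# which is why a 4 GB GPU sits close to its limit at that setting.
print(estimate_vram_gb(per_fit_vram_gb=0.15, batch_size=20))
print(max_batch_size(per_fit_vram_gb=0.15, gpu_vram_gb=4.0))
```

In practice the per-fit cost depends on the dataset resolution and model, so the printed estimate from `print_vram_use` remains the authoritative check.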

3 changes: 3 additions & 0 deletions scripts/interferometer/features/pixelization/modeling.py
@@ -337,6 +337,9 @@

VRAM does scale with batch size though, and for high resolution datasets you may need to reduce the batch size from the
value of 20 set above if your GPU has limited VRAM (e.g. < 4GB).

The method below prints the VRAM usage estimate for the analysis and model with the specified batch size.
It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits.
"""
analysis.print_vram_use(model=model, batch_size=search.batch_size)

5 changes: 4 additions & 1 deletion scripts/multi/modeling.py
@@ -272,8 +272,11 @@
When multiple datasets are fitted simultaneously, as in this example, VRAM usage increases with each
dataset, as their data structures must all be stored in VRAM.

Given VRAM use is an important consideration, we print out the estimated VRAM required for this
model-fit and advise you to do this for your own model-fits.

The method below prints the VRAM usage estimate for the analysis and model with the specified batch size.
It takes about 20-30 seconds to run, so you may want to comment it out once you are familiar with your GPU's VRAM limits.
"""
factor_graph.print_vram_use(
model=factor_graph.global_prior_model, batch_size=search.batch_size