diff --git a/README.rst b/README.rst
index c28d2962c..de842d22d 100644
--- a/README.rst
+++ b/README.rst
@@ -1,5 +1,5 @@
-PyAutoLens: Open-Source Strong Lensing
-======================================
+PyAutoLens-JAX: Open-Source Strong Lensing
+==========================================
 
 .. |nbsp| unicode:: 0xA0
    :trim:
@@ -50,7 +50,7 @@ PyAutoLens: Open-Source Strong Lensing
 When two or more galaxies are aligned perfectly down our line-of-sight, the background galaxy appears multiple times.
 
-This is called strong gravitational lensing and **PyAutoLens** makes it simple to model strong gravitational lenses.
+This is called strong gravitational lensing and **PyAutoLens** makes it **simple** to model strong gravitational lenses, using JAX to **accelerate lens modeling on GPUs**.
 
 Getting Started
 ---------------
@@ -80,136 +80,4 @@ For users less familiar with gravitational lensing, Bayesian inference and scien
 you may wish to read through the **HowToLens** lectures. These teach you the basic principles of gravitational
 lensing and Bayesian inference, with the content pitched at undergraduate level and above.
 
-A complete overview of the lectures `is provided on the HowToLens readthedocs page `_
-
-API Overview
--------------
-
-Lensing calculations are performed in **PyAutoLens** by building a ``Tracer`` object from ``LightProfile``,
-``MassProfile`` and ``Galaxy`` objects. We create a simple strong lens system where a redshift 0.5
-lens ``Galaxy`` with an ``Isothermal`` ``MassProfile`` lenses a background source at redshift 1.0 with an
-``Exponential`` ``LightProfile`` representing a disk.
-
-.. code-block:: python
-
-    import autolens as al
-    import autolens.plot as aplt
-    from astropy import cosmology as cosmo
-
-    """
-    To describe the deflection of light by mass, two-dimensional grids of (y,x) Cartesian
-    coordinates are used.
-    """
-    grid = al.Grid2D.uniform(
-        shape_native=(50, 50),
-        pixel_scales=0.05,  # <- Conversion from pixel units to arc-seconds.
-    )
-
-    """
-    The lens galaxy has an elliptical isothermal mass profile and is at redshift 0.5.
-    """
-    mass = al.mp.Isothermal(
-        centre=(0.0, 0.0), ell_comps=(0.1, 0.05), einstein_radius=1.6
-    )
-
-    lens_galaxy = al.Galaxy(redshift=0.5, mass=mass)
-
-    """
-    The source galaxy has an elliptical exponential light profile and is at redshift 1.0.
-    """
-    disk = al.lp.Exponential(
-        centre=(0.3, 0.2),
-        ell_comps=(0.05, 0.25),
-        intensity=0.05,
-        effective_radius=0.5,
-    )
-
-    source_galaxy = al.Galaxy(redshift=1.0, disk=disk)
-
-    """
-    We create the strong lens using a Tracer, which uses the galaxies, their redshifts
-    and an input cosmology to determine how light is deflected on its path to Earth.
-    """
-    tracer = al.Tracer(
-        galaxies=[lens_galaxy, source_galaxy],
-        cosmology = al.cosmo.Planck15()
-    )
-
-    """
-    We can use the Grid2D and Tracer to perform many lensing calculations, for example
-    plotting the image of the lensed source.
-    """
-    tracer_plotter = aplt.TracerPlotter(tracer=tracer, grid=grid)
-    tracer_plotter.figures_2d(image=True)
-
-With **PyAutoLens**, you can begin modeling a lens in minutes. The example below demonstrates a simple analysis which
-fits the lens galaxy's mass with an ``Isothermal`` and the source galaxy's light with a ``Sersic``.
-
-.. code-block:: python
-
-    import autofit as af
-    import autolens as al
-    import autolens.plot as aplt
-
-    """
-    Load Imaging data of the strong lens from the dataset folder of the workspace.
-    """
-    dataset = al.Imaging.from_fits(
-        data_path="/path/to/dataset/image.fits",
-        noise_map_path="/path/to/dataset/noise_map.fits",
-        psf_path="/path/to/dataset/psf.fits",
-        pixel_scales=0.1,
-    )
-
-    """
-    Create a mask for the imaging data, which we setup as a 3.0" circle, and apply it.
-    """
-    mask = al.Mask2D.circular(
-        shape_native=dataset.shape_native,
-        pixel_scales=dataset.pixel_scales,
-        radius=3.0
-    )
-    dataset = dataset.apply_mask(mask=mask)
-
-    """
-    We model the lens galaxy using an elliptical isothermal mass profile and
-    the source galaxy using an elliptical sersic light profile.
-
-    To setup these profiles as model components whose parameters are free & fitted for
-    we set up each Galaxy as a `Model` and define the model as a `Collection` of all galaxies.
-    """
-    # Lens:
-
-    mass = af.Model(al.mp.Isothermal)
-    lens = af.Model(al.Galaxy, redshift=0.5, mass=lens_mass_profile)
-
-    # Source:
-
-    disk = af.Model(al.lp.Sersic)
-    source = af.Model(al.Galaxy, redshift=1.0, disk=disk)
-
-    # Overall Lens Model:
-    model = af.Collection(galaxies=af.Collection(lens=lens, source=source))
-
-    """
-    We define the non-linear search used to fit the model to the data (in this case, Dynesty).
-    """
-    search = af.Nautilus(name="search[example]", n_live=50)
-
-    """
-    We next set up the `Analysis`, which contains the `log likelihood function` that the
-    non-linear search calls to fit the lens model to the data.
-    """
-    analysis = al.AnalysisImaging(dataset=dataset)
-
-    """
-    To perform the model-fit we pass the model and analysis to the search's fit method. This will
-    output results (e.g., dynesty samples, model parameters, visualization) to hard-disk.
-    """
-    result = search.fit(model=model, analysis=analysis)
-
-    """
-    The results contain information on the fit, for example the maximum likelihood
-    model from the Dynesty parameter space search.
-    """
-    print(result.samples.max_log_likelihood())
\ No newline at end of file
+A complete overview of the lectures `is provided on the HowToLens readthedocs page `_
\ No newline at end of file
diff --git a/autolens/config/non_linear.yaml b/autolens/config/non_linear.yaml
index 2ed0f9508..d9356cf2d 100644
--- a/autolens/config/non_linear.yaml
+++ b/autolens/config/non_linear.yaml
@@ -29,9 +29,6 @@ nest:
       slices: 5
       update_interval: null
       walks: 5
-    updates:
-      iterations_per_update: 2500
-      remove_state_files_at_end: true
   DynestyStatic:
     initialize:
       method: prior
@@ -58,9 +55,3 @@ nest:
       slices: 5
       update_interval: null
       walks: 5
-    updates:
-      iterations_per_update: 5000
-      log_every_update: 1
-      model_results_every_update: 1
-      remove_state_files_at_end: true
-      visualize_every_update: 1
diff --git a/autolens/imaging/model/analysis.py b/autolens/imaging/model/analysis.py
index 5269df1b3..fe4d3a1a7 100644
--- a/autolens/imaging/model/analysis.py
+++ b/autolens/imaging/model/analysis.py
@@ -173,3 +173,5 @@ def save_attributes(self, paths: af.DirectoryPaths):
         )
 
         analysis.save_attributes(paths=paths)
+
+
diff --git a/autolens/imaging/model/plotter_interface.py b/autolens/imaging/model/plotter_interface.py
index f4bb79889..5864df02d 100644
--- a/autolens/imaging/model/plotter_interface.py
+++ b/autolens/imaging/model/plotter_interface.py
@@ -19,7 +19,7 @@ class PlotterInterfaceImaging(PlotterInterface):
     imaging_combined = AgPlotterInterfaceImaging.imaging_combined
 
     def fit_imaging(
-        self, fit: FitImaging, visuals_2d_of_planes_list : Optional[aplt.Visuals2D] = None
+        self, fit: FitImaging, visuals_2d_of_planes_list : Optional[aplt.Visuals2D] = None, quick_update: bool = False
     ):
         """
         Visualizes a `FitImaging` object, which fits an imaging dataset.
@@ -40,20 +40,10 @@ def fit_imaging(
             The maximum log likelihood `FitImaging` of the non-linear search which is used to plot the fit.
         """
-        if plot_setting(section="tracer", name="subplot_tracer"):
-
-            mat_plot_2d = self.mat_plot_2d_from()
-
-            fit_plotter = FitImagingPlotter(
-                fit=fit, mat_plot_2d=mat_plot_2d, visuals_2d_of_planes_list=visuals_2d_of_planes_list,
-            )
-
-            fit_plotter.subplot_tracer()
-
         def should_plot(name):
             return plot_setting(section=["fit", "fit_imaging"], name=name)
 
-        mat_plot_2d = self.mat_plot_2d_from()
+        mat_plot_2d = self.mat_plot_2d_from(quick_update=quick_update)
 
         fit_plotter = FitImagingPlotter(
             fit=fit, mat_plot_2d=mat_plot_2d, visuals_2d_of_planes_list=visuals_2d_of_planes_list,
@@ -61,7 +51,7 @@ def should_plot(name):
 
         plane_indexes_to_plot = [i for i in fit.tracer.plane_indexes_with_images if i != 0]
 
-        if should_plot("subplot_fit"):
+        if should_plot("subplot_fit") or quick_update:
 
             # This loop means that multiple subplot_fit objects are output for a double source plane lens.
 
@@ -71,6 +61,19 @@ def should_plot(name):
             else:
                 fit_plotter.subplot_fit()
 
+        if quick_update:
+            return
+
+        if plot_setting(section="tracer", name="subplot_tracer"):
+
+            mat_plot_2d = self.mat_plot_2d_from()
+
+            fit_plotter = FitImagingPlotter(
+                fit=fit, mat_plot_2d=mat_plot_2d, visuals_2d_of_planes_list=visuals_2d_of_planes_list,
+            )
+
+            fit_plotter.subplot_tracer()
+
         if should_plot("subplot_fit_log10"):
 
             try:
@@ -82,7 +85,6 @@ def should_plot(name):
             except ValueError:
                 pass
 
-
         if should_plot("subplot_of_planes"):
             fit_plotter.subplot_of_planes()
diff --git a/autolens/imaging/model/visualizer.py b/autolens/imaging/model/visualizer.py
index 3c2724f24..fbffb95ba 100644
--- a/autolens/imaging/model/visualizer.py
+++ b/autolens/imaging/model/visualizer.py
@@ -63,6 +63,7 @@ def visualize(
         paths: af.DirectoryPaths,
         instance: af.ModelInstance,
         during_analysis: bool,
+        quick_update: bool = False,
     ):
         """
         Output images of the maximum log likelihood model inferred by the model-fit. This function is called throughout
@@ -91,8 +92,56 @@ def visualize(
             via a non-linear search).
         """
+        import time
+
+        start_time = time.time()
+        fit = analysis.fit_from(instance=instance)
+        print(f"Fit From time: {time.time() - start_time} seconds")
+
+        start_time = time.time()
+
+        tracer = fit.tracer_linear_light_profiles_to_light_profiles
+
+        print(f"Tracer Linear Light Profiles time: {time.time() - start_time} seconds")
+
+        start_time = time.time()
+
+        visuals_2d_of_planes_list = tracer_util.visuals_2d_of_planes_list_from(
+            tracer=fit.tracer,
+            grid=fit.grids.lp.mask.derive_grid.all_false
+        )
+
+        print(f"Visuals 2D of planes list time: {time.time() - start_time} seconds")
+
+        start_time = time.time()
+
+        plotter_interface = PlotterInterfaceImaging(
+            image_path=paths.image_path,
+            title_prefix=analysis.title_prefix,
+        )
+
+        print(f"Plotter Interface Imaging time: {time.time() - start_time} seconds")
+
+        start = time.time()
+
+        try:
+            plotter_interface.fit_imaging(
+                fit=fit,
+                visuals_2d_of_planes_list=visuals_2d_of_planes_list,
+                quick_update=quick_update,
+            )
+        except exc.InversionException:
+            pass
+
+        print(f"Plotter Interface Fit Imaging time: {time.time() - start} seconds")
+
+        if quick_update:
+            return
+
+        # Full update based on configs.
+
         if analysis.positions_likelihood_list is not None:
 
             overwrite_file = True
@@ -111,8 +160,6 @@ def visualize(
             except exc.InversionException:
                 return
 
-        tracer = fit.tracer_linear_light_profiles_to_light_profiles
-
         zoom = ag.Zoom2D(mask=fit.mask)
 
         extent = zoom.extent_from(buffer=0)
@@ -120,24 +167,6 @@ def visualize(
 
         grid = ag.Grid2D.from_extent(extent=extent, shape_native=shape_native)
 
-        visuals_2d_of_planes_list = tracer_util.visuals_2d_of_planes_list_from(
-            tracer=fit.tracer,
-            grid=fit.grids.lp.mask.derive_grid.all_false
-        )
-
-        plotter_interface = PlotterInterfaceImaging(
-            image_path=paths.image_path,
-            title_prefix=analysis.title_prefix,
-        )
-
-        try:
-            plotter_interface.fit_imaging(
-                fit=fit,
-                visuals_2d_of_planes_list=visuals_2d_of_planes_list
-            )
-        except exc.InversionException:
-            pass
-
         plotter_interface.tracer(
             tracer=tracer,
             grid=grid,
diff --git a/autolens/imaging/plot/fit_imaging_plotters.py b/autolens/imaging/plot/fit_imaging_plotters.py
index ea0396258..8bd923fc4 100644
--- a/autolens/imaging/plot/fit_imaging_plotters.py
+++ b/autolens/imaging/plot/fit_imaging_plotters.py
@@ -568,7 +568,7 @@ def subplot_fit(self, plane_index: Optional[int] = None):
         self.set_title(label=None)
 
         self.mat_plot_2d.output.subplot_to_figure(
-            auto_filename=f"subplot_fit{plane_index_tag}"
+            auto_filename=f"subplot_fit{plane_index_tag}", also_show=self.mat_plot_2d.quick_update
         )
 
         self.close_subplot_figure()
diff --git a/autolens/interferometer/model/analysis.py b/autolens/interferometer/model/analysis.py
index aa9d11133..128149975 100644
--- a/autolens/interferometer/model/analysis.py
+++ b/autolens/interferometer/model/analysis.py
@@ -233,4 +233,4 @@ def save_attributes(self, paths: af.DirectoryPaths):
             dataset=self.dataset,
         )
 
-        analysis.save_attributes(paths=paths)
+        analysis.save_attributes(paths=paths)
\ No newline at end of file
diff --git a/autolens/interferometer/model/plotter_interface.py b/autolens/interferometer/model/plotter_interface.py
index dd889a272..fef75f04d 100644
--- a/autolens/interferometer/model/plotter_interface.py
+++ b/autolens/interferometer/model/plotter_interface.py
@@ -24,6 +24,7 @@ def fit_interferometer(
         self,
         fit: FitInterferometer,
         visuals_2d_of_planes_list: Optional[aplt.Visuals2D] = None,
+        quick_update: bool = False,
     ):
         """
         Visualizes a `FitInterferometer` object, which fits an interferometer dataset.
@@ -61,6 +62,9 @@ def should_plot(name):
         if should_plot("subplot_fit_dirty_images"):
             fit_plotter.subplot_fit_dirty_images()
 
+        if quick_update:
+            return
+
         if should_plot("subplot_fit_real_space"):
             fit_plotter.subplot_fit_real_space()
diff --git a/autolens/interferometer/model/visualizer.py b/autolens/interferometer/model/visualizer.py
index 1914c0a96..1abe6522a 100644
--- a/autolens/interferometer/model/visualizer.py
+++ b/autolens/interferometer/model/visualizer.py
@@ -58,6 +58,7 @@ def visualize(
         paths: af.DirectoryPaths,
         instance: af.ModelInstance,
         during_analysis: bool,
+        quick_update: bool = False,
     ):
         """
         Outputs images of the maximum log likelihood model inferred by the model-fit. This function is called
@@ -90,6 +91,24 @@ def visualize(
         """
         fit = analysis.fit_from(instance=instance)
 
+        visuals_2d_of_planes_list = tracer_util.visuals_2d_of_planes_list_from(
+            tracer=fit.tracer, grid=fit.grids.lp.mask.derive_grid.all_false
+        )
+
+        plotter_interface = PlotterInterfaceInterferometer(
+            image_path=paths.image_path, title_prefix=analysis.title_prefix
+        )
+
+        try:
+            plotter_interface.fit_interferometer(
+                fit=fit, visuals_2d_of_planes_list=visuals_2d_of_planes_list, quick_update=quick_update,
+            )
+        except exc.InversionException:
+            pass
+
+        if quick_update:
+            return
+
         if analysis.positions_likelihood_list is not None:
 
             overwrite_file = True
@@ -110,9 +129,6 @@ def visualize(
             except exc.InversionException:
                 return
 
-        visuals_2d_of_planes_list = tracer_util.visuals_2d_of_planes_list_from(
-            tracer=fit.tracer, grid=fit.grids.lp.mask.derive_grid.all_false
-        )
 
         tracer = fit.tracer_linear_light_profiles_to_light_profiles
 
@@ -123,13 +139,9 @@ def visualize(
 
         grid = ag.Grid2D.from_extent(extent=extent, shape_native=shape_native)
 
-        plotter_interface = PlotterInterfaceInterferometer(
-            image_path=paths.image_path, title_prefix=analysis.title_prefix
-        )
-
         try:
             plotter_interface.fit_interferometer(
-                fit=fit, visuals_2d_of_planes_list=visuals_2d_of_planes_list
+                fit=fit, visuals_2d_of_planes_list=visuals_2d_of_planes_list,
             )
         except exc.InversionException:
             pass
diff --git a/autolens/point/model/plotter_interface.py b/autolens/point/model/plotter_interface.py
index db3df0c43..425cd7a87 100644
--- a/autolens/point/model/plotter_interface.py
+++ b/autolens/point/model/plotter_interface.py
@@ -42,6 +42,7 @@ def should_plot(name):
     def fit_point(
         self,
         fit: FitPointDataset,
+        quick_update: bool = False,
     ):
         """
         Visualizes a `FitPointDataset` object, which fits an imaging dataset.
@@ -69,5 +70,8 @@ def should_plot(name):
 
         fit_plotter = FitPointDatasetPlotter(fit=fit, mat_plot_2d=mat_plot_2d)
 
-        if should_plot("subplot_fit"):
+        if should_plot("subplot_fit") or quick_update:
             fit_plotter.subplot_fit()
+
+        if quick_update:
+            return
\ No newline at end of file
diff --git a/autolens/point/model/visualizer.py b/autolens/point/model/visualizer.py
index 216421629..8b90c41df 100644
--- a/autolens/point/model/visualizer.py
+++ b/autolens/point/model/visualizer.py
@@ -38,6 +38,7 @@ def visualize(
         paths: af.DirectoryPaths,
         instance: af.ModelInstance,
         during_analysis: bool,
+        quick_update: bool = False,
    ):
         """
         Output images of the maximum log likelihood model inferred by the model-fit. This function is called throughout
@@ -68,7 +69,10 @@ def visualize(
             image_path=paths.image_path, title_prefix=analysis.title_prefix
         )
 
-        plotter_interface.fit_point(fit=fit)
+        plotter_interface.fit_point(fit=fit, quick_update=quick_update)
+
+        if quick_update:
+            return
 
         tracer = fit.tracer
diff --git a/docs/index.rst b/docs/index.rst
index ff447d815..721e2d812 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -7,7 +7,7 @@ What is PyAutoLens?
 When two or more galaxies are aligned perfectly down our line-of-sight, the background galaxy appears multiple times.
 
-This is called strong gravitational lensing and **PyAutoLens** makes it simple to model strong gravitational lenses.
+This is called strong gravitational lensing and **PyAutoLens** makes it simple to model strong gravitational lenses, using JAX to **accelerate lens modeling on GPUs**.
 
 Getting Started
 ===============
@@ -221,7 +221,6 @@ strong gravitational lensing.
    installation/overview
    installation/conda
    installation/pip
-   installation/numba
    installation/source
    installation/troubleshooting
diff --git a/docs/installation/numba.rst b/docs/installation/numba.rst
deleted file mode 100644
index 8d81ba55f..000000000
--- a/docs/installation/numba.rst
+++ /dev/null
@@ -1,83 +0,0 @@
-.. _numba:
-
-Numba
-=====
-
-Numba (https://numba.pydata.org) is an optional library which makes **PyAutoLens** run a lot faster, which we strongly
-recommend all users have installed.
-
-Certain functionality (pixelized source reconstructions, linear light profiles) is disabled without numba installed
-because it will have too slow run-times.
-
-However, some users have experienced difficulties installing numba, meaning they have been unable to try out
-PyAutoLens and determine if it the right software for them, before committing more time to installing numba
-successfully.
-
-For this reason, numba is an optional installation, so that users can easily experiment and learn
-the basic API.
-
-If you do not have numba installed, you can do so via pip as follows:
-
-.. code-block:: bash
-
-    pip install numba
-
-
-Troubleshooting (Conda)
------------------------
-
-Numba can be installed as part of your conda environment, with this version of numba used when you make the
-conda environment.
-
-If you cannot get numba to install in an existing conda environment you can try creating a new one from fresh,
-which is created with numba
-
-To install (or update) numba in conda use the following command:
-
-.. code-block:: bash
-
-    conda install numba
-
-When you create the conda environment run the following command:
-
-.. code-block:: bash
-
-    conda create -n autolens numba astropy scikit-image scikit-learn scipy
-
-You can then follow the standard conda installation instructions give here ``_
-
-Troubleshooting (Numpy)
------------------------
-
-The libraries ``numpy`` and ``numba`` can be installed with incompatible versions.
-
-An error message like the one below occurs when importing **PyAutoGalaxy**:
-
-.. code-block:: bash
-
-    Traceback (most recent call last):
-      File "", line 1, in
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/autolens/__init__.py", line 1, in
-        from autoarray import preprocess
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/autoarray/__init__.py", line 2, in
-        from . import type
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/autoarray/type.py", line 7, in
-        from autoarray.mask.mask_1d import Mask1D
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/autoarray/mask/mask_1d.py", line 8, in
-        from autoarray.structures.arrays import array_1d_util
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/autoarray/structures/arrays/array_1d_util.py", line 5, in
-        from autoarray import numba_util
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/autoarray/numba_util.py", line 2, in
-        import numba
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/numba/__init__.py", line 200, in
-        _ensure_critical_deps()
-      File "/home/jammy/venvs/PyAutoMay2/lib/python3.8/site-packages/numba/__init__.py", line 140, in _ensure_critical_deps
-        raise ImportError("Numba needs NumPy 1.21 or less")
-    ImportError: Numba needs NumPy 1.21 or less
-
-This can be fixed by reinstalling numpy with the version requested by the error message, in the example
-numpy 1.21 (you should replace the ``==1.21.0`` with a different version if requested).
-
-.. code-block:: bash
-
-    pip install numpy==1.21.0
\ No newline at end of file
diff --git a/docs/installation/overview.rst b/docs/installation/overview.rst
index c5917fc7c..c14d701cf 100644
--- a/docs/installation/overview.rst
+++ b/docs/installation/overview.rst
@@ -20,17 +20,10 @@ The installation guide for both approaches can be found at:
 
 Users who wish to build **PyAutoLens** from source (e.g. via a ``git clone``) should follow
 our `building from source installation guide `_.
-Known Issues
--------------
-
-There is a known issue installing **PyAutoLens** via both ``conda`` and ``pip`` associated with the libraries ``llvmlite``
-and ``numba``. If your installation raises an error mentioning either library, follow the instructions in
-our `troubleshooting section `_.
-
 Dependencies
 ------------
 
-**PyAutoLens** has the following dependencies:
+**PyAutoLens** uses the following parent packages:
 
 **PyAutoConf** https://github.com/rhayes777/PyAutoConf
 
@@ -40,30 +33,3 @@ Dependencies
 
 **PyAutoGalaxy** https://github.com/Jammy2211/PyAutoGalaxy
 
-**dynesty** https://github.com/joshspeagle/dynesty
-
-**emcee** https://github.com/dfm/emcee
-
-**PySwarms** https://github.com/ljvmiranda921/pyswarms
-
-**colossus**: https://bdiemer.bitbucket.io/colossus/
-
-**astropy** https://www.astropy.org/
-
-**corner.py** https://github.com/dfm/corner.py
-
-**matplotlib** https://matplotlib.org/
-
-**numba** https://github.com/numba/numba
-
-**numpy** https://numpy.org/
-
-**scipy** https://www.scipy.org/
-
-**scikit-image**: https://github.com/scikit-image/scikit-image
-
-**scikit-learn**: https://github.com/scikit-learn/scikit-learn
-
-And the following optional dependencies:
-
-**pynufft**: https://github.com/jyhmiinlin/pynufft
\ No newline at end of file
diff --git a/docs/installation/troubleshooting.rst b/docs/installation/troubleshooting.rst
index 91d338731..f6b46d312 100644
--- a/docs/installation/troubleshooting.rst
+++ b/docs/installation/troubleshooting.rst
@@ -3,11 +3,6 @@
 Troubleshooting
 ===============
 
-Numba
------
-
-Help for troubleshooting specifically numba is provided at `at this readthedocs page `_
-
 Pip Version
 -----------
 
@@ -23,7 +18,7 @@ Pip / Conda
 -----------
 
 If you are trying to `install via pip `_ but
-still haing issues, we recommend you try to `install via conda `_
+still having issues, we recommend you try to `install via conda `_
 instead, or visa versa.
 
 Support
diff --git a/docs/overview/overview_1_start_here.rst b/docs/overview/overview_1_start_here.rst
index 7869e186d..49232e00a 100644
--- a/docs/overview/overview_1_start_here.rst
+++ b/docs/overview/overview_1_start_here.rst
@@ -6,6 +6,8 @@ Start Here
 **PyAutoLens** is software for analysing strong gravitational lenses, an astrophysical phenomenon where a galaxy
 appears multiple times because its light is bent by the gravitational field of an intervening foreground lens galaxy.
 
+It uses JAX to accelerate lensing calculations, with all of the example code below running significantly faster on a GPU.
+
 Here is a schematic of a strong gravitational lens:
 
 .. image:: https://raw.githubusercontent.com/Jammy2211/PyAutoLens/main/docs/overview/images/overview_1/schematic.jpg
@@ -68,7 +70,7 @@ lens galaxy. We therefore need to ray-trace the ``Grid2D``'s coordinates from th
 This uses analytic functions representing a galaxy's light and mass distributions, referred to as ``LightProfile``
 and ``MassProfile`` objects.
 
-The most common light profile in Astronomy is the elliptical Sersic, which we create an instance of below:
+A common light profile in Astronomy is the elliptical Sersic, which we create an instance of below:
 
 .. code:: python
 
@@ -340,312 +342,71 @@ the stellar components use a ``LightAndMassProfile`` via the ``lmp`` module.
    :width: 600
    :alt: Alternative text
 
-Simulating Data
----------------
-
-The strong lens images above are **not** what we would observe if we looked at the sky through a telescope.
-In reality, images of strong lenses are observed using a telescope and detector, for example a CCD Imaging device
-attached to the Hubble Space Telescope.
+Simulator
+---------
 
-To make images that look like realistic Astronomy data, we must account for the effects like how the length of the
-exposure time change the signal-to-noise, how the optics of the telescope blur the galaxy's light and that
-there is a background sky which also contributes light to the image and adds noise.
+Let’s now switch gears and simulate our own strong lens imaging. This is a great way to:
 
-The ``SimulatorImaging`` object simulates this process, creating realistic CCD images of galaxies using the ``Imaging``
-object.
+- Practice lens modeling before using real data.
+- Build large training sets (e.g. for machine learning).
+- Test lensing theory in a controlled environment.
 
-.. code:: python
-
-    simulator = al.SimulatorImaging(
-        exposure_time=300.0,
-        background_sky_level=1.0,
-        psf=al.Kernel2D.from_gaussian(shape_native=(11, 11), sigma=0.1, pixel_scales=0.05),
-        add_poisson_noise_to_data=True,
-    )
+In this example, we simulate “perfect” images without telescope effects. This means no blurring
+from a PSF and no noise — just the raw light from galaxies and deflections from gravity.
 
-Once we have a simulator, we can use it to create an imaging dataset which consists of an image, noise-map and
-Point Spread Function (PSF) by passing it a galaxies and grid.
-
-This uses the tracer above to create the image of the galaxy and then add the effects that occur during data
-acquisition.
-
-This data is used below to illustrate model-fitting, so lets simulate a very simple image of a strong lens.
+In fact, this is exactly what the image above is: a perfect image of a double Einstein ring system. The only
+thing we need to do, then, is output it to a .fits file so we can load it elsewhere.
 
 .. code:: python
 
-    lens_galaxy = al.Galaxy(
-        redshift=0.5,
-        light=al.lp.Sersic(
-            centre=(0.0, 0.0),
-            ell_comps=(
-                0.2,
-                0.1,
-            ),
-            intensity=0.005,
-            effective_radius=2.0,
-            sersic_index=4.0,
-        ),
-        mass=al.mp.Isothermal(centre=(0.0, 0.0), ell_comps=(0.1, 0.0), einstein_radius=1.6),
-    )
-
-    source_galaxy = al.Galaxy(
-        redshift=1.0,
-        light=al.lp.Exponential(
-            centre=(0.3, 0.2), ell_comps=(0.1, 0.0), intensity=0.1, effective_radius=0.5
-        ),
+    al.output_to_fits(
+        values=image.native,
+        file_path=Path("image.fits"),
+        overwrite=True,
     )
-
-    tracer = al.Tracer(galaxies=[lens_galaxy, source_galaxy], cosmology=al.cosmo.Planck15())
-
-    dataset = simulator.via_tracer_from(tracer=tracer, grid=grid)
-
-Observed Dataset
-----------------
-
-We now have an ``Imaging`` object, which is a realistic representation of the data we observe with a telescope.
-
-We use the ``ImagingPlotter`` to plot the dataset, showing that it contains the observed image, but also other
-import dataset attributes like the noise-map and PSF.
-
-.. code:: python
-
-    dataset_plotter = aplt.ImagingPlotter(dataset=dataset)
-    dataset_plotter.figures_2d(data=True)
-
-.. image:: https://raw.githubusercontent.com/Jammy2211/PyAutoLens/main/docs/overview/images/overview_1/11_data.png
-    :width: 600
-    :alt: Alternative text
-
-If you have come to **PyAutoLens** to perform interferometry, the API above is easily adapted to use
-a ``SimulatorInterferometer`` object to simulate an ``Interferometer`` dataset instead.
-
-However, you should finish reading this notebook before moving on to the interferometry examples, to get a full
-overview of the core **PyAutoLens** API.
-
-Masking
+Samples
 -------
-
-We are about to fit the data with a model, but first must define a mask, which defines the regions of the image that
-are used to fit the data and which regions are not.
-
-We create a ``Mask2D`` object which is a 3.0" circle, whereby all pixels within this 3.0" circle are used in the
-model-fit and all pixels outside are omitted.
-
-Inspection of the dataset above shows that no signal from the strong lens is observed outside of this radius, so
-this is a sensible mask.
-
-.. code:: python
-
-    mask = al.Mask2D.circular(
-        shape_native=dataset.shape_native,  # The mask's shape must match the dataset's to be applied to it.
-        pixel_scales=dataset.pixel_scales,  # It must also have the same pixel scales.
-        radius=3.0,  # The mask's circular radius [units of arc-seconds].
-    )
-
-Combine the imaging dataset with the mask.
-
-.. code:: python
-
-    dataset = dataset.apply_mask(mask=mask)
-
-When we plot a masked dataset, the removed regions of the image (e.g. outside the 3.0") are automatically set to zero
-and the plot axis automatically zooms in around the mask.
-
-.. code:: python
-
-    dataset_plotter = aplt.ImagingPlotter(dataset=dataset)
-    dataset_plotter.figures_2d(data=True)
-
-.. image:: https://raw.githubusercontent.com/Jammy2211/PyAutoLens/main/docs/overview/images/overview_1/12_data.png
-    :width: 600
-    :alt: Alternative text
-
-
-Fitting
-_______
-
-We are now at the point a scientist would be after observing a strong lens - we have an image of it, have used to a
-mask to determine where we observe signal from the galaxy, but cannot make any quantitative statements about its
-mass or source morphology.
+Often we want to simulate *many* strong lenses — for example, to train a neural network
+or to explore population-level statistics.
 
-We therefore must now fit a model to the data. This model is a representation of the lens galaxy's light and mass and
-source galaxy's light. We seek a way to determine whether a given model provides a good fit to the data.
+This uses the model composition API to define the distribution of the light and mass profiles
+of the lens and source galaxies we draw from. The model composition is a little too complex for
+the first example, so we use a helper function to create a simple lens and source model.
 
-A fit is performing using a ``FitImaging`` object, which takes a dataset and tracer object as input and determine if
-the galaxies are a good fit to the data.
+We then generate 3 lenses for speed, and plot their images so you can see the variety of lenses
+we create.
 
-.. code:: python
-
-    fit = al.FitImaging(dataset=dataset, tracer=tracer)
-
-The fit creates ``model_data``, which is the image of the strong lens including effects which change its appearance
-during data acquisition.
+If you want to simulate lenses yourself (e.g. for training a neural network), check out the
+`autolens_workspace/simulators` package for a full description of how to do this and customize
+the simulated lenses to your science.
 
-For example, by plotting the fit's ``model_data`` and comparing it to the image of the strong lens obtained via
-the ``TracerPlotter``, we can see the model data has been blurred by the dataset's PSF.
+The images below are perfect images of strong lenses; the next examples will show us how to
+instead output realistic observations of strong lenses (e.g. CCD imaging, interferometer data, etc.).
 
 .. code:: python
 
-    tracer_plotter = aplt.TracerPlotter(tracer=fit.tracer, grid=grid)
-    tracer_plotter.figures_2d(image=True)
+    lens_model, source_model = al.model_util.simulator_start_here_model_from()
 
-    fit_plotter = aplt.FitImagingPlotter(fit=fit)
-    fit_plotter.figures_2d(model_image=True)
+    total_datasets = 3
 
-.. image:: https://raw.githubusercontent.com/Jammy2211/PyAutoLens/main/docs/overview/images/overview_1/13_image_2d.png
-    :width: 400
-    :alt: Alternative text
-
-.. image:: https://raw.githubusercontent.com/Jammy2211/PyAutoLens/main/docs/overview/images/overview_1/14_image_2d.png
-    :width: 400
-    :alt: Alternative text
+    for sample_index in range(total_datasets):
 
-The fit also creates the following:
+        lens_galaxy = lens_model.random_instance()
+        source_galaxy = source_model.random_instance()
 
-    - The ``residual_map``: The ``model_image`` subtracted from the observed dataset``s ``image``.
-    - The ``normalized_residual_map``: The ``residual_map ``divided by the observed dataset's ``noise_map``.
-    - The ``chi_squared_map``: The ``normalized_residual_map`` squared.
+        tracer = al.Tracer(galaxies=[lens_galaxy, source_galaxy])
 
-We can plot all 3 of these on a subplot that also includes the data, signal-to-noise map and model data.
+        tracer_plotter = aplt.TracerPlotter(tracer=tracer, grid=grid)
+        tracer_plotter.figures_2d(image=True)
 
-In this example, the tracer used to simulate the data are used to fit it, thus the fit is good and residuals are minimized.
-
-.. code:: python
-
-    fit_plotter.subplot_fit()
-
-The overall quality of the fit is quantified with the ``log_likelihood``.
-
-.. code:: python
-
-    print(fit.log_likelihood)
-
-If you are familiar with statistical analysis, this quick run-through of the fitting tools will make sense and you
-will be familiar with concepts like model data, residuals and a likelihood.
-
-If you are less familiar with these concepts, I recommend you finish this notebook and then go to the fitting API
-guide, which explains the concepts in more detail and provides a more thorough overview of the fitting tools.
-
-The take home point is that **PyAutoLens**'s API has extensive tools for fitting models to data and visualizing the
-results, which is what makes it a powerful tool for studying the morphologies of galaxies.
-
-Modeling
---------
-
-The fitting tools above are used to fit a model to the data given an input set of galaxies. Above, we used the true
-galaxies used to simulate the data to fit the data, but we do not know what this "truth" is in the real world and
-is therefore not something a real scientist can do.
-
-Modeling is the processing of taking a dataset and inferring the model that best fits the data, for example
-the galaxy light and mass profile(s) that best fits the light observed in the data or equivalently the combination
-of Sersic profile parameters that maximize the likelihood of the fit.
-
-Lens modeling uses the probabilistic programming language **PyAutoFit**, an open-source project that allows complex
-model fitting techniques to be straightforwardly integrated into scientific modeling software. Check it out if you
-are interested in developing your own software to perform advanced model-fitting:
-
-https://github.com/rhayes777/PyAutoFit
-
-We import **PyAutoFit** separately to **PyAutoLens**:
-
-.. code:: python
-
-    import autofit as af
-
-We now compose the galaxy model using ``af.Model`` objects.
-
-These behave analogously to the ``Galaxy``, ``LightProfile`` and ``MassProfile`` objects above, however their parameters
-are not specified and are instead determined by a fitting procedure.
-
-We will fit our galaxy data with a model which has one galaxy where:
-
-We will fit our strong lens data with two galaxies:
-
-- A lens galaxy with a ``Sersic`` ``LightProfile`` representing its light and an ``Isothermal`` ``MassProfile`` representing its mass.
-- A source galaxy with an ``Exponential`` ``LightProfile`` representing a disk.
-
-The redshifts of the lens (z=0.155) and source(z=0.517) are fixed, but as discussed above their values do not
-matter for a two-plane lens system because the units of angles in arc-seconds are independent of the redshifts.
-
-The light profiles below are linear light profiles, input via the ``lp_linear`` module. These solve for the intensity of
-the light profiles via linear algebra, making the modeling more efficient and accurate. They are explained in more
-detail in other workspace examples, but are a key reason why modeling with **PyAutoLens** performs well and
-can scale to complex models.
-
-.. code:: python
-
-    galaxy_model = af.Model(
-        al.Galaxy,
-        redshift=0.5,
-        bulge=al.lp_linear.Sersic,
-        disk=al.lp_linear.Exponential,
-    )
-
-    lens = af.Model(
-        al.Galaxy,
-        redshift=0.155,
-        bulge=al.lp_linear.Sersic,  # Note the use of ``lp_linear`` instead of ``lp``.
-        mass=al.mp.Isothermal,  # This uses linear light profiles explained in the modeling ``start_here`` example.
-    )
-
-    source = af.Model(al.Galaxy, redshift=0.517, disk=al.lp_linear.Exponential)
-
-We combine the lens and source model galaxies above into a ``Collection``, which is the model we will fit.
-
-Note how we could easily extend this object to compose highly complex models containing many galaxies.
-
-.. code:: python
-
-    model = af.Collection(galaxies=af.Collection(lens=lens, source=source))
-
-By printing the ``Model``'s we see that each parameters has a prior associated with it, which is used by the
-model-fitting procedure to fit the model.
-
-.. code:: python
-
-    print(model)
-
-The ``info`` attribute shows the model information in a more readable format:
-
-.. code:: python
-
-    print(model.info)
-
-We now choose the 'non-linear search', which is the fitting method used to determine the light profile parameters that
-best-fit the data.
-
-In this example we use [nautilus](https://nautilus-sampler.readthedocs.io/en/stable/), a nested sampling algorithm
-that in our experience has proven very effective at galaxy modeling.
-
-.. code:: python
-
-    search = af.Nautilus(name="start_here")
-
-To perform the model-fit, we create an ``AnalysisImaging`` object which contains the ``log_likelihood_function`` that the
-non-linear search calls to fit the galaxy model to the data.
-
-The ``AnalysisImaging`` object is expanded on in the modeling ``start_here`` example, but in brief performs many useful
-associated with modeling, including outputting results to hard-disk and visualizing the results of the fit.
-
-.. code:: python
-
- analysis = al.AnalysisImaging(dataset=dataset)
-
-To perform the model-fit we pass the model and analysis to the search's fit method. This will output results (e.g.,
-Nautilus samples, model parameters, visualization) to your computer's storage device.
-
-However, the lens modeling of this system takes a minute or so. Therefore, to save time, we have commented out
-the ``fit`` function below so you can skip through to the next section of the notebook. Feel free to uncomment the code
-and run the galaxy modeling yourself!
-
-Once a model-fit is running, **PyAutoLens** outputs the results of the search to storage device on-the-fly. This
-includes galaxy model parameter estimates with errors non-linear samples and the visualization of the best-fit galaxy
-model inferred by the search so far.
-
-.. code:: python
+Lens Modeling
+-------------

- result = search.fit(model=model, analysis=analysis)
+Lens modeling is the process of taking data on a strong lens and fitting it with a model, to infer the properties
+of the lens and source galaxies.

The animation below shows a slide-show of the lens modeling procedure. Many lens models are fitted to the data over
and over, gradually improving the quality of the fit to the data and looking more and more like the observed
image.

@@ -660,36 +421,14 @@ iterations are performed.

**Credit: Amy Etherington**

-Results
-------
-
-The fit returns a ``Result`` object, which contains the best-fit galaxies and the full posterior information of the
-non-linear search, including all parameter samples, log likelihood values and tools to compute the errors on the
-galaxy model.
- -Using results is explained in full in the ``guides/results`` section of the workspace, but for a quick illustration -the commented out code below shows how easy it is to plot the fit and posterior of the model. - -.. code:: python - - fit_plotter = aplt.FitImagingPlotter(fit=result.max_log_likelihood_fit) - fit_plotter.subplot_fit() - - plotter = aplt.NestPlotter(samples=result.samples) - plotter.corner_cornerpy() - -Here is an example corner plot of the model-fit, which shows the probability density function of every parameter in the -model: - -.. image:: https://raw.githubusercontent.com/Jammy2211/PyAutoLens/main/docs/overview/images/overview_1/cornerplot.png - :width: 600 - :alt: Alternative text +**PyAutoLens**'s main goal is to make lens modeling **simple** for everyone, **scale** to large datasets +and **run very fast** thanks to GPU acceleration via JAX. Wrap Up ------- We have now completed the API overview of **PyAutoLens**, including a brief introduction to the core API for -creating galaxies, simulating data, fitting data and performing galaxy modeling. +creating galaxies, simulating data and performing lens modeling. The next overview describes how a new user should navigate the **PyAutoLens** workspace, which contains many examples and tutorials, in order to get up and running with the software. \ No newline at end of file diff --git a/docs/overview/overview_2_new_user_guide.rst b/docs/overview/overview_2_new_user_guide.rst index 101673130..2cb501fd4 100644 --- a/docs/overview/overview_2_new_user_guide.rst +++ b/docs/overview/overview_2_new_user_guide.rst @@ -5,39 +5,58 @@ New User Guide **PyAutoLens** is an extensive piece of software with functionality for doing many different analysis tasks, fitting different data types and it is used for a variety of different science cases. This means the documentation is quite -extensive, and it may be difficult to find the example script you need. 
+extensive, and it may initially be difficult to find the example script you need.
-This page provides a sequential guide for news users on how to begin learning **PyAutoLens**, and can act as a useful
-resource for existing users who are looking for how to do a specific task.
+The ``autolens_workspace`` has five ``start_here.ipynb`` notebooks, and you need to determine which is most relevant
+to your scientific interests:
-Before starting this guide, you should ensure you have installed **PyAutoLens** and downloaded the ``autolens_workspace``
-by following the `installation guide `_.
+ - ``start_here_imaging.ipynb``: Galaxy scale strong lenses observed with CCD imaging (e.g. Hubble, James Webb).
+ - ``start_here_interferometer.ipynb``: Galaxy scale strong lenses observed with interferometer data (e.g. ALMA).
+ - ``start_here_point_source.ipynb``: Galaxy scale strong lenses with a lensed point source (e.g. lensed quasars).
+ - ``start_here_group.ipynb``: Group scale strong lenses where there are 2-10 lens galaxies.
+ - ``start_here_cluster.ipynb``: Cluster scale strong lenses with 2+ lenses and 5+ source galaxies.
-Contents
--------
+If you are still unsure based on the brief descriptions above, answer the following two questions to work out
+where to start.
-One line summaries of each step in the new user guide is given below, to give you a sense of what you are going to learn:
+What Lens Scale?
+----------------
-**1) Workspace:** Read the ``start_here.ipynb`` workspace example for a quick run through of the core API for lensing.
-**2) HowToLens?**: Whether you should begin with lectures aimed at inexperienced scientists (e.g. under graduate students).
+What size and scale of strong lens system are you expecting to work with?
+There are three scales to choose from:
-1) Workspace
------------
+- **Galaxy Scale**: Made up of a single lens galaxy lensing a single source galaxy, the simplest strong lens you can get!
+ If you're interested in galaxy scale lenses, go to the question below called "What Dataset Type?".
-You should now have the ``autolens_workspace`` on your computer and see many of the folder and files we'll begin
-navigating.
+- **Group Scale**: Strong lens groups contain 2-10 lens galaxies, normally with one main large galaxy responsible for the majority of lensing.
+ They also typically lens just one source galaxy. If you are interested in groups, go to the ``start_here_group.ipynb`` notebook.
-First of all, if you have not already, you should read the `autolens_workspace/start_here.ipynb` notebook,
-which provides a run through of the core API for gravitational lensing calculations and lens modeling.
+- **Cluster Scale**: Strong lens galaxy clusters often contain 20-50, or more, lens galaxies, lensing 10, or more, source galaxies.
+ If you are interested in clusters, go to the ``start_here_cluster.ipynb`` notebook.
-GitHub Links:
+What Dataset Type?
+------------------
+
+If you are interested in galaxy-scale strong lenses, you now need to decide what type of strong lens data you are
+interested in:
+
+- **CCD Imaging**: For image data from telescopes like Hubble and James Webb, go to ``start_here_imaging.ipynb``.
-https://github.com/Jammy2211/autolens_workspace/tree/release
+- **Interferometer**: For radio / sub-mm interferometer data from instruments like ALMA, go to ``start_here_interferometer.ipynb``.
-2) HowToLens?
+- **Point Sources**: For strongly lensed point sources (e.g. lensed quasars, supernovae), go to ``start_here_point_source.ipynb``.
+
+Still Unsure?
-------------
+Each notebook is short and self-contained, and can be completed and adapted quickly to your particular task.
+Therefore, if you're unsure exactly which scale of lensing applies to you, or what type of data you want to use, you
+should just read through a few different notebooks and go from there.
+ +HowToLens +--------- + For experienced scientists, the **PyAutoLens** examples will be simple to follow. Concepts surrounding strong lensing may already be familiar and the statistical techniques used for fitting and modeling already understood. @@ -57,197 +76,6 @@ GitHub Links: https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/howtolens -3) Configs ----------- - -The ``autolens_workspace/config`` folder contains numerous .YAML configuration files which customization many -default settings of **PyAutoLens**. - -Documentation for all config settings are provided within each config file. - -New users should not worry about the majority of configs for now. However, the ``config/visualize`` folder contains -config files which customization ``matplotlib`` visualization, and editing these now will ensure figures and -images display optimally in your Jupyter Notebooks. - -All default ``matplotlib`` options are customized via the `mat_wrap.yaml`, `mat_wrap_1d.yaml` and `mat_wrap_2d.yaml` files -in `autolens_workspace/config/visualize/mat_wrap`. For example, if figures display with labels that are too big -or small, you can adjust their default labelsizes by changing the following options: - - - mat_wrap.yaml -> Figure -> figure: -> figsize - - mat_wrap.yaml -> YLabel -> figure: -> fontsize - - mat_wrap.yaml -> XLabel -> figure: -> fontsize - - mat_wrap.yaml -> TickParams -> figure: -> labelsize - - mat_wrap.yaml -> YTicks -> figure: -> labelsize - - mat_wrap.yaml -> XTicks -> figure: -> labelsize - -The default colormap can be changed from the default to your favour ``matplotlib`` colormap, but adjusting: - - - mat_wrap.yaml -> Cmap -> figure -> cmap - -All settings have a ``figure`` and ``subplot`` option, so that single image ``figures`` and a subplot of multiple -figures can be customized independently. 
- -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/config -https://github.com/Jammy2211/autolens_workspace/tree/release/config/visualize - -4) Dataset Type ---------------- - -**PyAutoLens** supports multiple different data types, and you as a user likely only require to learn how to use -the software to analyse one type of dataset. - -Therefore, you now need to assess which dataset type is relevant to you (``imaging``, ``interferometer``, ``point_source`` or ``group``).: - -- **Imaging**: CCD imaging data (e.g. from the Hubble Space Telescope or James Webb Space Telescope), in which case -you will go to the ``imaging`` packages in the workspace. - -- **Interferometry**: Interferometer data from a submm or radio interferometer (e.g. ALMA or JVLA), in which case -you will go to the ``interferometer`` packages in the workspace. - -- **Point Source**: Data of a lensed point source (e.g. a quasar or supernovae) where analysis is in the point source regime, -in which case you will go to the ``point_source`` packages in the workspace. - -The scale of your strong lens is also important. Most examples assume that your strong lens is galaxy-scale, meaning that -there is only one lens galaxy and one lensed source. For these systems the Einstein radius is typically below 5.0". - -A **group** scale lens is one where the lens has multiple galaxies responsible for the lensing, and all of their -mass must be modeled for an accurate analysis. Group scale lenses often have multiple sources and Einstein Radii -above 5.0". - -If you are modeling group-scale data, you should go to the ``group`` packages in the workspace. - -5) API and Units Guides ------------------------ - -The ``autolens_workspace/guides`` package has many useful guides, including concise API reference guides (``guides/api``) -and unit conversion guides (``guides/units``). - -Quickly navigate to this part of the workspace and skim read the guides quickly. 
You do not need to understand them in detail now -so don't spend long reading them. - -**If your dataset type is a point source, you should read the ``guides/point_source.ipynb`` guide now, which covers many details of point source modeling.** - -The purpose of looking at them now is you know they exist and can refer to them if you get stuck using **PyAutoLens**. - -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/guides -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/guides/api -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/guides/units - -6) Simulations --------------- - -Learning how to simulate your type of data is the best way to understanding how to analyse it. - -Therefore, in the ``autolens_workspace/simulators`` folder, find the ``start_here.ipynb`` of your dataset. - -For example, if your dataset type is CCD imaging data, you'll read the notebook ``autolens_workspace/simulators/imaging/start_here.ipynb``. - -Your **PyAutoLens** use case might only require you to be able to simulate strong lenses, for example if you are -training a neural network. In this case, you can stop the guide and use the tools in the ``simulators`` package -to start doing your science! - -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/simulators - -7) Modeling ------------ - -Having simulated a dataset, you are now ready to learn how to model it. - -Therefore, in the ``autolens_workspace/modeling`` folder, find the ``start_here.ipynb`` of your dataset. - -For example, if your dataset type is CCD imaging data, you'll read the notebook ``autolens_workspace/modeling/imaging/start_here.ipynb``. - -Your **PyAutoLens** use case might only require you to be able to model simulated strong lenses, for example if you are -investigating what lens models can be used to learn from strong lenses. 
In this case, you can skip the data preparation -step below and go straight to learning about results. - -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/modeling - -8) Data Preparation -------------------- - -If you have real observations of strong lenses you want to model, you need to prepare the data so that it -is appropriate for **PyAutoLens**. - -This includes reducing the data so the strong lens is in the centre of the image, making sure all units -are defined correctly and reducing extra data products like the Point Spread Function for CCD imaging data. - -Therefore, in the ``autolens_workspace/data_preparation`` folder, find the ``start_here.ipynb`` of your dataset. - -For example, if your dataset type is CCD imaging data, you'll read the notebook ``autolens_workspace/data_preparation/imaging/start_here.ipynb``. - -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/data_preparation - -9) Results ----------- - -Lens modeling infers many results, including parameter estimates, posteriors and a Bayesian evidence of the model. -Furthermore, you may wish to inspect the results, the quality of the fit and produce visuals to determine -if you think its a good fit. - -Therefore, now read the ``autolens_workspace/*/results/start_here.ipynb`` notebook. - -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/results - -10) Plotting ------------- - -**PyAutoLens** has an in depth visualizaiton library that allows for high levels of customization via ``matplotlib``. - -Plotting has its own dedicated API, which you should become familiar with via the example ``autolens_workspace/*/plot/start_here.ipynb``. 
- -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/blob/main/notebooks/plot/start_here.ipynb - -11) Features ------------- - -You now have a comprehensive understanding of the **PyAutoLens** API and how to use it to simulate, model and -plot your data. - -**PyAutoLens** has many more features, which may or may not be useful for your science case. - -Example notebooks for every feature are provided in the ``autolens_workspace/*/features`` package and a high-level -summary of each feature is provided on the next page of this readthedocs. - -What features you need depend on many factors: (i) your science case; (ii) the quality of your data; (iii) how -much time you are willing to invest in learning **PyAutoLens**. We recommend you read the literature in conjunction -with assessing what features are available, and then make an informed decision on what is appropriate for you. - -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/features - -12) Advanced ------------- - -The ``autolens_workspace/*/advanced`` folder has numerous advanced examples which only a user experienced with -**PyAutoLens** should use. - -These include examples of how to fit multiple datasets simultaneously (e.g. multi-wavelength CCD imaging datasets), -automated pipelines for modeling large lens samples (called the Source, Light and Mass (SLaM) pipelines in the -literature) and a step-by-step guide of the **PyAutoLens** likelihood function. - -New users should ignore this folder for now, but note that you may find it has important functionality for -your science research in a couple of months time once you are experienced with **PyAutoLens**! 
- -GitHub Links: - -https://github.com/Jammy2211/autolens_workspace/tree/release/notebooks/advanced - Wrap Up ------- diff --git a/test_autolens/config/general.yaml b/test_autolens/config/general.yaml index 74f6e5655..f13037268 100644 --- a/test_autolens/config/general.yaml +++ b/test_autolens/config/general.yaml @@ -8,7 +8,7 @@ grid: remove_projected_centre: false hpc: hpc_mode: false - iterations_per_update: 5000 + iterations_per_full_update: 5000 adapt: adapt_minimum_percent: 0.01 adapt_noise_limit: 100000000.0 diff --git a/test_autolens/config/non_linear.yaml b/test_autolens/config/non_linear.yaml index 3fb607c33..31784457d 100644 --- a/test_autolens/config/non_linear.yaml +++ b/test_autolens/config/non_linear.yaml @@ -4,18 +4,14 @@ mock: method: prior printing: silence: false - updates: - iterations_per_update: 2500 - remove_state_files_at_end: true + MockSearch: initialize: method: prior printing: silence: false search: {} - updates: - iterations_per_update: 2500 - remove_state_files_at_end: true + nest: DynestyDynamic: general: @@ -39,9 +35,7 @@ nest: number_of_cores: 1 printing: silence: false - updates: - iterations_per_update: 2500 - remove_state_files_at_end: true + DynestyStatic: parallel: number_of_cores: 1 @@ -78,6 +72,4 @@ nest: stagger_resampling_likelihood: true verbose: false write_output: true - updates: - iterations_per_update: 2500 - remove_state_files_at_end: true +