diff --git a/notebooks/cookbooks/analysis.ipynb b/notebooks/cookbooks/analysis.ipynb index 0f8aabf9..f7e3594b 100644 --- a/notebooks/cookbooks/analysis.ipynb +++ b/notebooks/cookbooks/analysis.ipynb @@ -119,7 +119,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/cookbooks/configs.ipynb b/notebooks/cookbooks/configs.ipynb index 77a6d343..b1528b44 100644 --- a/notebooks/cookbooks/configs.ipynb +++ b/notebooks/cookbooks/configs.ipynb @@ -21,7 +21,7 @@ "__Contents__\n", "\n", " - No Config Behaviour: An example of what happens when a model component does not have a config file.\n", - " - Template: A template config file for specifying default model component priors.\n", + " - Templates: A template config file for specifying default model component priors.\n", " - Modules: Writing prior config files based on the Python module the model component Python class is contained in.\n", " - Labels: Config files which specify the labels of model component 
parameters for visualization." ] @@ -112,7 +112,33 @@ "source": [ "model = af.Model(GaussianNoConfig)\n", "\n", - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/cookbooks/model.ipynb b/notebooks/cookbooks/model.ipynb index fcfaa404..fdca2e1e 100644 --- a/notebooks/cookbooks/model.ipynb +++ b/notebooks/cookbooks/model.ipynb @@ -51,7 +51,8 @@ " - Instances (af.Array): Create an instance of a numpy array model via input parameters.\n", " - Model Customization (af.Array): Customize a numpy array model (e.g. fixing parameters or linking them to one another).\n", " - Json Output (af.Array): Output a numpy array model in human readable text via a .json file and loading it back again.\n", - " - Extensible Models (af.Array): Using numpy arrays to compose models with a flexible number of parameters." + " - Extensible Models (af.Array): Using numpy arrays to compose models with a flexible number of parameters.\n", + " - Wrap Up: A summary of model composition in PyAutoFit." 
] }, { diff --git a/notebooks/cookbooks/model_internal.ipynb b/notebooks/cookbooks/model_internal.ipynb new file mode 100644 index 00000000..f8f96342 --- /dev/null +++ b/notebooks/cookbooks/model_internal.ipynb @@ -0,0 +1,1263 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Cookbook: Model Internals\n", + "=========================\n", + "\n", + "The model composition cookbooks show how to compose models and fit them to data using the high level **PyAutoFit** API\n", + "(e.g. ``af.Model``, ``af.Collection``, ``af.Array``).\n", + "\n", + "Under the hood, **PyAutoFit** uses a recursive tree structure to map parameter vectors to Python class instances. This\n", + "cookbook exposes that internal machinery, which is useful when:\n", + "\n", + " - You need to programmatically inspect or manipulate models at a lower level than the standard API.\n", + " - You are building custom non-linear search integrations or analysis pipelines.\n", + " - You want to pass information between model-fitting stages (prior passing, model replacement).\n", + " - You want to understand how **PyAutoFit** works under the hood so you can debug and extend it.\n", + "\n", + "__Contents__\n", + "\n", + " - Prior Identity And Shared Priors: How linked parameters work via Python object identity.\n", + " - Prior Tuples And Ordering: The canonical ordering of priors that defines the parameter vector layout.\n", + " - Argument Dictionaries: Building ``{Prior: value}`` dictionaries and using ``instance_for_arguments`` directly.\n", + " - Model Tree Navigation: Using ``paths``, ``object_for_path``, and ``path_instance_tuples_for_class`` to traverse models.\n", + " - Creating New Models From Existing Ones: ``mapper_from_prior_arguments``, ``take_attributes``, ``from_instance``.\n", + " - Model Subsetting: ``with_paths`` and ``without_paths`` to extract sub-models.\n", + " - Freezing For Performance: Caching repeated lookups during fitting.\n", + " - Serialization Round 
Trip: ``dict()`` and ``from_dict()`` for JSON persistence." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "%matplotlib inline\n", + "from pyprojroot import here\n", + "workspace_path = str(here())\n", + "%cd $workspace_path\n", + "print(f\"Working Directory has been set to `{workspace_path}`\")\n", + "\n", + "import autofit as af\n", + "import numpy as np" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Setup__\n", + "\n", + "We define two simple model component classes to use throughout this cookbook." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "\n", + "\n", + "class Gaussian:\n", + " def __init__(\n", + " self,\n", + " centre: float = 30.0,\n", + " normalization: float = 1.0,\n", + " sigma: float = 5.0,\n", + " ):\n", + " self.centre = centre\n", + " self.normalization = normalization\n", + " self.sigma = sigma\n", + "\n", + "\n", + "class Exponential:\n", + " def __init__(\n", + " self,\n", + " centre: float = 30.0,\n", + " normalization: float = 1.0,\n", + " rate: float = 0.01,\n", + " ):\n", + " self.centre = centre\n", + " self.normalization = normalization\n", + " self.rate = rate\n" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Prior Identity And Shared Priors__\n", + "\n", + "In **PyAutoFit**, every free parameter is represented by a ``Prior`` object. When you create a model, each constructor\n", + "argument gets its own ``Prior`` instance with a unique ``id``.\n", + "\n", + "The key insight is that **linked parameters share the same ``Prior`` object** (same Python reference). This means the\n", + "non-linear search treats them as a single dimension." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Model(Gaussian)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Each parameter is a ``Prior`` with a unique id:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(\"centre prior id:\", model.centre.id)\n", + "print(\"normalization prior id:\", model.normalization.id)\n", + "print(\"sigma prior id:\", model.sigma.id)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now link two parameters by assigning one to the other:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model.centre = model.normalization" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After linking, ``centre`` and ``normalization`` are the exact same ``Prior`` object:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(\"\\nAfter linking:\")\n", + "print(\"centre is normalization:\", model.centre is model.normalization)\n", + "print(\"centre prior id:\", model.centre.id)\n", + "print(\"normalization prior id:\", model.normalization.id)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This reduces the dimensionality of the model by 1:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(\"Free parameters:\", model.total_free_parameters) # 2, not 3" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When the model is instantiated, both ``centre`` and ``normalization`` receive the same value from the parameter vector.\n", + "\n", + "Let's verify this directly:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "instance = 
model.instance_from_vector(vector=[10.0, 5.0])\n", + "print(\"\\ncentre =\", instance.centre)\n", + "print(\"normalization =\", instance.normalization)\n", + "print(\"sigma =\", instance.sigma)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Both ``centre`` and ``normalization`` are 10.0 because they share the same prior, and that prior was mapped to the\n", + "first element of the vector.\n", + "\n", + "__Prior Tuples And Ordering__\n", + "\n", + "The model maintains several views of its priors as lists of ``(name, prior)`` tuples. Understanding these is essential\n", + "for working with the internal API.\n", + "\n", + "Let's create a fresh model to explore:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Collection(gaussian=Gaussian, exponential=Exponential)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``prior_tuples`` returns ALL ``(name, Prior)`` pairs, recursively traversing the model tree. If a prior appears in\n", + "multiple places (linked), it appears multiple times here:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(\"prior_tuples:\")\n", + "for name, prior in model.prior_tuples:\n", + " print(f\" {name}: id={prior.id}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``unique_prior_tuples`` deduplicates by prior identity. For an unlinked model, this is the same as ``prior_tuples``:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(f\"\\nunique_prior_tuples count: {len(model.unique_prior_tuples)}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``prior_tuples_ordered_by_id`` is the **canonical ordering** that defines the layout of the parameter vector. 
Priors\n", + "are sorted by their ``id`` attribute (an auto-incrementing integer assigned at creation time):" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(\"\\nprior_tuples_ordered_by_id (defines vector layout):\")\n", + "for i, (name, prior) in enumerate(model.prior_tuples_ordered_by_id):\n", + " print(f\" vector[{i}] -> {name} (prior id={prior.id})\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The ``paths`` property gives the same ordering but as tuple-paths through the model tree:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(\"\\npaths:\")\n", + "for path in model.paths:\n", + " print(f\" {path}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The distinction between ``direct_prior_tuples`` and ``prior_tuples`` is important:\n", + "\n", + " - ``direct_prior_tuples``: Only priors that are direct attributes of THIS model level (not nested).\n", + " - ``prior_tuples``: All priors recursively, including those inside child Models and Collections." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(f\"\\nCollection direct_prior_tuples: {len(model.direct_prior_tuples)}\")\n", + "print(f\"Collection prior_tuples: {len(model.prior_tuples)}\")\n", + "print(f\"Gaussian direct_prior_tuples: {len(model.gaussian.direct_prior_tuples)}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Argument Dictionaries__\n", + "\n", + "The core of the model mapping system is the ``{Prior: value}`` dictionary, called ``arguments``. Every method\n", + "that creates an instance ultimately works through this dictionary.\n", + "\n", + "You can build one manually and call ``instance_for_arguments`` directly. 
This is useful when you need fine-grained\n", + "control over how instances are created." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Model(Gaussian)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Step 1: Get the priors. Each prior object is a key in the arguments dictionary." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "centre_prior = model.centre\n", + "normalization_prior = model.normalization\n", + "sigma_prior = model.sigma" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Step 2: Build the arguments dictionary mapping each Prior to a physical value." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "arguments = {\n", + " centre_prior: 50.0,\n", + " normalization_prior: 3.0,\n", + " sigma_prior: 10.0,\n", + "}" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Step 3: Call ``instance_for_arguments`` to create the instance." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "instance = model.instance_for_arguments(arguments)\n", + "\n", + "print(\"Instance from arguments:\")\n", + "print(f\" centre = {instance.centre}\")\n", + "print(f\" normalization = {instance.normalization}\")\n", + "print(f\" sigma = {instance.sigma}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This is exactly what ``instance_from_vector`` does internally. 
Here is the equivalent using a vector:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "instance_from_vec = model.instance_from_vector(vector=[50.0, 3.0, 10.0])" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For a ``Collection``, the same arguments dictionary is shared across all child models. Each child\n", + "picks out its own priors from the shared dictionary:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Collection(gaussian=Gaussian, exponential=Exponential)\n", + "\n", + "arguments = {}\n", + "for name, prior in model.prior_tuples_ordered_by_id:\n", + " arguments[prior] = 1.0 # Set all parameters to 1.0\n", + "\n", + "instance = model.instance_for_arguments(arguments)\n", + "\n", + "print(\"\\nCollection instance (all params = 1.0):\")\n", + "print(f\" gaussian.centre = {instance.gaussian.centre}\")\n", + "print(f\" exponential.rate = {instance.exponential.rate}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Unit Vector Mapping__\n", + "\n", + "Non-linear searches work in \"unit space\" where each parameter ranges from 0 to 1. The ``instance_from_unit_vector``\n", + "method transforms unit values to physical values via each prior's ``value_for`` method." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Model(Gaussian)\n", + "model.centre = af.UniformPrior(lower_limit=0.0, upper_limit=100.0)\n", + "model.normalization = af.UniformPrior(lower_limit=0.0, upper_limit=10.0)\n", + "model.sigma = af.UniformPrior(lower_limit=0.0, upper_limit=50.0)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A unit vector of [0.5, 0.5, 0.5] maps to the midpoint of each prior:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "instance = model.instance_from_unit_vector(unit_vector=[0.5, 0.5, 0.5])\n", + "\n", + "print(\"Instance from unit vector [0.5, 0.5, 0.5]:\")\n", + "print(f\" centre = {instance.centre}\") # 50.0\n", + "print(f\" normalization = {instance.normalization}\") # 5.0\n", + "print(f\" sigma = {instance.sigma}\") # 25.0" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can also convert a unit vector to a physical vector without creating an instance:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "physical_vector = model.vector_from_unit_vector(unit_vector=[0.0, 1.0, 0.5])\n", + "print(f\"\\nPhysical vector: {physical_vector}\") # [0.0, 10.0, 25.0]" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Model Tree Navigation__\n", + "\n", + "Models form a tree. 
**PyAutoFit** provides several methods to navigate this tree.\n", + "\n", + "``object_for_path`` retrieves any object in the model by its tuple-path:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Collection(gaussian=af.Model(Gaussian), exponential=af.Model(Exponential))" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Retrieve a child model:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "gaussian_model = model.object_for_path((\"gaussian\",))\n", + "print(f\"Object at ('gaussian',): {gaussian_model}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Retrieve a specific prior:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "centre_prior = model.object_for_path((\"gaussian\", \"centre\"))\n", + "print(f\"Object at ('gaussian', 'centre'): {centre_prior}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``path_instance_tuples_for_class`` finds all objects of a given type in the tree, returning their paths:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "from autofit.mapper.prior.abstract import Prior\n", + "\n", + "print(\"\\nAll Prior locations:\")\n", + "for path, prior in model.path_instance_tuples_for_class(Prior):\n", + " print(f\" path={path}, prior id={prior.id}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``path_for_prior`` and ``name_for_prior`` go the other direction, finding where a prior lives:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "prior = model.gaussian.centre\n", + "path = model.path_for_prior(prior)\n", + "name = model.name_for_prior(prior)\n", + "print(f\"\\nPath for gaussian.centre prior: {path}\")\n", + "print(f\"Name for 
gaussian.centre prior: {name}")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Creating New Models From Existing Ones__\n", + "\n", + "A common workflow is to take the results of one fit and use them to initialize the next. **PyAutoFit** provides\n", + "several methods for this.\n", + "\n", + "``mapper_from_partial_prior_arguments`` and ``mapper_from_prior_arguments`` create a new model with some or all priors replaced. We first compose a model with explicit priors to demonstrate them:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Model(Gaussian)\n", + "model.centre = af.UniformPrior(lower_limit=0.0, upper_limit=100.0)\n", + "model.normalization = af.UniformPrior(lower_limit=0.0, upper_limit=10.0)\n", + "model.sigma = af.UniformPrior(lower_limit=0.0, upper_limit=50.0)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``mapper_from_partial_prior_arguments`` creates a new model where specified priors are replaced and\n", + "all other priors are kept unchanged:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "new_centre_prior = af.GaussianPrior(mean=50.0, sigma=5.0)\n", + "new_model = model.mapper_from_partial_prior_arguments({model.centre: new_centre_prior})\n", + "\n", + "print(\"Original centre prior:\", model.centre)\n", + "print(\"New centre prior:\", new_model.centre)\n", + "print(\"Normalization unchanged:\", type(new_model.normalization).__name__)\n", + "print(f\"New model prior count: {new_model.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``mapper_from_prior_arguments`` requires ALL priors to be mapped. 
It is the lower-level version:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "new_model = model.mapper_from_prior_arguments(\n", + " {\n", + " model.centre: af.GaussianPrior(mean=50.0, sigma=2.0),\n", + " model.normalization: model.normalization, # Keep unchanged\n", + " model.sigma: model.sigma, # Keep unchanged\n", + " }\n", + ")\n", + "print(f\"\\nFull replacement prior count: {new_model.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``take_attributes`` copies matching attributes from another model or instance. This is the mechanism\n", + "behind \"prior passing\" between search stages:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "source_instance = Gaussian(centre=50.0, normalization=3.0, sigma=10.0)\n", + "\n", + "new_model = af.Model(Gaussian)\n", + "new_model.take_attributes(source_instance)\n", + "\n", + "print(\"\\nAfter take_attributes from instance:\")\n", + "print(f\" centre = {new_model.centre}\") # Now fixed at 50.0\n", + "print(f\" normalization = {new_model.normalization}\") # Now fixed at 3.0\n", + "print(f\" sigma = {new_model.sigma}\") # Now fixed at 10.0\n", + "print(f\" Free parameters: {new_model.total_free_parameters}\") # 0" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "``from_instance`` converts a concrete instance back into a prior model. 
Parameters become fixed values\n", + "unless the instance type is in ``model_classes``, in which case they become free priors:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "instance = Gaussian(centre=50.0, normalization=3.0, sigma=10.0)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "All parameters fixed (instance conversion):" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "fixed_model = af.AbstractPriorModel.from_instance(instance)\n", + "print(f\"\\nfrom_instance (no model_classes): prior_count = {fixed_model.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With model_classes, matching types get free parameters:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "free_model = af.AbstractPriorModel.from_instance(instance, model_classes=(Gaussian,))\n", + "print(f\"from_instance (Gaussian as model_class): prior_count = {free_model.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Model Subsetting__\n", + "\n", + "You can create sub-models that contain only a subset of the original model's parameters using ``with_paths`` and\n", + "``without_paths``." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Collection(gaussian=Gaussian, exponential=Exponential)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Keep only the gaussian component:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "gaussian_only = model.with_paths([(\"gaussian\",)])\n", + "print(f\"\\ngaussian_only prior count: {gaussian_only.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Remove the gaussian component, keeping everything else:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "without_gaussian = model.without_paths([(\"gaussian\",)])\n", + "print(f\"without_gaussian prior count: {without_gaussian.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can also subset at the parameter level:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "centre_only = model.with_paths([(\"gaussian\", \"centre\"), (\"exponential\", \"centre\")])\n", + "print(f\"centres only prior count: {centre_only.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Freezing For Performance__\n", + "\n", + "During a non-linear search, the model structure does not change between iterations but properties\n", + "like ``prior_tuples_ordered_by_id`` are recomputed each time. Freezing a model caches these lookups." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Collection(gaussian=Gaussian, exponential=Exponential)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Freeze the model (caches results of decorated methods):" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model.freeze()" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Repeated calls now return cached results:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "_ = model.prior_tuples_ordered_by_id # Computed\n", + "_ = model.prior_tuples_ordered_by_id # Cached" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A frozen model cannot be modified:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "try:\n", + " model.gaussian.centre = 1.0\n", + "except AssertionError as e:\n", + " print(f\"\\nCannot modify frozen model: {e}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Unfreeze to allow modification again:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model.unfreeze()\n", + "model.gaussian.centre = af.UniformPrior(lower_limit=0.0, upper_limit=100.0)\n", + "print(f\"After unfreeze and modification, prior count: {model.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Serialization Round Trip__\n", + "\n", + "Models can be serialized to dictionaries and JSON for persistence. The ``dict()`` method produces a\n", + "Python dictionary, and ``from_dict()`` reconstructs the model." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "import json\n", + "from os import path\n", + "import os\n", + "\n", + "model = af.Collection(gaussian=Gaussian, exponential=Exponential)\n", + "model.gaussian.centre = af.UniformPrior(lower_limit=0.0, upper_limit=100.0)\n", + "model.gaussian.normalization = 5.0 # Fix this parameter" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Convert to a dictionary:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model_dict = model.dict()\n", + "print(\"\\nModel dict keys:\", list(model_dict.keys()))" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Save to JSON:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "json_path = path.join(\"scripts\", \"cookbooks\", \"jsons\")\n", + "os.makedirs(json_path, exist_ok=True)\n", + "\n", + "json_file = path.join(json_path, \"model_internal.json\")\n", + "with open(json_file, \"w+\") as f:\n", + "    json.dump(model_dict, f, indent=4)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Load from JSON:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "loaded_model = af.AbstractPriorModel.from_json(file=json_file)\n", + "print(f\"Loaded model prior count: {loaded_model.prior_count}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Linked priors are preserved through serialization. If two parameters shared the same prior before saving,\n", + "they will share the same prior after loading." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Log Prior Computation__\n", + "\n", + "During MCMC sampling (e.g. with emcee), the log prior probability for a parameter vector is needed. The model\n", + "provides this directly:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Model(Gaussian)\n", + "model.centre = af.UniformPrior(lower_limit=0.0, upper_limit=100.0)\n", + "model.normalization = af.UniformPrior(lower_limit=0.0, upper_limit=10.0)\n", + "model.sigma = af.UniformPrior(lower_limit=0.0, upper_limit=50.0)\n", + "\n", + "log_priors = model.log_prior_list_from_vector(vector=[50.0, 5.0, 25.0])\n", + "print(f\"\\nLog priors for [50.0, 5.0, 25.0]: {log_priors}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Values outside the prior bounds return -inf:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "log_priors_oob = model.log_prior_list_from_vector(vector=[150.0, 5.0, 25.0])\n", + "print(f\"Log priors for [150.0, 5.0, 25.0]: {log_priors_oob}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Instance From Path And Name Arguments__\n", + "\n", + "Instead of building a ``{Prior: value}`` dictionary or passing a flat vector, you can create instances using\n", + "path-based or name-based arguments. 
This is useful for programmatic workflows:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Collection(gaussian=Gaussian, exponential=Exponential)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Using path arguments (tuple paths):" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "instance = model.instance_from_path_arguments(\n", + " {\n", + " (\"gaussian\", \"centre\"): 50.0,\n", + " (\"gaussian\", \"normalization\"): 3.0,\n", + " (\"gaussian\", \"sigma\"): 10.0,\n", + " (\"exponential\", \"centre\"): 60.0,\n", + " (\"exponential\", \"normalization\"): 2.0,\n", + " (\"exponential\", \"rate\"): 0.5,\n", + " }\n", + ")\n", + "\n", + "print(f\"\\nPath arguments instance:\")\n", + "print(f\" gaussian.centre = {instance.gaussian.centre}\")\n", + "print(f\" exponential.rate = {instance.exponential.rate}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Using name arguments (underscore-joined names):" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "instance = model.instance_from_prior_name_arguments(\n", + " {\n", + " \"gaussian_centre\": 50.0,\n", + " \"gaussian_normalization\": 3.0,\n", + " \"gaussian_sigma\": 10.0,\n", + " \"exponential_centre\": 60.0,\n", + " \"exponential_normalization\": 2.0,\n", + " \"exponential_rate\": 0.5,\n", + " }\n", + ")\n", + "\n", + "print(f\"\\nName arguments instance:\")\n", + "print(f\" gaussian.centre = {instance.gaussian.centre}\")\n", + "print(f\" exponential.rate = {instance.exponential.rate}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Assertions__\n", + "\n", + "Assertions constrain the parameter space without reducing dimensionality. 
They are checked during instance creation\n", + "and cause a ``FitException`` if violated, prompting the non-linear search to resample." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model = af.Model(Gaussian)\n", + "model.centre = af.UniformPrior(lower_limit=0.0, upper_limit=100.0)\n", + "model.normalization = af.UniformPrior(lower_limit=0.0, upper_limit=10.0)\n", + "model.sigma = af.UniformPrior(lower_limit=0.0, upper_limit=100.0)" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Add an assertion that sigma must be less than centre:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "model.add_assertion(model.sigma < model.centre, name=\"sigma_less_than_centre\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Check the parameter ordering so we know which vector index maps to which parameter:" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "print(\"\\nParameter ordering:\")\n", + "for i, path in enumerate(model.paths):\n", + " print(f\" vector[{i}] -> {path}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This works when the assertion holds (centre > sigma):" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "from autofit import exc\n", + "\n", + "try:\n", + " instance = model.instance_from_vector(vector=[50.0, 3.0, 10.0])\n", + " print(f\"\\nValid instance: centre={instance.centre}, sigma={instance.sigma}\")\n", + "except exc.FitException:\n", + " print(\"Unexpected assertion failure - check parameter ordering\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And raises ``FitException`` when it does not (sigma > centre):" + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + 
"try:\n", + " instance = model.instance_from_vector(vector=[10.0, 3.0, 50.0])\n", + "except exc.FitException as e:\n", + " print(f\"Assertion failed (as expected): {e}\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Wrap Up__\n", + "\n", + "This cookbook covered the internal model mapping machinery of **PyAutoFit**:\n", + "\n", + " - How prior identity enables linked parameters.\n", + " - The canonical prior ordering that defines the parameter vector layout.\n", + " - Building and using ``{Prior: value}`` argument dictionaries directly.\n", + " - Navigating the model tree with paths and type queries.\n", + " - Creating new models from existing ones for multi-stage pipelines.\n", + " - Subsetting, freezing, and serializing models.\n", + "\n", + "These tools give you fine-grained control over model composition and are the foundation that the high-level API\n", + "is built upon. They are particularly useful when building custom search integrations, analysis pipelines, or\n", + "debugging model-fitting workflows." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [], + "outputs": [], + "execution_count": null + } + ], + "metadata": { + "anaconda-cloud": {}, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.1" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} \ No newline at end of file diff --git a/notebooks/cookbooks/multiple_datasets.ipynb b/notebooks/cookbooks/multiple_datasets.ipynb index 27002e01..d4729f43 100644 --- a/notebooks/cookbooks/multiple_datasets.ipynb +++ b/notebooks/cookbooks/multiple_datasets.ipynb @@ -30,7 +30,7 @@ "\n", "__Contents__\n", "\n", - " - Model-Fit: Setup a model-fit to 3 datasets to illustrate multi-dataset fitting.\n", + " - Model Fit: Setup a model-fit to 3 datasets to illustrate multi-dataset fitting.\n", " - Analysis List: Create a list of `Analysis` objects, one for each dataset, which are fitted simultaneously.\n", " - Analysis Factor: Wrap each `Analysis` object in an `AnalysisFactor`, which pairs it with the model and prepares it for model fitting.\n", " - Factor Graph: Combine all `AnalysisFactor` objects into a `FactorGraphModel`, which represents a global model fit to multiple datasets.\n", @@ -39,12 +39,13 @@ " stay fixed.\n", " - Relational Model: Fit models where certain parameters vary across the dataset as a user\n", " defined relation (e.g. 
`y = mx + c`).\n", - " - Different Analysis Classes: Fit multiple datasets where each dataset is fitted by a different `Analysis` class,\n", + " - Different Analysis Objects: Fit multiple datasets where each dataset is fitted by a different `Analysis` object,\n", " meaning that datasets with different formats can be fitted simultaneously.\n", + " - Hierarchical / Graphical Models: Use hierarchical / graphical models to fit multiple datasets simultaneously,\n", + " which fit for global trends in the model across the datasets.\n", " - Interpolation: Fit multiple datasets with a model one-by-one and interpolation over a smoothly varying parameter\n", " (e.g. time) to infer the model between datasets.\n", - " - Hierarchical / Graphical Models: Use hierarchical / graphical models to fit multiple datasets simultaneously,\n", - " which fit for global trends in the model across the datasets." + " - Wrap Up: A summary of multi-dataset fitting in PyAutoFit." ] }, { @@ -97,7 +98,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_size = 3\n", + "dataset_size = 3" ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first."
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1_identical_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "data_list = []\n", "noise_map_list = []\n", diff --git a/notebooks/cookbooks/result.ipynb b/notebooks/cookbooks/result.ipynb index 7991a16a..0aa9b01b 100644 --- a/notebooks/cookbooks/result.ipynb +++ b/notebooks/cookbooks/result.ipynb @@ -23,6 +23,7 @@ "\n", "An overview of the `Result` object's functionality is given in the following sections:\n", "\n", + " - Simple Fit: Perform a simple model-fit to generate a `Result` object.\n", " - Info: Print the `info` attribute of the `Result` object to display a summary of the model-fit.\n", " - Max Log Likelihood Instance: Getting the maximum likelihood model instance.\n", " - Samples: Getting the samples of the non-linear search from a result.\n", @@ -37,9 +38,10 @@ "\n", "The cookbook next gives examples of how to load all the following results from the database:\n", "\n", - " - Loading Samples: The samples of the non-linear search (e.g. all parameter values, log likelihoods, etc.).\n", + " - Samples: The samples of the non-linear search loaded via the aggregator.\n", " - Loading Model: The model fitted by the non-linear search.\n", " - Loading Search: The search used to perform the model-fit.\n", + " - Loading Samples: The samples of the non-linear search (e.g. all parameter values, log likelihoods, etc.).\n", " - Loading Samples Info: Additional information on the samples.\n", " - Loading Samples Summary: A summary of the samples of the non-linear search (e.g. 
the maximum log likelihood model).\n", " - Loading Info: The `info` dictionary passed to the search.\n", @@ -54,12 +56,13 @@ " - Querying Searches: Query based on the name of the search.\n", " - Querying Models: Query based on the model that is fitted.\n", " - Querying Results: Query based on the results of the model-fit.\n", - " - Querying Logic: Use logic to combine queries to load specific results (e.g. AND, OR, etc.).\n", + " - Querying with Logic: Use logic to combine queries to load specific results (e.g. AND, OR, etc.).\n", "\n", "The final section describes how to use results built in an sqlite database file:\n", "\n", " - Database: Building a database file from the output folder.\n", " - Unique Identifiers: The unique identifier of each model-fit.\n", + " - Building From Output Folder: Build the database from the output folder on hard-disk.\n", " - Writing Directly To Database: Writing results directly to the database." ] }, @@ -97,7 +100,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/cookbooks/samples.ipynb b/notebooks/cookbooks/samples.ipynb index e7e39a05..cab07e17 100644 --- a/notebooks/cookbooks/samples.ipynb +++ b/notebooks/cookbooks/samples.ipynb @@ -36,8 +36,8 @@ "The following sections outline how to use advanced features of the results, which you may skip on a first read:\n", "\n", " - Derived Quantities: Computing quantities and errors for quantities and parameters not included directly in the model.\n", - " - Result Extension: Extend the `Result` object with new attributes and methods (e.g. `max_log_likelihood_model_data`).\n", - " - Samples Filtering: Filter the `Samples` object to only contain samples fulfilling certain criteria." + " - Derived Errors Manual (Advanced): Manually computing errors on derived quantities from the PDF of samples.\n", + " - Samples Filtering (Advanced): Filter the `Samples` object to only contain samples fulfilling certain criteria." ] }, { @@ -80,7 +80,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. 
This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -390,7 +416,7 @@ "The Probability Density Functions (PDF's) of the results can be plotted using the non-linear search in-built \n", "visualization tools.\n", "\n", - "This fit used `Emcee` therefore we use the `MCMCPlotter` for visualization, which wraps the Python library `corner.py`.\n", + "This fit used `Emcee`, so we use `corner.py` for visualization via the `aplt.corner_cornerpy` function.\n", "\n", "The `autofit_workspace/*/plots` folder illustrates other packages that can be used to make these plots using\n", "the standard output results formats (e.g. `GetDist.py`)."
@@ -400,8 +426,7 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.MCMCPlotter(samples=result.samples)\n", - "plotter.corner_cornerpy()" + "aplt.corner_cornerpy(samples=result.samples)" ], "outputs": [], "execution_count": null diff --git a/notebooks/cookbooks/search.ipynb b/notebooks/cookbooks/search.ipynb index 2aa8b589..da61d0d2 100644 --- a/notebooks/cookbooks/search.ipynb +++ b/notebooks/cookbooks/search.ipynb @@ -68,7 +68,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -256,7 +282,7 @@ "This uses that search's in-built visualization libraries, which are fully described in the `plot` package of the\n", "workspace.\n", "\n", - "For example, `Emcee` has a corresponding `MCMCPlotter`, which is used as follows.\n", + "For example, `Emcee` results can be plotted using the `aplt.corner_cornerpy` function as follows.\n", "\n", "Checkout the `plot` package for a complete description of the plots that can be made for a given search." 
] @@ -267,9 +293,8 @@ "source": [ "samples = result.samples\n", "\n", - "plotter = aplt.MCMCPlotter(samples=samples)\n", - "\n", - "plotter.corner_cornerpy(\n", + "aplt.corner_cornerpy(\n", + " samples=samples,\n", " bins=20,\n", " range=None,\n", " color=\"k\",\n", diff --git a/notebooks/features/graphical_models.ipynb b/notebooks/features/graphical_models.ipynb index d5bd9784..8a7fd846 100644 --- a/notebooks/features/graphical_models.ipynb +++ b/notebooks/features/graphical_models.ipynb @@ -26,6 +26,20 @@ "\n", "The **HowToFit** tutorials contain a chapter dedicated to composing and fitting graphical models.\n", "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this script.\n", + "- **Dataset**: Load 3 noisy 1D Gaussian datasets for simultaneous fitting.\n", + "- **Analysis**: Create Analysis objects for each dataset with a log likelihood function.\n", + "- **Model**: Compose a graphical model with a shared prior across multiple model components.\n", + "- **Analysis Factors**: Pair each Model with its corresponding Analysis class at factor graph nodes.\n", + "- **Factor Graph**: Combine the Analysis Factors into a factor graph representing the graphical model.\n", + "- **Search**: Create a non-linear search and fit the factor graph.\n", + "- **Hierarchical Models**: Discuss how shared parameters can be drawn from a common parent distribution.\n", + "- **Expectation Propagation**: Introduce the EP framework for scaling graphical models to high dimensionality.\n", + "\n", "__Example Source Code (`af.ex`)__\n", "\n", "The **PyAutoFit** source code has the following example objects (accessed via `af.ex`) used in this tutorial:\n", @@ -72,7 +86,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 3\n", + "total_datasets = 3" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + 
"source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1__low_snr\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "data_list = []\n", diff --git a/notebooks/features/interpolate.ipynb b/notebooks/features/interpolate.ipynb index fd478351..376d3e5f 100644 --- a/notebooks/features/interpolate.ipynb +++ b/notebooks/features/interpolate.ipynb @@ -31,7 +31,18 @@ "\n", " - `Gaussian`: a model component representing a 1D Gaussian profile.\n", "\n", - "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace." + "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this script.\n", + "- **Dataset**: Load 3 noisy 1D Gaussian datasets taken at different times.\n", + "- **Fit**: Fit each dataset individually, storing the maximum likelihood instances for interpolation.\n", + "- **Interpolation**: Use a LinearInterpolator to interpolate model parameters as a function of time.\n", + "- **Serialization**: Serialize the interpolator to a JSON file for reuse.\n", + "- **Database**: Load results from hard disk using the Aggregator for interpolation." 
] }, { @@ -72,7 +83,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 3\n", + "total_datasets = 3" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1_time\", \"time_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "data_list = []\n", "noise_map_list = []\n", diff --git a/notebooks/features/model_comparison.ipynb b/notebooks/features/model_comparison.ipynb index b08199d4..628591fe 100644 --- a/notebooks/features/model_comparison.ipynb +++ b/notebooks/features/model_comparison.ipynb @@ -42,7 +42,19 @@ "\n", " - `Gaussian`: a model component representing a 1D Gaussian profile.\n", "\n", - "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace." 
+ "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Metrics**: Describe the log likelihood and Bayesian evidence metrics used for model comparison.\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this script.\n", + "- **Data**: Load the 1D Gaussian data consisting of two Gaussians.\n", + "- **Model x1 Gaussian**: Create and fit a model with a single Gaussian.\n", + "- **Model x2 Gaussian**: Create and fit a model with two Gaussians.\n", + "- **Model x3 Gaussian**: Create and fit a model with three Gaussians.\n", + "- **Wrap Up**: Summarize the model comparison results using log likelihood and Bayesian evidence." ] }, { @@ -81,7 +93,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x2\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x2\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/features/search_chaining.ipynb b/notebooks/features/search_chaining.ipynb index 15c5f4c8..5a110a69 100644 --- a/notebooks/features/search_chaining.ipynb +++ b/notebooks/features/search_chaining.ipynb @@ -40,6 +40,30 @@ "By initially fitting parameter spaces of reduced complexity we can achieve a more efficient and reliable model-fitting\n", "procedure.\n", "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this script.\n", + "- **Data**: Load the 1D data containing two split Gaussians.\n", + "- **Analysis**: Create the Analysis class for fitting the model to data.\n", + "- **Model**: Define the model for the left Gaussian.\n", + "- **Search 1**: Perform the first search fitting the left Gaussian (N=3).\n", + "- **Result 1**: Examine the results of the first search.\n", + "- **Search 2**: Perform the second search fitting the right Gaussian (N=3).\n", + "- **Model**: Define the model for the right Gaussian using the result of Search 1.\n", + "- **Result 2**: Examine the results of the second search.\n", + "- **Search 3**: Perform the final search fitting both Gaussians (N=6) using prior results.\n", + "- **Prior Passing**: Explain how priors are passed between chained searches.\n", + "- **EXAMPLE**: A concrete example of prior passing with numerical values.\n", + "- **Prerequisites**: Prerequisites for prior passing concepts.\n", + "- **Overview**: Overview 
of search chaining and prior passing.\n", + "- **Model-Fit**: The model-fit used in prior passing examples.\n", + "- **Instance & Model**: Explain how results are passed as instances and models.\n", + "- **Component Specification**: Specify which model component priors to pass.\n", + "- **Take Attributes**: Use the take_attributes method to pass priors between different model components.\n", + "- **As Model**: Use the as_model method to create a model from a result instance.\n", + "\n", "__Example Source Code (`af.ex`)__\n", "\n", "The **PyAutoFit** source code has the following example objects (accessed via `af.ex`) used in this tutorial:\n", @@ -88,7 +112,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x2_split\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x2_split\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/features/search_grid_search.ipynb b/notebooks/features/search_grid_search.ipynb index c6c667e7..f94075c5 100644 --- a/notebooks/features/search_grid_search.ipynb +++ b/notebooks/features/search_grid_search.ipynb @@ -43,7 +43,19 @@ "\n", " - `Gaussian`: a model component representing a 1D Gaussian profile.\n", "\n", - "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace." + "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this script.\n", + "- **Data**: Load 1D Gaussian data with a small feature at pixel 70.\n", + "- **Model**: Create a model with two Gaussians (main signal + feature).\n", + "- **Analysis**: Create the Analysis class for fitting the model to data.\n", + "- **Search**: Configure a non-linear search for a single fit.\n", + "- **Result**: Plot and visualize the results from the single fit.\n", + "- **Search Grid Search**: Set up and perform a grid search over a parameter subset." 
] }, { @@ -72,7 +84,7 @@ "source": [ "__Data__\n", "\n", - "Load data of a 1D Gaussian from a .json file in the directory \n", + "Load data of a 1D Gaussian from a .json file in the directory\n", "`autofit_workspace/dataset/gaussian_x1_with_feature`.\n", "\n", "This 1D data includes a small feature to the right of the central `Gaussian`. This feature is a second `Gaussian` \n", @@ -83,7 +95,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1_with_feature\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1_with_feature\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/features/sensitivity_mapping.ipynb b/notebooks/features/sensitivity_mapping.ipynb index 5afda31a..e379f27a 100644 --- a/notebooks/features/sensitivity_mapping.ipynb +++ b/notebooks/features/sensitivity_mapping.ipynb @@ -32,7 +32,25 @@ "\n", " - `Gaussian`: a model component representing a 1D Gaussian profile.\n", "\n", - "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace." 
+ "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this script.\n", + "- **Data**: Load 1D Gaussian data with a small feature at pixel 70.\n", + "- **Analysis**: Create the Analysis class for fitting the model to data.\n", + "- **Model Comparison**: Perform Bayesian model comparison on the original data.\n", + "- **Sensitivity Mapping**: Introduce the sensitivity mapping procedure.\n", + "- **Base Model**: Define the simpler model used to simulate datasets.\n", + "- **Perturb Model**: Define the model component used to perturb the base model during sensitivity mapping.\n", + "- **Mapping Grid**: Specify the grid of model parameters for sensitivity mapping.\n", + "- **Simulation Instance**: Provide the instance used for dataset simulation.\n", + "- **Simulate Function Class**: Define the Dataset and Analysis classes for sensitivity mapping.\n", + "- **Base Fit**: Define how the base model is fitted to simulated datasets.\n", + "- **Perturb Fit**: Define how the perturbed model is fitted to simulated datasets.\n", + "- **Results**: Interpret the sensitivity mapping results." ] }, { @@ -61,10 +79,10 @@ "source": [ "__Data__\n", "\n", - "Load data of a 1D Gaussian from a .json file in the directory \n", + "Load data of a 1D Gaussian from a .json file in the directory\n", "`autofit_workspace/dataset/gaussian_x1_with_feature`.\n", "\n", - "This 1D data includes a small feature to the right of the central `Gaussian`. This feature is a second `Gaussian` \n", + "This 1D data includes a small feature to the right of the central `Gaussian`. This feature is a second `Gaussian`\n", "centred on pixel 70. 
" ] }, @@ -72,7 +90,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1_with_feature\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1_with_feature\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/howtofit/chapter_1_introduction/tutorial_2_fitting_data.ipynb b/notebooks/howtofit/chapter_1_introduction/tutorial_2_fitting_data.ipynb index 71a0ecaa..674accfa 100644 --- a/notebooks/howtofit/chapter_1_introduction/tutorial_2_fitting_data.ipynb +++ b/notebooks/howtofit/chapter_1_introduction/tutorial_2_fitting_data.ipynb @@ -49,7 +49,11 @@ "- **Chi Squared**: Compute and visualize the chi-squared map, a measure of the overall goodness-of-fit.\n", "- **Noise Normalization**: Compute the noise normalization term which describes the noise properties of the data.\n", "- **Likelihood**: Compute the log likelihood, a key measure of the goodness-of-fit of the model to the data.\n", + "- **Recap**: Summarize the standard metrics for quantifying model fit quality.\n", "- **Fitting Models**: Fit the `Gaussian` model to the 1D data and compute the log 
likelihood, by guessing parameters.\n", + "- **Guess 1**: A first parameter guess with an explanation of the resulting log likelihood.\n", + "- **Guess 2**: An improved parameter guess with a better log likelihood.\n", + "- **Guess 3**: The optimal parameter guess providing the best fit to the data.\n", "- **Extensibility**: Use the `Collection` object for fitting models with multiple components.\n", "- **Wrap Up**: Summarize the key concepts of this tutorial." ] @@ -98,7 +102,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "\n", diff --git a/notebooks/howtofit/chapter_1_introduction/tutorial_3_non_linear_search.ipynb b/notebooks/howtofit/chapter_1_introduction/tutorial_3_non_linear_search.ipynb index f181f347..3476cc8c 100644 --- a/notebooks/howtofit/chapter_1_introduction/tutorial_3_non_linear_search.ipynb +++ b/notebooks/howtofit/chapter_1_introduction/tutorial_3_non_linear_search.ipynb @@ -51,9 +51,7 @@ "- **Maximum Likelihood Estimation (MLE)**: Perform a model-fit using the MLE search.\n", "- **Markov Chain Monte Carlo (MCMC)**: Perform a model-fit using the MCMC search.\n", "- **Nested Sampling**: Perform a model-fit using the nested sampling search.\n", - "- **Result**: The result of the model-fit, including the maximum likelihood model.\n", - "- **Samples**: The samples of the non-linear search, used to compute parameter estimates and uncertainties.\n", - "- **Customizing Searches**: How to customize the settings of the non-linear search.\n", + "- **What is The Best Search To Use?**: Compare the strengths and weaknesses of each search method.\n", "- **Wrap Up**: A summary of the concepts introduced in this tutorial.\n", "\n", "__Parameter Space__\n", @@ -185,7 +183,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running 
the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/howtofit/chapter_1_introduction/tutorial_4_why_modeling_is_hard.ipynb b/notebooks/howtofit/chapter_1_introduction/tutorial_4_why_modeling_is_hard.ipynb index f8c4553e..e506e863 100644 --- a/notebooks/howtofit/chapter_1_introduction/tutorial_4_why_modeling_is_hard.ipynb +++ b/notebooks/howtofit/chapter_1_introduction/tutorial_4_why_modeling_is_hard.ipynb @@ -31,15 +31,18 @@ "- **Data**: Load and plot the 1D Gaussian dataset we'll fit, which is more complex than the previous tutorial.\n", "- **Model**: The `Gaussian` model component that we will fit to the data.\n", "- **Analysis**: The log likelihood function used to fit the model to the data.\n", + "- **Alternative Syntax**: An alternative loop-based approach for creating a summed profile from multiple model components.\n", "- **Collection**: The `Collection` model used to compose the model-fit.\n", + "- **Search**: Set up the nested sampling search (Dynesty) for the model-fit.\n", "- **Model Fit**: Perform the model-fit and examine the results.\n", "- **Result**: Determine if the model-fit was successful and what can be done to ensure a good model-fit.\n", "- **Why Modeling is Hard**: Introduce the concept of randomness and local maxima and why they make model-fitting challenging.\n", "- **Prior Tuning**: Adjust the priors of the model to help the non-linear search find the global maxima 
solution.\n", "- **Reducing Complexity**: Simplify the model to reduce the dimensionality of the parameter space.\n", "- **Search More Thoroughly**: Adjust the non-linear search settings to search parameter space more thoroughly.\n", + "- **Summary**: Summarize the three strategies for ensuring successful model-fitting.\n", "- **Run Times**: Discuss how the likelihood function and complexity of a model impacts the run-time of a model-fit.\n", - "- **Model Mismatches**: Introduce the concept of model mismatches and how it makes inferring the correct model challenging.\n", + "- **Model Mismatch**: Introduce the concept of model mismatches and how it makes inferring the correct model challenging.\n", "- **Astronomy Example**: How the concepts of this tutorial are applied to real astronomical problems.\n", "- **Wrap Up**: A summary of the key takeaways of this tutorial." ] @@ -79,7 +82,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x5\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x5\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.ipynb b/notebooks/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.ipynb index 97a0011b..98e70c87 100644 --- a/notebooks/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.ipynb +++ b/notebooks/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.ipynb @@ -11,7 +11,32 @@ "\n", "We used this object at various points in the chapter. The bulk of material covered here is described in the example\n", "script `autofit_workspace/overview/simple/result.py`. Nevertheless, it is a good idea to refresh ourselves about how\n", - "results in **PyAutoFit** work before covering more advanced material." 
+ "results in **PyAutoFit** work before covering more advanced material.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **Data**: Load the dataset from the autofit_workspace/dataset folder.\n", + "- **Reused Functions**: Reuse the `plot_profile_1d` and `Analysis` classes from the previous tutorial.\n", + "- **Model Fit**: Run a non-linear search to generate a `Result` object.\n", + "- **Result**: Examine the `Result` object and its info attribute.\n", + "- **Samples**: Introduce the `Samples` object containing the non-linear search samples.\n", + "- **Parameters**: Access parameter values from the samples.\n", + "- **Figures of Merit**: Examine log likelihood, log prior, and log posterior values.\n", + "- **Instances**: Return results as model instances from samples.\n", + "- **Vectors**: Return results as 1D parameter vectors.\n", + "- **Labels**: Access the paths, names, and labels for model parameters.\n", + "- **Posterior / PDF**: Access median PDF estimates for the model parameters.\n", + "- **Plot**: Visualize model fit results using instances.\n", + "- **Errors**: Compute parameter error estimates at specified sigma confidence limits.\n", + "- **PDF**: Plot Probability Density Functions using corner.py.\n", + "- **Other Results**: Access maximum log posterior and other sample statistics.\n", + "- **Sample Instance**: Create instances from individual samples in the sample list.\n", + "- **Bayesian Evidence**: Access the log evidence for nested sampling searches.\n", + "- **Derived Errors (PDF from samples)**: Compute errors on derived quantities from sample PDFs.\n", + "- **Samples Filtering**: Filter samples by parameter paths for specific parameter analysis.\n", + "- **Latex**: Generate LaTeX table code for modeling results." 
] }, { @@ -48,7 +73,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1__exponential_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1__exponential_x1\")" ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -643,16 +694,15 @@ "source": [ "__PDF__\n", "\n", - "The Probability Density Functions (PDF's) of the results can be plotted using the Emcee's visualization \n", - "tool `corner.py`, which is wrapped via the `MCMCPlotter` object." + "The Probability Density Functions (PDFs) of the results can be plotted using Emcee's visualization\n", + "tool `corner.py`, which is wrapped via the `aplt.corner_cornerpy` function."
] }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.MCMCPlotter(samples=result.samples)\n", - "plotter.corner_cornerpy()" + "aplt.corner_cornerpy(samples=result.samples)" ], "outputs": [], "execution_count": null diff --git a/notebooks/howtofit/chapter_1_introduction/tutorial_8_astronomy_example.ipynb b/notebooks/howtofit/chapter_1_introduction/tutorial_8_astronomy_example.ipynb index cd827b65..9e0f90d5 100644 --- a/notebooks/howtofit/chapter_1_introduction/tutorial_8_astronomy_example.ipynb +++ b/notebooks/howtofit/chapter_1_introduction/tutorial_8_astronomy_example.ipynb @@ -61,7 +61,9 @@ "- **Model Fit**: Fit the model to the data and display the results.\n", "- **Result**: Interpret the model fit to determine whether the galaxy is an early or late-type galaxy.\n", "- **Bulgey**: Repeat the fit using a bulgey light profile to determine the galaxy's type.\n", - "- **Extensions**: Illustrate examples of how this problem can be extended and the challenges that arise." + "- **Model Mismatch**: Analyze the challenges from model mismatches in galaxy classification.\n", + "- **Extensions**: Illustrate examples of how this problem can be extended and the challenges that arise.\n", + "- **Chapter Wrap Up**: Summarize the completion of Chapter 1 and its applications to real astronomy." ] }, { @@ -142,7 +144,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"howtofit\", \"chapter_1\", \"astro\", \"simple\")\n", + "dataset_path = path.join(\"dataset\", \"howtofit\", \"chapter_1\", \"astro\", \"simple\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "data = np.load(file=path.join(dataset_path, \"data.npy\"))\n", "plot_array(array=data, title=\"Image of Galaxy\")\n", diff --git a/notebooks/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.ipynb b/notebooks/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.ipynb index 5f47a336..c748f55c 100644 --- a/notebooks/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.ipynb +++ b/notebooks/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.ipynb @@ -39,7 +39,21 @@ "In healthcare, there may also be many datasets available, with different formats that require slightly different models\n", "to fit them. The high levels of customization possible in model composition and defining the analysis class mean\n", "that fitting diverse datasets with hierarchical models is feasible. This also means that a common problem in healthcare\n", - "data, missing data, can be treated in a statistically robust manner." 
+ "data, missing data, can be treated in a statistically robust manner.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **Real World Example**: A healthcare example illustrating the value of hierarchical models.\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this tutorial.\n", + "- **Model**: Define a simple Gaussian model in a Collection.\n", + "- **Data**: Load and set up 5 noisy 1D Gaussian datasets.\n", + "- **Model Fits (one-by-one)**: Fit each dataset individually using a separate non-linear search.\n", + "- **Results**: Analyze the fit results and error estimates for each dataset.\n", + "- **Estimating the Centre**: Combine centre estimates using a weighted average approach.\n", + "- **Posterior Multiplication**: Discuss KDE-based posterior multiplication as an alternative method.\n", + "- **Wrap Up**: Summary and transition to the graphical model approach in the next tutorial." ] }, { @@ -113,7 +127,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 5\n", + "total_datasets = 5" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1__low_snr\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "\n", @@ -127,9 +166,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "For each 1D Gaussian dataset we now set up the correct path, load it, and plot it. \n", + "For each 1D Gaussian dataset we now set up the correct path, load it, and plot it.\n", "\n", - "Notice how much lower the signal-to-noise is than you are used too, you probably find it difficult to estimate \n", + "Notice how much lower the signal-to-noise is than you are used to; you will probably find it difficult to estimate\n", "the centre of some of the Gaussians by eye!" ] }, @@ -286,8 +325,7 @@ "source": [ "\n", "for samples in samples_list:\n", - " plotter = aplt.NestPlotter(samples=samples)\n", - " plotter.corner_cornerpy()" + " aplt.corner_cornerpy(samples=samples)" ], "outputs": [], "execution_count": null diff --git a/notebooks/howtofit/chapter_3_graphical_models/tutorial_2_graphical_model.ipynb b/notebooks/howtofit/chapter_3_graphical_models/tutorial_2_graphical_model.ipynb index c551ff69..7d157c86 100644 --- a/notebooks/howtofit/chapter_3_graphical_models/tutorial_2_graphical_model.ipynb +++ b/notebooks/howtofit/chapter_3_graphical_models/tutorial_2_graphical_model.ipynb @@ -23,7 +23,21 @@ "Gaussians each with their own model parameters but a single shared centre:\n", "\n", " - Each Gaussian has 2 free parameters from the components that are not shared (`normalization`, `sigma`).\n", - " - There is one additional free parameter, which is the `centre` shared by all 5 Gaussians."
+ " - There is one additional free parameter, which is the `centre` shared by all 5 Gaussians.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this tutorial.\n", + "- **Dataset**: Load the 5 noisy 1D Gaussian datasets for simultaneous fitting.\n", + "- **Analysis**: Create Analysis objects for each dataset.\n", + "- **Model**: Set up the graphical model with a shared prior for the centre parameter.\n", + "- **Analysis Factors**: Pair each model with its corresponding Analysis class at factor graph nodes.\n", + "- **Factor Graph**: Combine the Analysis Factors into a factor graph representing the graphical model.\n", + "- **Search**: Configure and run the non-linear search to fit the factor graph.\n", + "- **Result**: Inspect and compare the graphical model results to the individual fits.\n", + "- **Wrap Up**: Summary and discussion of the benefits of graphical models." ] }, { @@ -75,7 +89,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 5\n", + "total_datasets = 5" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1__low_snr\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "data_list = []\n", diff --git a/notebooks/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.ipynb b/notebooks/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.ipynb index 6f83dc24..0d406fb4 100644 --- a/notebooks/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.ipynb +++ b/notebooks/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.ipynb @@ -19,7 +19,28 @@ "__The Model__\n", "\n", "In this tutorial, each dataset now contains two Gaussians, and they all have the same shared centres, located at\n", - "pixels 40 and 60." + "pixels 40 and 60.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **The Model**: Describe the two-Gaussian model fitted in this tutorial.\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this tutorial.\n", + "- **Dataset**: Load datasets where each contains two Gaussians with shared centres.\n", + "- **Analysis**: Create Analysis objects for each dataset.\n", + "- **Model (one-by-one)**: Set up individual models with two Gaussians for one-by-one fitting.\n", + "- **Model Fits (one-by-one)**: Fit each dataset individually using separate non-linear searches.\n", + "- **Centre Estimates (Weighted Average)**: Compute centre estimates and errors using a weighted average.\n", + "- **Discussion**: Analyze the limitations of the one-by-one fitting approach.\n", + "- **Model (Graphical)**: Set up the graphical model with shared centre priors across datasets.\n", + "- **Analysis Factors**: Create Analysis Factors pairing models with Analysis 
objects.\n", + "- **Factor Graph**: Combine Analysis Factors into a factor graph.\n", + "- **Search**: Configure and run the non-linear search for the graphical model.\n", + "- **Result**: Inspect the graphical model results and compare to individual fits.\n", + "- **Discussion**: Discuss the benefits of graphical models over one-by-one fitting.\n", + "- **Posterior Multiplication**: Discuss KDE-based posterior multiplication as an alternative method.\n", + "- **Wrap Up**: Summary comparing the different methods and transition to hierarchical models." ] }, { @@ -71,7 +92,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 5\n", + "total_datasets = 5" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x2__offset_centres\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "data_list = []\n", @@ -307,8 +353,7 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.NestPlotter(samples=result_list[0].samples)\n", - "plotter.corner_cornerpy()" + "aplt.corner_cornerpy(samples=result_list[0].samples)" ], "outputs": [], "execution_count": null diff --git a/notebooks/howtofit/chapter_3_graphical_models/tutorial_4_hierachical_models.ipynb b/notebooks/howtofit/chapter_3_graphical_models/tutorial_4_hierachical_models.ipynb index ab4811f5..6cc9bfef 100644 --- a/notebooks/howtofit/chapter_3_graphical_models/tutorial_4_hierachical_models.ipynb +++ b/notebooks/howtofit/chapter_3_graphical_models/tutorial_4_hierachical_models.ipynb @@ -20,7 +20,24 @@ "This is called a hierarchical model, which we fit in this tutorial. The `centre` of each 1D Gaussian is now no\n", "longer the same in each dataset and they are instead drawn from a shared parent Gaussian distribution\n", "(with `mean=50.0` and `sigma=10.0`). The hierarchical model will recover the `mean` and `sigma` values of the parent\n", - "distribution'." 
+ "distribution.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this tutorial.\n", + "- **Dataset**: Load the hierarchical Gaussian datasets with variable centres.\n", + "- **Analysis**: Create Analysis objects for each dataset.\n", + "- **Model Individual Factors**: Set up individual Gaussian models with independent priors.\n", + "- **Analysis Factors**: Compose Analysis Factors pairing models with Analysis objects.\n", + "- **Model**: Create a HierarchicalFactor with a parent Gaussian distribution for the centres.\n", + "- **Factor Graph**: Compose the factor graph including the hierarchical factor.\n", + "- **Search**: Configure and run the non-linear search for the hierarchical model.\n", + "- **Result**: Inspect the inferred hierarchical distribution parameters.\n", + "- **Comparison to One-by-One Fits**: Compare the hierarchical model results to simpler individual fits.\n", + "- **Benefits of Graphical Model**: Discuss how datasets inform one another through the hierarchical model.\n", + "- **Wrap Up**: Summary and introduction to expectation propagation." ] }, { @@ -72,7 +89,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 5\n", + "total_datasets = 5" ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first."
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1__hierarchical\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "data_list = []\n", @@ -102,7 +144,7 @@ "metadata": {}, "source": [ "By plotting the Gaussians we can just about make out that their centres are not all at pixel 50, and are spread out\n", - "around it (albeit its difficult to be sure, due to the low signal-to-noise of the data). " + "around it (albeit it's difficult to be sure, due to the low signal-to-noise of the data)." ] }, { diff --git a/notebooks/howtofit/chapter_3_graphical_models/tutorial_5_expectation_propagation.ipynb b/notebooks/howtofit/chapter_3_graphical_models/tutorial_5_expectation_propagation.ipynb index ea68fbfc..bdb58095 100644 --- a/notebooks/howtofit/chapter_3_graphical_models/tutorial_5_expectation_propagation.ipynb +++ b/notebooks/howtofit/chapter_3_graphical_models/tutorial_5_expectation_propagation.ipynb @@ -21,7 +21,23 @@ "fit every dataset simultaneously.\n", "\n", "This tutorial fits a global model with a shared parameter and does not use a hierarchical model. The optional tutorial\n", - "`tutorial_optional_hierarchical_ep` shows an example fit of a hierarchical model with EP."
+ "`tutorial_optional_hierarchical_ep` shows an example fit of a hierarchical model with EP.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this tutorial.\n", + "- **Dataset**: Load the noisy 1D Gaussian datasets with a shared centre.\n", + "- **Analysis**: Create Analysis objects for each dataset.\n", + "- **Model**: Set up the model with a shared centre prior across all datasets.\n", + "- **Analysis Factors**: Create Analysis Factors with individual searches for each dataset.\n", + "- **Factor Graph**: Compose the factor graph for the EP framework.\n", + "- **Expectation Propagation**: Explain the EP message passing algorithm.\n", + "- **Cyclic Fitting**: Describe the iterative EP convergence process.\n", + "- **Result**: Access the result of the EP fit.\n", + "- **Output**: Describe the output directory structure and files generated by the EP fit.\n", + "- **Results**: Use the MeanField object to infer parameter estimates and errors." ] }, { @@ -74,7 +90,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 3\n", + "total_datasets = 3" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1__low_snr\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "data_list = []\n", diff --git a/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_ep.ipynb b/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_ep.ipynb index 5cb27d0f..be5c21e5 100644 --- a/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_ep.ipynb +++ b/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_ep.ipynb @@ -9,7 +9,20 @@ "\n", "This optional tutorial gives an example of fitting a hierarchical model using EP.\n", "\n", - "The API is a straightforward combination of tutorials 3 and 4." + "The API is a straightforward combination of tutorials 3 and 4.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this tutorial.\n", + "- **Dataset**: Load the hierarchical Gaussian datasets with variable centres.\n", + "- **Analysis**: Create Analysis objects for each dataset.\n", + "- **Model Individual Factors**: Set up individual Gaussian models with independent priors.\n", + "- **Analysis Factors**: Compose Analysis Factors with individual searches for each dataset.\n", + "- **Model**: Create a HierarchicalFactor with a parent Gaussian distribution.\n", + "- **Factor Graph**: Compose the factor graph including the hierarchical factor.\n", + "- **Model Fit**: Run the EP fit of the hierarchical model." 
] }, { @@ -61,7 +74,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 5\n", + "total_datasets = 5" ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1__hierarchical\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "data_list = []\n", @@ -91,7 +129,7 @@ "metadata": {}, "source": [ "By plotting the Gaussians we can just about make out that their centres are not all at pix 50, and are spreasd out\n", - "around it (albeit its difficult to be sure, due to the low signal-to-noise of the data). " + "around it (albeit it's difficult to be sure, due to the low signal-to-noise of the data)." ] }, { diff --git a/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_individual.ipynb b/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_individual.ipynb index 702c6d28..e50ce460 100644 --- a/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_individual.ipynb +++ b/notebooks/howtofit/chapter_3_graphical_models/tutorial_optional_hierarchical_individual.ipynb @@ -13,7 +13,21 @@ "\n", "This script illustrates how the hierarchical parameters can be estimated using a simpler approach, which fits\n", "each dataset one-by-one and estimates the hierarchical parameters afterwards by fitting the inferred `centres`\n", - "with a Gaussian distribution."
+ "with a Gaussian distribution.\n", + "\n", + "__Contents__\n", + "\n", + "This tutorial is split into the following sections:\n", + "\n", + "- **Example Source Code (`af.ex`)**: The example objects used in this tutorial.\n", + "- **Dataset**: Load the hierarchical Gaussian datasets with variable centres.\n", + "- **Analysis**: Create Analysis objects for each dataset.\n", + "- **Model**: Define a simple Gaussian model with uniform priors.\n", + "- **Model Fits (one-by-one)**: Fit each dataset individually using separate non-linear searches.\n", + "- **Results**: Analyze and plot the results of each individual fit.\n", + "- **Overall Gaussian Parent Distribution**: Fit a parent Gaussian distribution to the inferred centres.\n", + "- **Model**: Set up a ParentGaussian model for the parent distribution fitting.\n", + "- **Analysis + Search**: Create the analysis and search for the parent distribution fit." ] }, { @@ -65,7 +79,32 @@ "cell_type": "code", "metadata": {}, "source": [ - "total_datasets = 5\n", + "total_datasets = 5" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(path.join(\"dataset\", \"example_1d\", \"gaussian_x1__hierarchical\", \"dataset_0\")):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", "\n", "dataset_name_list = []\n", "data_list = []\n", @@ -95,7 +134,7 @@ "metadata": {}, "source": [ "By plotting the Gaussians we can just about make out that their centres are not all at pix 50, and are spread out\n", - "around it (albeit its difficult to be sure, due to the low signal-to-noise of the data). " + "around it (albeit it's difficult to be sure, due to the low signal-to-noise of the data)." ] }, { diff --git a/notebooks/overview/overview_1_the_basics.ipynb b/notebooks/overview/overview_1_the_basics.ipynb index 0b0a72f9..57ce7523 100644 --- a/notebooks/overview/overview_1_the_basics.ipynb +++ b/notebooks/overview/overview_1_the_basics.ipynb @@ -27,6 +27,24 @@ "This overview provides a high-level description of the basic API, with more advanced functionality described in the following\n", "overviews and the **PyAutoFit** cookbooks.\n", "\n", + "__Contents__\n", + "\n", + "This overview is split into the following sections:\n", + "\n", + "- **Example Use Case**: Introduce the 1D Gaussian profile fitting example used throughout this overview.\n", + "- **Model**: Define a 1D Gaussian as a PyAutoFit model via a Python class.\n", + "- **Instances**: Create model instances by mapping parameter vectors to Python class instances.\n", + "- **Analysis**: Define an ``Analysis`` class with a ``log_likelihood_function`` for fitting the model to data.\n", + "- **Non Linear Search**: Select and configure a non-linear search algorithm (Dynesty nested sampling).\n", + "- **Model Fit**: Execute the non-linear search to fit the model to the data.\n", + "- **Result**: Examine the result and maximum likelihood instance from the search.\n", + "- 
**Samples**: Access parameter samples and posterior information to visualize results.\n", + "- **Multiple Datasets**: Fit multiple datasets simultaneously using AnalysisFactor objects.\n", + "- **Factor Graph**: Combine AnalysisFactors into a FactorGraphModel for global model fitting.\n", + "- **Wrap Up**: Summary of the basic PyAutoFit functionality.\n", + "- **Resources**: Links to cookbooks and documentation for advanced features.\n", + "- **Extending Models**: Example of composing multi-component models (Gaussian + Exponential).\n", + "\n", "To begin, let's import ``autofit`` (and ``numpy``) using the convention below:" ] }, @@ -64,7 +82,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first."
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -700,8 +744,7 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.NestPlotter(samples=result.samples)\n", - "plotter.corner_anesthetic()" + "aplt.corner_anesthetic(samples=result.samples)" ], "outputs": [], "execution_count": null diff --git a/notebooks/overview/overview_2_scientific_workflow.ipynb b/notebooks/overview/overview_2_scientific_workflow.ipynb index 9a2b5549..1cec58b0 100644 --- a/notebooks/overview/overview_2_scientific_workflow.ipynb +++ b/notebooks/overview/overview_2_scientific_workflow.ipynb @@ -28,7 +28,24 @@ "- **Searches**: Support for various non-linear searches (e.g., nested sampling, MCMC), including gradient based fitting using JAX, to find the right method for your problem.\n", "- **Configs**: Configuration files that set default model, fitting, and visualization behaviors, streamlining model fitting.\n", "- **Database**: Store results in a relational SQLite3 database, enabling efficient management of large modeling results.\n", - "- **Scaling Up**: Guidance on scaling up your scientific workflow from small to large datasets." 
+ "- **Scaling Up**: Guidance on scaling up your scientific workflow from small to large datasets.\n", + "\n", + "__Contents__\n", + "\n", + "This overview is split into the following sections:\n", + "\n", + "- **Data**: Load the 1D Gaussian data from disk to illustrate the scientific workflow.\n", + "- **On The Fly**: Display intermediate results during model fitting for instant feedback.\n", + "- **Hard Disk Output**: Enable persistent saving of search results with customizable output structure.\n", + "- **Visualization**: Generate model-specific visualizations saved to disk during fitting.\n", + "- **Loading Results**: Use the Aggregator API to load and inspect results from hard disk.\n", + "- **Result Customization**: Extend the Result class with custom properties specific to the model-fitting problem.\n", + "- **Model Composition**: Construct diverse models with parameter assignments and complex hierarchies.\n", + "- **Searches**: Select and customize non-linear search methods appropriate for the problem.\n", + "- **Configs**: Use configuration files to define default model priors and search settings.\n", + "- **Database**: Store and query results in a SQLite3 relational database.\n", + "- **Scaling Up**: Guidance on expanding workflows from small to large datasets.\n", + "- **Wrap Up**: Summary of scientific workflow features in PyAutoFit." ] }, { @@ -65,7 +82,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/overview/overview_3_statistical_methods.ipynb b/notebooks/overview/overview_3_statistical_methods.ipynb index 0692dd40..23880750 100644 --- a/notebooks/overview/overview_3_statistical_methods.ipynb +++ b/notebooks/overview/overview_3_statistical_methods.ipynb @@ -114,7 +114,19 @@ "\n", "A full description of using sensitivity mapping is given below:\n", "\n", - "https://github.com/Jammy2211/autofit_workspace/blob/release/notebooks/features/sensitivity_mapping.ipynb" + "https://github.com/Jammy2211/autofit_workspace/blob/release/notebooks/features/sensitivity_mapping.ipynb\n", + "\n", + "__Contents__\n", + "\n", + "This overview is split into the following sections:\n", + "\n", + "- **Graphical Models**: Fit global and local parameters across multiple interdependent datasets simultaneously.\n", + "- **Hierarchical Models**: Infer parent distribution parameters when model parameters are drawn from a common distribution.\n", + "- **Model Comparison**: Fit multiple models and compare them using Bayesian evidence.\n", + "- **Interpolation**: Interpolate model parameters across similar datasets that vary smoothly.\n", + "- **Search Grid Search**: Perform a grid search over a parameter subset while using a non-linear search for others.\n", + "- **Search Chaining**: Chain a sequence of non-linear searches with increasing model complexity.\n", + "- **Sensitivity Mapping**: Determine the data quality needed for complex models to be favored." 
] }, { diff --git a/notebooks/plot/DynestyPlotter.ipynb b/notebooks/plot/DynestyPlotter.ipynb index fe57298c..663d4cce 100644 --- a/notebooks/plot/DynestyPlotter.ipynb +++ b/notebooks/plot/DynestyPlotter.ipynb @@ -8,7 +8,16 @@ "=====================\n", "\n", "This example illustrates how to plot visualization summarizing the results of a dynesty non-linear search using\n", - "a `NestPlotter`." + "the `autofit.plot` module-level functions.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Notation**: How parameter labels and superscripts are customized for plots.\n", + "- **Plotting**: Using the plot functions to visualize Dynesty search results.\n", + "- **Search Specific Visualization**: Accessing the native Dynesty sampler for custom visualizations.\n", + "- **Plots**: Producing Dynesty-specific diagnostic plots." ] }, { @@ -43,7 +52,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -78,15 +113,14 @@ "\n", "__Plotting__\n", "\n", - "We now pass the samples to a `NestPlotter` which will allow us to use dynesty's in-built plotting libraries to \n", - "make figures.\n", + "We now use the `autofit.plot` module-level functions to visualize the results.\n", "\n", - "The dynesty readthedocs describes fully all of the methods used below \n", + "The dynesty readthedocs describes fully all of the methods used below\n", "\n", " - https://dynesty.readthedocs.io/en/latest/quickstart.html\n", " - https://dynesty.readthedocs.io/en/latest/api.html#module-dynesty.plotting\n", - " \n", - "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are \n", + "\n", + "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are\n", "described in the API docs.\n", "\n", "Dynesty plotters use `_kwargs` dictionaries to pass visualization settings to matplotlib lib. For example, below,\n", @@ -94,7 +128,7 @@ "\n", " - Set the fontsize of the x and y labels by passing `label_kwargs={\"fontsize\": 16}`.\n", " - Set the fontsize of the title by passing `title_kwargs={\"fontsize\": \"10\"}`.\n", - " \n", + "\n", "There are other `_kwargs` inputs we pass as None, you should check out the Dynesty docs if you need to customize your\n", "figure." 
] @@ -103,23 +137,19 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.NestPlotter(samples=samples)" + "# %%\n", + "'''\n", + "The `corner_anesthetic` function produces a triangle of 1D and 2D PDF's of every parameter using the library `anesthetic`.\n", + "'''" ], "outputs": [], "execution_count": null }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `corner_anesthetic` method produces a triangle of 1D and 2D PDF's of every parameter using the library `anesthetic`." - ] - }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_anesthetic()" + "aplt.corner_anesthetic(samples=samples)" ], "outputs": [], "execution_count": null @@ -128,14 +158,15 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The `corner_cornerpy` method produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." + "The `corner_cornerpy` function produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." ] }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_cornerpy(\n", + "aplt.corner_cornerpy(\n", + " samples=samples,\n", " dims=None,\n", " span=None,\n", " quantiles=[0.025, 0.5, 0.975],\n", diff --git a/notebooks/plot/EmceePlotter.ipynb b/notebooks/plot/EmceePlotter.ipynb index 74fe3d6e..0a359205 100644 --- a/notebooks/plot/EmceePlotter.ipynb +++ b/notebooks/plot/EmceePlotter.ipynb @@ -4,11 +4,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Plots: MCMCPlotter\n", + "Plots: EmceePlotter\n", "===================\n", "\n", "This example illustrates how to plot visualization summarizing the results of a emcee non-linear search using\n", - "a `MCMCPlotter`." 
+ "the `autofit.plot` module-level functions.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Notation**: How parameter labels and superscripts are customized for plots.\n", + "- **Plotting**: Using the plot functions to visualize Emcee search results.\n", + "- **Search Specific Visualization**: Accessing the native Emcee sampler for custom visualizations." ] }, { @@ -43,7 +51,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -82,18 +116,17 @@ "\n", "__Plotting__\n", "\n", - "We now pass the samples to a `MCMCPlotter` which will allow us to use emcee's in-built plotting libraries to \n", - "make figures.\n", + "We now use the `autofit.plot` module-level functions to visualize the results.\n", "\n", - "The emcee readthedocs describes fully all of the methods used below \n", + "The emcee readthedocs describes fully all of the methods used below\n", "\n", " - https://emcee.readthedocs.io/en/stable/user/sampler/\n", - " \n", - " The plotter wraps the `corner` method of the library `corner.py` to make corner plots of the PDF:\n", + "\n", + "The `aplt.corner_cornerpy` function wraps the library `corner.py` to make corner plots of the PDF:\n", "\n", "- https://corner.readthedocs.io/en/latest/index.html\n", - " \n", - "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are \n", + "\n", + "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are\n", "described in the API docs." ] }, @@ -101,9 +134,7 @@ "cell_type": "code", "metadata": {}, "source": [ - "samples = result.samples\n", - "\n", - "plotter = aplt.MCMCPlotter(samples=samples)" + "samples = result.samples" ], "outputs": [], "execution_count": null @@ -112,14 +143,15 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The `corner` method produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." 
+ "The `corner_cornerpy` function produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." ] }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_cornerpy(\n", + "aplt.corner_cornerpy(\n", + " samples=samples,\n", " bins=20,\n", " range=None,\n", " color=\"k\",\n", diff --git a/notebooks/plot/GetDist.ipynb b/notebooks/plot/GetDist.ipynb index 67860230..75e7cc87 100644 --- a/notebooks/plot/GetDist.ipynb +++ b/notebooks/plot/GetDist.ipynb @@ -23,7 +23,22 @@ "\n", "Because GetDist is an optional library, you will likely have to install it manually via the command:\n", "\n", - "`pip install getdist`" + "`pip install getdist`\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Model Fit**: Create a Dynesty result for visualization with GetDist.\n", + "- **Param Names**: Generate the GetDist parameter names file.\n", + "- **GetDist MCSamples**: Create a GetDist MCSamples object from PyAutoFit samples.\n", + "- **Parameter Names**: Document the parameter naming conventions.\n", + "- **GetDist Plotter**: Create a GetDist plotter object.\n", + "- **GetDist Subplots**: Create triangle plots and other multi-parameter plots.\n", + "- **GetDist Single Plots**: Create individual 1D, 2D, and 3D PDF plots.\n", + "- **Output**: Save figures to disk.\n", + "- **GetDist Other Plots**: Reference additional plot options available in GetDist.\n", + "- **Plotting Multiple Samples**: Demonstrate plotting results from multiple searches." 
] }, { @@ -67,7 +82,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/plot/NautilusPlotter.ipynb b/notebooks/plot/NautilusPlotter.ipynb index da17ec80..c18e2b38 100644 --- a/notebooks/plot/NautilusPlotter.ipynb +++ b/notebooks/plot/NautilusPlotter.ipynb @@ -8,7 +8,16 @@ "======================\n", "\n", "This example illustrates how to plot visualization summarizing the results of a nautilus non-linear search using\n", - "a `MCMCPlotter`." + "the `autofit.plot` module-level functions.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Notation**: How parameter labels and superscripts are customized for plots.\n", + "- **Plotting**: Using the plot functions to visualize Nautilus search results.\n", + "- **Search Specific Visualization**: Accessing the native Nautilus sampler for custom visualizations.\n", + "- **Plots**: Producing Nautilus-specific diagnostic plots." 
] }, { @@ -43,7 +52,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -82,14 +117,13 @@ "\n", "__Plotting__\n", "\n", - "We now pass the samples to a `NestPlotter` which will allow us to use nautilus's in-built plotting libraries to \n", - "make figures.\n", + "We now use the `autofit.plot` module-level functions to visualize the results.\n", "\n", - "The nautilus readthedocs describes fully all of the methods used below \n", + "The nautilus readthedocs describes fully all of the methods used below\n", "\n", " - https://nautilus-sampler.readthedocs.io/en/stable/guides/crash_course.html\n", - " \n", - "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are \n", + "\n", + "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are\n", "described in the API docs.\n", "\n", "Nautilus plotters use `_kwargs` dictionaries to pass visualization settings to 
matplotlib lib. For example, below,\n", @@ -97,7 +131,7 @@ "\n", " - Set the fontsize of the x and y labels by passing `label_kwargs={\"fontsize\": 16}`.\n", " - Set the fontsize of the title by passing `title_kwargs={\"fontsize\": \"10\"}`.\n", - " \n", + "\n", "There are other `_kwargs` inputs we pass as None, you should check out the Nautilus docs if you need to customize your\n", "figure." ] @@ -106,25 +140,19 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.NestPlotter(\n", - " samples=samples,\n", - ")" + "# %%\n", + "'''\n", + "The `corner_anesthetic` function produces a triangle of 1D and 2D PDF's of every parameter using the library `anesthetic`.\n", + "'''" ], "outputs": [], "execution_count": null }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `corner_anesthetic` method produces a triangle of 1D and 2D PDF's of every parameter using the library `anesthetic`." - ] - }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_anesthetic()" + "aplt.corner_anesthetic(samples=samples)" ], "outputs": [], "execution_count": null @@ -133,14 +161,15 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The `corner` method produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." + "The `corner_cornerpy` function produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." 
] }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_cornerpy(\n", + "aplt.corner_cornerpy(\n", + " samples=samples,\n", " panelsize=3.5,\n", " yticksize=16,\n", " xticksize=16,\n", diff --git a/notebooks/plot/PySwarmsPlotter.ipynb b/notebooks/plot/PySwarmsPlotter.ipynb index 67f66bf8..0c87b27d 100644 --- a/notebooks/plot/PySwarmsPlotter.ipynb +++ b/notebooks/plot/PySwarmsPlotter.ipynb @@ -4,11 +4,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Plots: MLEPlotter\n", + "Plots: PySwarmsPlotter\n", "======================\n", "\n", "This example illustrates how to plot visualization summarizing the results of a pyswarms non-linear search using\n", - "a `MLEPlotter`." + "the `autofit.plot` module-level functions.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Notation**: How parameter labels and superscripts are customized for plots.\n", + "- **Plotting**: Using the plot functions to visualize PySwarms search results.\n", + "- **Search Specific Visualization**: Accessing the native PySwarms optimizer for custom visualizations." ] }, { @@ -25,8 +33,7 @@ "import matplotlib.pyplot as plt\n", "from os import path\n", "\n", - "import autofit as af\n", - "import autofit.plot as aplt" + "import autofit as af" ], "outputs": [], "execution_count": null @@ -43,7 +50,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -84,7 +117,7 @@ "\n", "__Plotting__\n", "\n", - "We now pass the samples to a `MLEPlotter` which will allow us to use pyswarms's in-built plotting libraries to \n", + "We now use pyswarms's in-built plotting libraries to\n", "make figures.\n", "\n", "The pyswarms readthedocs describes fully all of the methods used below \n", @@ -99,15 +132,8 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.MLEPlotter(samples=samples)" - ], - "outputs": [], - "execution_count": null - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ + "# %%\n", + "'''\n", + "__Search Specific Visualization__\n", + "\n", + "PySwarms has bespoke in-built visualization tools that can be used to plot its results.\n", @@ -117,8 +143,11 @@ "\n", "If you rerun the fit on a completed result, it will not be available in memory, and therefore\n", "will be loaded from the `files/search_internal` folder. The `search_internal` entry of the `output.yaml` must be true \n", - "for this to be possible."
- ] + "for this to be possible.\n", + "'''" + ], + "outputs": [], + "execution_count": null }, { "cell_type": "code", diff --git a/notebooks/plot/UltraNestPlotter.ipynb b/notebooks/plot/UltraNestPlotter.ipynb index d78225b7..6764e22a 100644 --- a/notebooks/plot/UltraNestPlotter.ipynb +++ b/notebooks/plot/UltraNestPlotter.ipynb @@ -8,14 +8,23 @@ "=======================\n", "\n", "This example illustrates how to plot visualization summarizing the results of a ultranest non-linear search using\n", - "a `NestPlotter`.\n", + "the `autofit.plot` module-level functions.\n", "\n", "Installation\n", "------------\n", "\n", "Because UltraNest is an optional library, you will likely have to install it manually via the command:\n", "\n", - "`pip install ultranest`" + "`pip install ultranest`\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Notation**: How parameter labels and superscripts are customized for plots.\n", + "- **Plotting**: Using the plot functions to visualize UltraNest search results.\n", + "- **Search Specific Visualization**: Accessing the native UltraNest sampler for custom visualizations.\n", + "- **Plots**: Producing UltraNest-specific diagnostic plots." ] }, { @@ -49,7 +58,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -84,8 +119,7 @@ "\n", "__Plotting__\n", "\n", - "We now pass the samples to a `NestPlotter` which will allow us to use ultranest's in-built plotting libraries to \n", - "make figures.\n", + "We now use the `autofit.plot` module-level functions to visualize the results.\n", "\n", "The ultranest readthedocs describes fully all of the methods used below \n", "\n", @@ -100,23 +134,19 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.NestPlotter(samples=samples)" + "# %%\n", + "'''\n", + "The `corner_anesthetic` function produces a triangle of 1D and 2D PDF's of every parameter using the library `anesthetic`.\n", + "'''" ], "outputs": [], "execution_count": null }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `corner_anesthetic` method produces a triangle of 1D and 2D PDF's of every parameter using the library `anesthetic`." - ] - }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_anesthetic()" + "aplt.corner_anesthetic(samples=samples)" ], "outputs": [], "execution_count": null @@ -125,14 +155,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The `corner` method produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." + "The `corner_cornerpy` function produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." 
] }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_cornerpy()" + "aplt.corner_cornerpy(samples=samples)" ], "outputs": [], "execution_count": null diff --git a/notebooks/plot/ZeusPlotter.ipynb b/notebooks/plot/ZeusPlotter.ipynb index d2f8133f..2809751d 100644 --- a/notebooks/plot/ZeusPlotter.ipynb +++ b/notebooks/plot/ZeusPlotter.ipynb @@ -4,11 +4,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Plots: MCMCPlotter\n", + "Plots: ZeusPlotter\n", "==================\n", "\n", "This example illustrates how to plot visualization summarizing the results of a zeus non-linear search using\n", - "a `MCMCPlotter`." + "the `autofit.plot` module-level functions.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Notation**: How parameter labels and superscripts are customized for plots.\n", + "- **Plotting**: Using the plot functions to visualize Zeus search results.\n", + "- **Search Specific Visualization**: Accessing the native Zeus sampler for custom visualizations." ] }, { @@ -43,7 +51,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", @@ -84,19 +118,18 @@ "\n", "__Plotting__\n", "\n", - "We now pass the samples to a `MCMCPlotter` which will allow us to use dynesty's in-built plotting libraries to \n", - "make figures.\n", + "We now use the `autofit.plot` module-level functions to visualize the results.\n", "\n", - "The zeus readthedocs describes fully all of the methods used below \n", + "The zeus readthedocs describes fully all of the methods used below\n", "\n", " - https://zeus-mcmc.readthedocs.io/en/latest/api/plotting.html#cornerplot\n", " - https://zeus-mcmc.readthedocs.io/en/latest/notebooks/normal_distribution.html\n", - " \n", - " The plotter wraps the `corner` method of the library `corner.py` to make corner plots of the PDF:\n", + "\n", + "The `aplt.corner_cornerpy` function wraps the library `corner.py` to make corner plots of the PDF:\n", "\n", "- https://corner.readthedocs.io/en/latest/index.html\n", - " \n", - "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are \n", + "\n", + "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are\n", "described in the API docs." 
] }, @@ -104,23 +137,20 @@ "cell_type": "code", "metadata": {}, "source": [ - "plotter = aplt.MCMCPlotter(samples=samples)" + "# %%\n", + "'''\n", + "The `corner_cornerpy` function produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`.\n", + "'''" ], "outputs": [], "execution_count": null }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `corner` method produces a triangle of 1D and 2D PDF's of every parameter using the library `corner.py`." - ] - }, { "cell_type": "code", "metadata": {}, "source": [ - "plotter.corner_cornerpy(\n", + "aplt.corner_cornerpy(\n", + " samples=samples,\n", " weight_list=None,\n", " levels=None,\n", " span=None,\n", diff --git a/notebooks/searches/mcmc/Emcee.ipynb b/notebooks/searches/mcmc/Emcee.ipynb index 422419f8..21a05606 100644 --- a/notebooks/searches/mcmc/Emcee.ipynb +++ b/notebooks/searches/mcmc/Emcee.ipynb @@ -12,7 +12,17 @@ "Information about Emcee can be found at the following links:\n", "\n", " - https://github.com/dfm/emcee\n", - " - https://emcee.readthedocs.io/en/stable/" + " - https://emcee.readthedocs.io/en/stable/\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the Emcee MCMC sampler.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data.\n", + "- **Search Internal**: Accessing the internal Emcee sampler for advanced use." 
] }, { @@ -48,7 +58,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/mcmc/Zeus.ipynb b/notebooks/searches/mcmc/Zeus.ipynb index 126cc605..b2a44c59 100644 --- a/notebooks/searches/mcmc/Zeus.ipynb +++ b/notebooks/searches/mcmc/Zeus.ipynb @@ -12,7 +12,17 @@ "Information about Zeus can be found at the following links:\n", "\n", " - https://github.com/minaskar/zeus\n", - " - https://zeus-mcmc.readthedocs.io/en/latest/" + " - https://zeus-mcmc.readthedocs.io/en/latest/\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the Zeus MCMC sampler.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data.\n", + "- **Search 
Internal**: Accessing the internal Zeus sampler for advanced use." ] }, { @@ -48,7 +58,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/mle/Drawer.ipynb b/notebooks/searches/mle/Drawer.ipynb index 18f68182..35e917f2 100644 --- a/notebooks/searches/mle/Drawer.ipynb +++ b/notebooks/searches/mle/Drawer.ipynb @@ -25,7 +25,16 @@ "\n", " - For advanced modeling tools, for example sensitivity mapping performed via the `Sensitivity` object,\n", " the `Drawer` search may be sufficient to perform the overall modeling task, without the need of performing\n", - " an actual parameter space search." 
+ " an actual parameter space search.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the Drawer search to draw samples from the priors.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data." ] }, { @@ -61,7 +70,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/mle/LBFGS.ipynb b/notebooks/searches/mle/LBFGS.ipynb index c048555e..7cab2c5a 100644 --- a/notebooks/searches/mle/LBFGS.ipynb +++ b/notebooks/searches/mle/LBFGS.ipynb @@ -11,7 +11,17 @@ "\n", "Information about the L-BFGS method can be found at the following links:\n", "\n", - " - https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html" + " - https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the L-BFGS optimization algorithm.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data.\n", + "- **Search Internal**: Accessing the internal scipy optimizer for advanced use." 
] }, { @@ -47,7 +57,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/mle/PySwarmsGlobal.ipynb b/notebooks/searches/mle/PySwarmsGlobal.ipynb index 6612511d..44195393 100644 --- a/notebooks/searches/mle/PySwarmsGlobal.ipynb +++ b/notebooks/searches/mle/PySwarmsGlobal.ipynb @@ -13,7 +13,17 @@ "\n", " - https://github.com/ljvmiranda921/pyswarms\n", " - https://pyswarms.readthedocs.io/en/latest/index.html\n", - " - https://pyswarms.readthedocs.io/en/latest/api/pyswarms.single.html#module-pyswarms.single.global_best" + " - https://pyswarms.readthedocs.io/en/latest/api/pyswarms.single.html#module-pyswarms.single.global_best\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and 
running the PySwarmsGlobal particle swarm optimizer.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data.\n", + "- **Search Internal**: Accessing the internal PySwarms optimizer for advanced use." ] }, { @@ -49,7 +59,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/mle/PySwarmsLocal.ipynb b/notebooks/searches/mle/PySwarmsLocal.ipynb index 1d7c8adb..a91f1eeb 100644 --- a/notebooks/searches/mle/PySwarmsLocal.ipynb +++ b/notebooks/searches/mle/PySwarmsLocal.ipynb @@ -13,7 +13,17 @@ "\n", " - https://github.com/ljvmiranda921/pyswarms\n", " - https://pyswarms.readthedocs.io/en/latest/index.html\n", - " - https://pyswarms.readthedocs.io/en/latest/api/pyswarms.single.html#module-pyswarms.single.local_best" + " - https://pyswarms.readthedocs.io/en/latest/api/pyswarms.single.html#module-pyswarms.single.local_best\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following 
sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the PySwarmsLocal particle swarm optimizer.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data.\n", + "- **Search Internal**: Accessing the internal PySwarms optimizer for advanced use." ] }, { @@ -49,7 +59,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/nest/DynestyDynamic.ipynb b/notebooks/searches/nest/DynestyDynamic.ipynb index da767cb5..f82aac6d 100644 --- a/notebooks/searches/nest/DynestyDynamic.ipynb +++ b/notebooks/searches/nest/DynestyDynamic.ipynb @@ -12,7 +12,17 @@ "Information about Dynesty can be found at the following links:\n", "\n", " - https://github.com/joshspeagle/dynesty\n", - " - https://dynesty.readthedocs.io/en/latest/" + " - https://dynesty.readthedocs.io/en/latest/\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the DynestyDynamic nested sampler.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data.\n", + "- **Search Internal**: Accessing the internal Dynesty sampler for advanced use." 
] }, { @@ -48,7 +58,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/nest/DynestyStatic.ipynb b/notebooks/searches/nest/DynestyStatic.ipynb index e5acab08..973773ad 100644 --- a/notebooks/searches/nest/DynestyStatic.ipynb +++ b/notebooks/searches/nest/DynestyStatic.ipynb @@ -12,7 +12,17 @@ "Information about Dynesty can be found at the following links:\n", "\n", " - https://github.com/joshspeagle/dynesty\n", - " - https://dynesty.readthedocs.io/en/latest/" + " - https://dynesty.readthedocs.io/en/latest/\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the DynestyStatic nested sampler.\n", + "- **Result**: Inspecting the result and comparing the maximum log 
likelihood model to the data.\n", + "- **Search Internal**: Accessing the internal Dynesty sampler for advanced use." ] }, { @@ -48,7 +58,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/nest/Nautilus.ipynb b/notebooks/searches/nest/Nautilus.ipynb index 5e08a8fe..03c350b5 100644 --- a/notebooks/searches/nest/Nautilus.ipynb +++ b/notebooks/searches/nest/Nautilus.ipynb @@ -12,7 +12,17 @@ "Information about Nautilus can be found at the following links:\n", "\n", " - https://nautilus-sampler.readthedocs.io/en/stable/index.html\n", - " - https://github.com/johannesulf/nautilus" + " - https://github.com/johannesulf/nautilus\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and 
running the Nautilus nested sampler.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data.\n", + "- **Search Internal**: Accessing the internal Nautilus sampler for advanced use." ] }, { @@ -48,7 +58,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/nest/UltraNest.ipynb b/notebooks/searches/nest/UltraNest.ipynb index d51905a9..9bdc0800 100644 --- a/notebooks/searches/nest/UltraNest.ipynb +++ b/notebooks/searches/nest/UltraNest.ipynb @@ -15,7 +15,16 @@ "Information about UltraNest can be found at the following links:\n", "\n", " - https://github.com/JohannesBuchner/UltraNest\n", - " - https://johannesbuchner.github.io/UltraNest/readme.html" + " - https://johannesbuchner.github.io/UltraNest/readme.html\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Data**: Loading and plotting the 1D Gaussian dataset used to demonstrate the 
search.\n", + "- **Model + Analysis**: Setting up the model and analysis for the fitting example.\n", + "- **Search**: Configuring and running the UltraNest nested sampler.\n", + "- **Result**: Inspecting the result and comparing the maximum log likelihood model to the data." ] }, { @@ -51,7 +60,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." + ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/searches/start_point.ipynb b/notebooks/searches/start_point.ipynb index 3c6199e2..95a40a59 100644 --- a/notebooks/searches/start_point.ipynb +++ b/notebooks/searches/start_point.ipynb @@ -44,6 +44,19 @@ "\n", "These are functionally identical to the `Analysis` and `Gaussian` objects you have seen elsewhere in the workspace.\n", "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Comparison to Priors**: Explain the differences between start-point and prior customization approaches.\n", + "- **Example Source Code (`af.ex`)**: The example objects used in 
this script.\n", + "- **Start Here Notebook**: Reference to the related tutorial notebook.\n", + "- **Data**: Load and plot the 1D Gaussian dataset.\n", + "- **Start Point Priors**: Define a model with broad uniform priors for start-point demonstration.\n", + "- **Start Point**: Set parameter start point ranges for initializing the search.\n", + "- **Search + Analysis + Model-Fit**: Perform the model-fit with the configured start point.\n", + "- **Result**: Extract and display the initial walker samples and fit results.\n", + "\n", "__Start Here Notebook__\n", "\n", "If any code in this script is unclear, refer to the `modeling/start_here.ipynb` notebook." @@ -80,7 +93,33 @@ "cell_type": "code", "metadata": {}, "source": [ - "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")\n", + "dataset_path = path.join(\"dataset\", \"example_1d\", \"gaussian_x1\")" + ], + "outputs": [], + "execution_count": null + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "__Dataset Auto-Simulation__\n", + "\n", + "If the dataset does not already exist on your system, it will be created by running the corresponding\n", + "simulator script. This ensures that all example scripts can be run without manually simulating data first." 
+ ] + }, + { + "cell_type": "code", + "metadata": {}, + "source": [ + "if not path.exists(dataset_path):\n", + " import subprocess\n", + " import sys\n", + " subprocess.run(\n", + " [sys.executable, \"scripts/simulators/simulators.py\"],\n", + " check=True,\n", + " )\n", + "\n", "data = af.util.numpy_array_from_json(file_path=path.join(dataset_path, \"data.json\"))\n", "noise_map = af.util.numpy_array_from_json(\n", " file_path=path.join(dataset_path, \"noise_map.json\")\n", diff --git a/notebooks/simulators/simulators.ipynb b/notebooks/simulators/simulators.ipynb index d5e9cd04..3f85f5d5 100644 --- a/notebooks/simulators/simulators.ipynb +++ b/notebooks/simulators/simulators.ipynb @@ -6,7 +6,29 @@ "source": [ "__Simulators__\n", "\n", - "These scripts simulate the 1D Gaussian datasets used to demonstrate model-fitting." + "These scripts simulate the 1D Gaussian datasets used to demonstrate model-fitting.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Gaussian x1**: Simulate a single 1D Gaussian dataset.\n", + "- **Gaussian x1 (0)**: Simulate a single Gaussian with sigma=1.0.\n", + "- **Gaussian x1 (1)**: Simulate a single Gaussian with sigma=5.0.\n", + "- **Gaussian x1 (2)**: Simulate a single Gaussian with sigma=10.0.\n", + "- **Gaussian x1 (Identical 0)**: Simulate an identical single Gaussian dataset (copy 0).\n", + "- **Gaussian x1 (Identical 1)**: Simulate an identical single Gaussian dataset (copy 1).\n", + "- **Gaussian x1 (Identical 2)**: Simulate an identical single Gaussian dataset (copy 2).\n", + "- **Gaussian x1 + Exponential x1**: Simulate a dataset with one Gaussian and one Exponential.\n", + "- **Gaussian x2 + Exponential x1**: Simulate a dataset with two Gaussians and one Exponential.\n", + "- **Gaussian x2**: Simulate a dataset with two Gaussians.\n", + "- **Gaussian x3**: Simulate a dataset with three Gaussians.\n", + "- **Gaussian x5**: Simulate a dataset with five Gaussians.\n", + 
"- **Gaussian x1 unconvolved**: Simulate a single Gaussian without convolution.\n", + "- **Gaussian x1 convolved**: Simulate a single Gaussian with kernel convolution.\n", + "- **Gaussian x1 with feature**: Simulate a Gaussian with a small feature bump.\n", + "- **Gaussian x2 split**: Simulate two separated Gaussians.\n", + "- **Gaussian x1 time**: Simulate time-varying Gaussian datasets." ] }, { diff --git a/notebooks/simulators/simulators_sample.ipynb b/notebooks/simulators/simulators_sample.ipynb index 02aecf36..b7e5133d 100644 --- a/notebooks/simulators/simulators_sample.ipynb +++ b/notebooks/simulators/simulators_sample.ipynb @@ -7,7 +7,15 @@ "__Simulators__\n", "\n", "These scripts simulates many 1D Gaussian datasets with a low signal to noise ratio, which are used to demonstrate\n", - "model-fitting." + "model-fitting.\n", + "\n", + "__Contents__\n", + "\n", + "This script is split into the following sections:\n", + "\n", + "- **Gaussian x1 low snr (centre fixed to 50.0)**: Simulate low signal-to-noise Gaussian datasets with a fixed centre.\n", + "- **Gaussian x1 low snr (centre drawn from parent Gaussian distribution to 50.0)**: Simulate hierarchical Gaussian datasets with centres drawn from a parent distribution.\n", + "- **Gaussian x2 offset centre**: Simulate datasets with two Gaussians with offset centres for graphical model demonstrations." ] }, { diff --git a/scripts/cookbooks/samples.py b/scripts/cookbooks/samples.py index 6e4359ed..ec8fe16e 100644 --- a/scripts/cookbooks/samples.py +++ b/scripts/cookbooks/samples.py @@ -268,13 +268,12 @@ The Probability Density Functions (PDF's) of the results can be plotted using the non-linear search in-built visualization tools. -This fit used `Emcee` therefore we use the `MCMCPlotter` for visualization, which wraps the Python library `corner.py`. +This fit used `Emcee`, so we use `corner.py` for visualization via the `aplt.corner_cornerpy` function. 
The `autofit_workspace/*/plots` folder illustrates other packages that can be used to make these plots using the standard output results formats (e.g. `GetDist.py`). """ -plotter = aplt.MCMCPlotter(samples=result.samples) -plotter.corner_cornerpy() +aplt.corner_cornerpy(samples=result.samples) """ __Maximum Likelihood__ diff --git a/scripts/cookbooks/search.py b/scripts/cookbooks/search.py index eb62bea5..b53fe802 100644 --- a/scripts/cookbooks/search.py +++ b/scripts/cookbooks/search.py @@ -189,15 +189,14 @@ This uses that search's in-built visualization libraries, which are fully described in the `plot` package of the workspace. -For example, `Emcee` has a corresponding `MCMCPlotter`, which is used as follows. +For example, `Emcee` results can be plotted using the `aplt.corner_cornerpy` function as follows. Checkout the `plot` package for a complete description of the plots that can be made for a given search. """ samples = result.samples -plotter = aplt.MCMCPlotter(samples=samples) - -plotter.corner_cornerpy( +aplt.corner_cornerpy( + samples=samples, bins=20, range=None, color="k", diff --git a/scripts/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.py b/scripts/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.py index 9746b5df..c26b6322 100644 --- a/scripts/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.py +++ b/scripts/howtofit/chapter_1_introduction/tutorial_5_results_and_samples.py @@ -25,7 +25,7 @@ - **Posterior / PDF**: Access median PDF estimates for the model parameters. - **Plot**: Visualize model fit results using instances. - **Errors**: Compute parameter error estimates at specified sigma confidence limits. -- **PDF**: Plot Probability Density Functions using the MCMCPlotter. +- **PDF**: Plot Probability Density Functions using `corner.py`. - **Other Results**: Access maximum log posterior and other sample statistics. - **Sample Instance**: Create instances from individual samples in the sample list. 
- **Bayesian Evidence**: Access the log evidence for nested sampling searches. @@ -515,11 +515,10 @@ def model_data_from(self, xvalues: np.ndarray): """ __PDF__ -The Probability Density Functions (PDF's) of the results can be plotted using the Emcee's visualization -tool `corner.py`, which is wrapped via the `MCMCPlotter` object. +The Probability Density Functions (PDF's) of the results can be plotted using Emcee's visualization +tool `corner.py`, which is wrapped via the `aplt.corner_cornerpy` function. """ -plotter = aplt.MCMCPlotter(samples=result.samples) -plotter.corner_cornerpy() +aplt.corner_cornerpy(samples=result.samples) """ __Other Results__ diff --git a/scripts/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.py b/scripts/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.py index 2e7b9022..ab691677 100644 --- a/scripts/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.py +++ b/scripts/howtofit/chapter_3_graphical_models/tutorial_1_individual_models.py @@ -239,8 +239,7 @@ """ for samples in samples_list: - plotter = aplt.NestPlotter(samples=samples) - plotter.corner_cornerpy() + aplt.corner_cornerpy(samples=samples) """ We can also print the values of each centre estimate, including their estimates at 3.0 sigma. diff --git a/scripts/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.py b/scripts/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.py index 864e172f..5e19bc53 100644 --- a/scripts/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.py +++ b/scripts/howtofit/chapter_3_graphical_models/tutorial_3_graphical_benefits.py @@ -255,8 +255,7 @@ We can see this by inspecting the probability distribution function (PDF) of the fit, placing particular focus on the 2D degeneracy between the Gaussians centres. 
""" -plotter = aplt.NestPlotter(samples=result_list[0].samples) -plotter.corner_cornerpy() +aplt.corner_cornerpy(samples=result_list[0].samples) """ The problem is that the simple approach of taking a weighted average does not capture the curved banana-like shape diff --git a/scripts/overview/overview_1_the_basics.py b/scripts/overview/overview_1_the_basics.py index cc0f1e4c..cf65a478 100644 --- a/scripts/overview/overview_1_the_basics.py +++ b/scripts/overview/overview_1_the_basics.py @@ -494,8 +494,7 @@ def log_likelihood_function(self, instance) -> float: Below we use the samples to plot the probability density function cornerplot of the results. """ -plotter = aplt.NestPlotter(samples=result.samples) -plotter.corner_anesthetic() +aplt.corner_anesthetic(samples=result.samples) """ The `results cookbook `_ also provides diff --git a/scripts/plot/DynestyPlotter.py b/scripts/plot/DynestyPlotter.py index 7e89fd93..417a7ea4 100644 --- a/scripts/plot/DynestyPlotter.py +++ b/scripts/plot/DynestyPlotter.py @@ -3,14 +3,14 @@ ===================== This example illustrates how to plot visualization summarizing the results of a dynesty non-linear search using -a `NestPlotter`. +the `autofit.plot` module-level functions. __Contents__ This script is split into the following sections: - **Notation**: How parameter labels and superscripts are customized for plots. -- **Plotting**: Using the NestPlotter to visualize Dynesty search results. +- **Plotting**: Using the plot functions to visualize Dynesty search results. - **Search Specific Visualization**: Accessing the native Dynesty sampler for custom visualizations. - **Plots**: Producing Dynesty-specific diagnostic plots. """ @@ -75,15 +75,14 @@ __Plotting__ -We now pass the samples to a `NestPlotter` which will allow us to use dynesty's in-built plotting libraries to -make figures. +We now use the `autofit.plot` module-level functions to visualize the results. 
-The dynesty readthedocs describes fully all of the methods used below 
+The dynesty readthedocs fully describes all of the methods used below

 - https://dynesty.readthedocs.io/en/latest/quickstart.html
 - https://dynesty.readthedocs.io/en/latest/api.html#module-dynesty.plotting

-
-In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
+
+In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
described in the API docs. Dynesty plotters use `_kwargs` dictionaries to pass visualization settings to matplotlib lib. For example, below, @@ -91,7 +90,7 @@ - Set the fontsize of the x and y labels by passing `label_kwargs={"fontsize": 16}`. - Set the fontsize of the title by passing `title_kwargs={"fontsize": "10"}`. 
-
+
There are other `_kwargs` inputs we pass as None, you should check out the Dynesty docs if you need to customize your figure. """ diff --git a/scripts/plot/EmceePlotter.py b/scripts/plot/EmceePlotter.py index faea689d..8016e793 100644 --- a/scripts/plot/EmceePlotter.py +++ b/scripts/plot/EmceePlotter.py @@ -1,16 +1,16 @@ """ 
-Plots: MCMCPlotter 
+Plots: EmceePlotter 
=================== This example illustrates how to plot visualization summarizing the results of a emcee non-linear search using 
-a `MCMCPlotter`. 
+the `autofit.plot` module-level functions. 
__Contents__ This script is split into the following sections: - **Notation**: How parameter labels and superscripts are customized for plots. 
-- **Plotting**: Using the MCMCPlotter to visualize Emcee search results. 
+- **Plotting**: Using the plot functions to visualize Emcee search results. 
- **Search Specific Visualization**: Accessing the native Emcee sampler for custom visualizations. """ @@ -78,18 +78,17 @@ __Plotting__ 
-We now pass the samples to a `MCMCPlotter` which will allow us to use emcee's in-built plotting libraries to 
-make figures. 
+We now use the `autofit.plot` module-level functions to visualize the results.

-The emcee readthedocs describes fully all of the methods used below 
+The emcee readthedocs fully describes all of the methods used below

 - https://emcee.readthedocs.io/en/stable/user/sampler/
-
- The plotter wraps the `corner` method of the library `corner.py` to make corner plots of the PDF: 
+
+The `aplt.corner_cornerpy` function wraps the `corner.py` library to make corner plots of the PDF:

 - https://corner.readthedocs.io/en/latest/index.html
-
-In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
+
+In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
described in the API docs. """ samples = result.samples diff --git a/scripts/plot/NautilusPlotter.py b/scripts/plot/NautilusPlotter.py index 14944c09..40f97fcd 100644 --- a/scripts/plot/NautilusPlotter.py +++ b/scripts/plot/NautilusPlotter.py @@ -3,14 +3,14 @@ ====================== This example illustrates how to plot visualization summarizing the results of a nautilus non-linear search using 
-a `MCMCPlotter`. 
+the `autofit.plot` module-level functions. 
__Contents__ This script is split into the following sections: - **Notation**: How parameter labels and superscripts are customized for plots. 
-- **Plotting**: Using the NestPlotter to visualize Nautilus search results. 
+- **Plotting**: Using the plot functions to visualize Nautilus search results. 
- **Search Specific Visualization**: Accessing the native Nautilus sampler for custom visualizations. - **Plots**: Producing Nautilus-specific diagnostic plots. """ @@ -79,14 +79,13 @@ __Plotting__ 
-We now pass the samples to a `NestPlotter` which will allow us to use nautilus's in-built plotting libraries to 
-make figures. 
+We now use the `autofit.plot` module-level functions to visualize the results. 
-The nautilus readthedocs describes fully all of the methods used below 
+The nautilus readthedocs fully describes all of the methods used below

 - https://nautilus-sampler.readthedocs.io/en/stable/guides/crash_course.html

-
-In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
+
+In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
described in the API docs. Nautilus plotters use `_kwargs` dictionaries to pass visualization settings to matplotlib lib. For example, below, @@ -94,7 +93,7 @@ - Set the fontsize of the x and y labels by passing `label_kwargs={"fontsize": 16}`. - Set the fontsize of the title by passing `title_kwargs={"fontsize": "10"}`. 
-
+
There are other `_kwargs` inputs we pass as None, you should check out the Nautilus docs if you need to customize your figure. """ diff --git a/scripts/plot/PySwarmsPlotter.py b/scripts/plot/PySwarmsPlotter.py index a57eaab1..960d7fc7 100644 --- a/scripts/plot/PySwarmsPlotter.py +++ b/scripts/plot/PySwarmsPlotter.py @@ -1,16 +1,16 @@ """ 
-Plots: MLEPlotter 
+Plots: PySwarmsPlotter 
====================== This example illustrates how to plot visualization summarizing the results of a pyswarms non-linear search using 
-a `MLEPlotter`. 
+the `autofit.plot` module-level functions. 
__Contents__ This script is split into the following sections: - **Notation**: How parameter labels and superscripts are customized for plots. 
-- **Plotting**: Using the MLEPlotter to visualize PySwarms search results. 
+- **Plotting**: Using the plot functions to visualize PySwarms search results. 
- **Search Specific Visualization**: Accessing the native PySwarms optimizer for custom visualizations. 
""" @@ -79,7 +79,7 @@ __Plotting__ -We now pass the samples to a `MLEPlotter` which will allow us to use pyswarms's in-built plotting libraries to +We now use the `autofit.plot` module-level functions and pyswarms's in-built plotting libraries to make figures. The pyswarms readthedocs describes fully all of the methods used below diff --git a/scripts/plot/UltraNestPlotter.py b/scripts/plot/UltraNestPlotter.py index c26ad3ba..2353179c 100644 --- a/scripts/plot/UltraNestPlotter.py +++ b/scripts/plot/UltraNestPlotter.py @@ -3,7 +3,7 @@ ======================= This example illustrates how to plot visualization summarizing the results of a ultranest non-linear search using -a `NestPlotter`. +the `autofit.plot` module-level functions. Installation ------------ @@ -17,7 +17,7 @@ This script is split into the following sections: - **Notation**: How parameter labels and superscripts are customized for plots. -- **Plotting**: Using the NestPlotter to visualize UltraNest search results. +- **Plotting**: Using the plot functions to visualize UltraNest search results. - **Search Specific Visualization**: Accessing the native UltraNest sampler for custom visualizations. - **Plots**: Producing UltraNest-specific diagnostic plots. """ @@ -81,8 +81,7 @@ __Plotting__ -We now pass the samples to a `NestPlotter` which will allow us to use ultranest's in-built plotting libraries to -make figures. +We now use the `autofit.plot` module-level functions to visualize the results. The ultranest readthedocs describes fully all of the methods used below diff --git a/scripts/plot/ZeusPlotter.py b/scripts/plot/ZeusPlotter.py index f1864e9a..dc40bd0f 100644 --- a/scripts/plot/ZeusPlotter.py +++ b/scripts/plot/ZeusPlotter.py @@ -1,16 +1,16 @@ """ -Plots: MCMCPlotter +Plots: ZeusPlotter ================== This example illustrates how to plot visualization summarizing the results of a zeus non-linear search using -a `MCMCPlotter`. +the `autofit.plot` module-level functions. 
__Contents__ This script is split into the following sections: - **Notation**: How parameter labels and superscripts are customized for plots. 
-- **Plotting**: Using the MCMCPlotter to visualize Zeus search results. 
+- **Plotting**: Using the plot functions to visualize Zeus search results. 
- **Search Specific Visualization**: Accessing the native Zeus sampler for custom visualizations. """ @@ -80,19 +80,18 @@ __Plotting__ 
-We now pass the samples to a `MCMCPlotter` which will allow us to use dynesty's in-built plotting libraries to 
-make figures. 
+We now use the `autofit.plot` module-level functions to visualize the results.

-The zeus readthedocs describes fully all of the methods used below 
+The zeus readthedocs fully describes all of the methods used below

- https://zeus-mcmc.readthedocs.io/en/latest/api/plotting.html#cornerplot
- https://zeus-mcmc.readthedocs.io/en/latest/notebooks/normal_distribution.html

-
- The plotter wraps the `corner` method of the library `corner.py` to make corner plots of the PDF: 
+
+The `aplt.corner_cornerpy` function wraps the `corner.py` library to make corner plots of the PDF:

- https://corner.readthedocs.io/en/latest/index.html
-
-In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
+
+In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are 
described in the API docs. """ """
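Across all of the files patched above, the change is the same: the per-search plotter classes (`MCMCPlotter`, `NestPlotter`, `MLEPlotter`) are replaced by module-level functions such as `aplt.corner_cornerpy`, which take `samples` directly. A minimal sketch of the two call shapes follows; the stub class and function bodies are hypothetical stand-ins for illustration only, not the real `autofit.plot` implementations:

```python
# Sketch of the API migration this patch applies, using stand-in stubs
# (the real autofit.plot objects are NOT imported or reproduced here).

class MCMCPlotter:
    """Old-style API: construct a plotter object, then call a plot method."""

    def __init__(self, samples):
        self.samples = samples

    def corner_cornerpy(self, **kwargs):
        # Stand-in: record the plot call instead of drawing a figure.
        return ("corner", self.samples, kwargs)


def corner_cornerpy(samples, **kwargs):
    """New-style API: a single module-level function, samples passed directly."""
    return ("corner", samples, kwargs)


samples = [0.1, 0.2, 0.3]  # placeholder for a real Samples object

# Old pattern (removed by this patch):
old = MCMCPlotter(samples=samples).corner_cornerpy(bins=20)

# New pattern (added by this patch):
new = corner_cornerpy(samples=samples, bins=20)

assert old == new  # both routes describe the same plot call
```

The module-level form collapses construct-then-call into one call, which is why each hunk above replaces two lines with one.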