Merged
2 changes: 1 addition & 1 deletion AGENTS.md
@@ -24,7 +24,7 @@ NUMBA_CACHE_DIR=/tmp/numba_cache MPLCONFIGDIR=/tmp/matplotlib python -m pytest t
 
 ## Key Architecture
 
-- **Non-linear searches** (`non_linear/search/`): MCMC (emcee), nested sampling (dynesty, nautilus), MLE (LBFGS, pyswarms)
+- **Non-linear searches** (`non_linear/search/`): MCMC (emcee), nested sampling (dynesty, nautilus), MLE (LBFGS, BFGS, drawer)
 - **Model composition** (`mapper/`): `af.Model`, `af.Collection`, prior distributions
 - **Analysis** (`non_linear/analysis/`): base `af.Analysis` class with `log_likelihood_function`
 - **Aggregator** (`aggregator/`): results aggregation across runs
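The `af.Analysis` bullet above describes the core contract: a search proposes parameter values and scores them through `log_likelihood_function`. A minimal plain-Python sketch of that pattern (a hypothetical stand-in with illustrative names, not autofit's actual classes or signatures):

```python
import math

# Hypothetical stand-in for the `af.Analysis` pattern: the non-linear
# search proposes parameters and scores them via `log_likelihood_function`.
# Names here are illustrative, not autofit's API.
class GaussianAnalysis:
    def __init__(self, data):
        self.data = data

    def log_likelihood_function(self, mean, sigma):
        # Gaussian log likelihood of the data given (mean, sigma).
        norm = -0.5 * math.log(2 * math.pi * sigma**2)
        return sum(
            norm - 0.5 * ((x - mean) / sigma) ** 2 for x in self.data
        )

analysis = GaussianAnalysis(data=[0.9, 1.0, 1.1])
# The likelihood peaks near the sample mean, which is what an MLE
# search such as LBFGS would converge towards.
assert (
    analysis.log_likelihood_function(1.0, 0.1)
    > analysis.log_likelihood_function(5.0, 0.1)
)
```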
7 changes: 3 additions & 4 deletions CLAUDE.md
@@ -18,8 +18,8 @@ Shared utilities (e.g. `test_mode`, `jax_wrapper`) belong in autoconf.
 - `autofit/` - Main package
   - `non_linear/` - Non-linear search algorithms
     - `search/mcmc/` - MCMC (emcee, zeus)
-    - `search/mle/` - Maximum likelihood (LBFGS, pyswarms)
-    - `search/nest/` - Nested sampling (dynesty, nautilus, ultranest)
+    - `search/mle/` - Maximum likelihood (LBFGS, BFGS, drawer)
+    - `search/nest/` - Nested sampling (dynesty, nautilus)
     - `samples/` - Posterior samples handling
     - `paths/` - Output path management
     - `analysis/` - Analysis base classes
@@ -39,11 +39,10 @@ Shared utilities (e.g. `test_mode`, `jax_wrapper`) belong in autoconf.
 
 - `dynesty==2.1.4` - Nested sampling
 - `emcee>=3.1.6` - MCMC
-- `pyswarms==1.3.0` - Particle swarm optimisation
 - `scipy<=1.14.0` - Optimisation
 - `SQLAlchemy==2.0.32` - Database backend
 - `anesthetic==2.8.14` - Posterior analysis/plotting
-- Optional: `nautilus-sampler`, `ultranest`, `zeus-mcmc`, `getdist`
+- Optional: `nautilus-sampler`, `zeus-mcmc`, `getdist`
 
 ## Running Tests
 
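After this change, the dependency list above corresponds to a requirements fragment along these lines (a sketch assembled from the bullets, not the repository's actual requirements file):

```
dynesty==2.1.4
emcee>=3.1.6
scipy<=1.14.0
SQLAlchemy==2.0.32
anesthetic==2.8.14
# optional extras (pyswarms and ultranest no longer listed)
nautilus-sampler
zeus-mcmc
getdist
```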
3 changes: 0 additions & 3 deletions autofit/__init__.py
@@ -82,12 +82,9 @@
 from .non_linear.search.nest.nautilus.search import Nautilus
 from .non_linear.search.nest.dynesty.search.dynamic import DynestyDynamic
 from .non_linear.search.nest.dynesty.search.static import DynestyStatic
-from .non_linear.search.nest.ultranest.search import UltraNest
 from .non_linear.search.mle.drawer.search import Drawer
 from .non_linear.search.mle.bfgs.search import BFGS
 from .non_linear.search.mle.bfgs.search import LBFGS
-from .non_linear.search.mle.pyswarms.search.globe import PySwarmsGlobal
-from .non_linear.search.mle.pyswarms.search.local import PySwarmsLocal
 from .non_linear.paths.abstract import AbstractPaths
 from .non_linear.paths import DirectoryPaths
 from .non_linear.paths import DatabasePaths
2 changes: 1 addition & 1 deletion autofit/config/non_linear/README.rst
@@ -6,4 +6,4 @@ Files
 
 - ``mcmc.yaml``: Settings default behaviour of MCMC non-linear searches (e.g. Emcee).
 - ``nest.yaml``: Settings default behaviour of nested sampler non-linear searches (e.g. Dynesty).
-- ``mle.yaml``: Settings default behaviour of maximum likelihood estimator (mle) searches (e.g. PySwarms).
+- ``mle.yaml``: Settings default behaviour of maximum likelihood estimator (mle) searches (e.g. BFGS).
46 changes: 1 addition & 45 deletions autofit/config/non_linear/mle.yaml
@@ -2,53 +2,9 @@
 
 # **PyAutoFit** supports the following maximum likelihood estimator (MLE) algorithms:
 
-# - PySwarms: https://github.com/ljvmiranda921/pyswarms / https://pyswarms.readthedocs.io/en/latest/index.html
-
-# Settings in the [search], [run] and [options] entries are specific to each nested algorithm and should be
+# Settings in the [search], [run] and [options] entries are specific to each algorithm and should be
 # determined by consulting that method's own readthedocs.
 
-PySwarmsGlobal:
-  run:
-    iters: 2000
-  search:
-    cognitive: 0.5
-    ftol: -.inf
-    inertia: 0.9
-    n_particles: 50
-    social: 0.3
-  initialize:                  # The method used to generate where walkers are initialized in parameter space {prior | ball}.
-    method: ball               # priors: samples are initialized by randomly drawing from each parameter's prior. ball: samples are initialized by randomly drawing unit values from a narrow uniform distribution.
-    ball_lower_limit: 0.49     # The lower limit of the uniform distribution unit values are drawn from when initializing walkers using the ball method.
-    ball_upper_limit: 0.51     # The upper limit of the uniform distribution unit values are drawn from when initializing walkers using the ball method.
-  parallel:
-    number_of_cores: 1         # The number of cores the search is parallelized over by default, using Python multiprocessing.
-  printing:
-    silence: false             # If True, the default print output of the non-linear search is silcened and not printed by the Python interpreter.
-  iterations_per_full_update: 500   # Non-linear search iterations between every full update, which outputs all visuals and result fits (e.g. model.result, search.summary), this exits the search and can be slow.
-  iterations_per_quick_update: 500  # Non-linear search iterations between every quick update, which just displays the maximum likelihood model fit.
-  remove_state_files_at_end: true   # Whether to remove the savestate of the seach (e.g. the Emcee hdf5 file) at the end to save hard-disk space (results are still stored as PyAutoFit pickles and loadable).
-PySwarmsLocal:
-  run:
-    iters: 2000
-  search:
-    cognitive: 0.5
-    ftol: -.inf
-    inertia: 0.9
-    minkowski_p_norm: 2
-    n_particles: 50
-    number_of_k_neighbors: 3
-    social: 0.3
-  initialize:                  # The method used to generate where walkers are initialized in parameter space {prior | ball}.
-    method: ball               # priors: samples are initialized by randomly drawing from each parameter's prior. ball: samples are initialized by randomly drawing unit values from a narrow uniform distribution.
-    ball_lower_limit: 0.49     # The lower limit of the uniform distribution unit values are drawn from when initializing walkers using the ball method.
-    ball_upper_limit: 0.51     # The upper limit of the uniform distribution unit values are drawn from when initializing walkers using the ball method.
-  parallel:
-    number_of_cores: 1         # The number of cores the search is parallelized over by default, using Python multiprocessing.
-  printing:
-    silence: false             # If True, the default print output of the non-linear search is silcened and not printed by the Python interpreter.
-  iterations_per_full_update: 500   # Non-linear search iterations between every full update, which outputs all visuals and result fits (e.g. model.result, search.summary), this exits the search and can be slow.
-  iterations_per_quick_update: 500  # Non-linear search iterations between every quick update, which just displays the maximum likelihood model fit.
-  remove_state_files_at_end: true   # Whether to remove the savestate of the seach (e.g. the Emcee hdf5 file) at the end to save hard-disk space (results are still stored as PyAutoFit pickles and loadable).
 BFGS:
   search:
     tol: null
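The deleted config's comments describe the `ball` initializer: starting points are unit-cube values drawn from a narrow uniform range (0.49 to 0.51 by default). A minimal sketch of that behaviour (illustrative names, not autofit's implementation):

```python
import random

def ball_initialize(n_walkers, n_params, lower=0.49, upper=0.51, seed=0):
    """Draw unit-cube starting points from a narrow uniform 'ball'.

    Sketch of the `method: ball` option described in the config comments;
    in a real search the unit values are then mapped through each
    parameter's prior to physical values.
    """
    rng = random.Random(seed)
    return [
        [rng.uniform(lower, upper) for _ in range(n_params)]
        for _ in range(n_walkers)
    ]

walkers = ball_initialize(n_walkers=50, n_params=3)
assert len(walkers) == 50
assert all(0.49 <= v <= 0.51 for w in walkers for v in w)
```

Starting all walkers in a tight ball near the centre of the unit cube gives the optimiser a compact, well-conditioned initial population, at the cost of assuming the centre is a reasonable place to begin.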
45 changes: 0 additions & 45 deletions autofit/config/non_linear/nest.yaml
@@ -2,7 +2,6 @@
 
 # - Dynesty: https://github.com/joshspeagle/dynesty / https://dynesty.readthedocs.io/en/latest/index.html
 # - Nautilus https://https://github.com/johannesulf/nautilus / https://nautilus-sampler.readthedocs.io/en/stable/index.html
-# - UltraNest: https://github.com/JohannesBuchner/UltraNest / https://johannesbuchner.github.io/UltraNest/readme.html
 
 # Settings in the [search] and [run] entries are specific to each nested algorithm and should be determined by
 # consulting that MCMC method's own readthedocs.
@@ -87,47 +86,3 @@ Nautilus:
     force_x1_cpu: false        # Force Dynesty to not use Python multiprocessing Pool, which can fix issues on certain operating systems.
   printing:
     silence: false             # If True, the default print output of the non-linear search is silenced and not printed by the Python interpreter.
-UltraNest:
-  search:
-    draw_multiple: true
-    ndraw_max: 65536
-    ndraw_min: 128
-    num_bootstraps: 30
-    num_test_samples: 2
-    resume: true
-    run_num: null
-    storage_backend: hdf5
-    vectorized: false
-    warmstart_max_tau: -1.0
-  run:
-    cluster_num_live_points: 40
-    dkl: 0.5
-    dlogz: 0.5
-    frac_remain: 0.01
-    insertion_test_window: 10
-    insertion_test_zscore_threshold: 2
-    lepsilon: 0.001
-    log_interval: null
-    max_iters: null
-    max_ncalls: null
-    max_num_improvement_loops: -1.0
-    min_ess: 400
-    min_num_live_points: 400
-    show_status: true
-    update_interval_ncall: null
-    update_interval_volume_fraction: 0.8
-    viz_callback: auto
-  stepsampler:
-    adaptive_nsteps: false
-    log: false
-    max_nsteps: 1000
-    nsteps: 25
-    region_filter: false
-    scale: 1.0
-    stepsampler_cls: null
-  initialize:                  # The method used to generate where walkers are initialized in parameter space {prior}.
-    method: prior              # priors: samples are initialized by randomly drawing from each parameter's prior.
-  parallel:
-    number_of_cores: 1         # The number of cores the search is parallelized over by default, using Python multiprocessing.
-  printing:
-    silence: false             # If True, the default print output of the non-linear search is silenced and not printed by the Python interpreter.
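The removed config's `method: prior` comment describes the other initialization scheme: each start point is drawn directly from the parameter's prior. Nested samplers typically do this via an inverse-CDF (unit-cube) mapping, sketched here for a uniform prior (illustrative names, not autofit's API):

```python
import random

class UniformPrior:
    """Toy uniform prior with the unit-cube mapping nested samplers use."""

    def __init__(self, lower, upper):
        self.lower = lower
        self.upper = upper

    def value_for(self, unit):
        # Inverse CDF: map a unit-cube value in [0, 1) to the prior's range.
        return self.lower + unit * (self.upper - self.lower)

def prior_initialize(priors, seed=0):
    """Draw one starting point by sampling every parameter's prior."""
    rng = random.Random(seed)
    return [p.value_for(rng.random()) for p in priors]

priors = [UniformPrior(0.0, 10.0), UniformPrior(-1.0, 1.0)]
point = prior_initialize(priors)
assert 0.0 <= point[0] <= 10.0
assert -1.0 <= point[1] <= 1.0
```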
2 changes: 1 addition & 1 deletion autofit/non_linear/samples/samples.py
@@ -39,7 +39,7 @@ def __init__(
         individual sample by the `NonLinearSearch` and return information on the likelihoods, errors, etc.
 
         This class stores samples of searches which provide maximum likelihood estimates of the model-fit (e.g.
-        PySwarms, LBFGS).
+        LBFGS).
 
         Parameters
         ----------
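The docstring above summarises what an MLE samples container does: hold (parameters, log_likelihood) pairs and expose the maximum-likelihood estimate. A hypothetical stand-in, not the real `Samples` class:

```python
class MLESamples:
    """Toy container for samples from an MLE search (e.g. LBFGS).

    Stand-in for the pattern the docstring describes; not autofit's API.
    """

    def __init__(self):
        self.parameter_lists = []
        self.log_likelihood_list = []

    def add(self, parameters, log_likelihood):
        self.parameter_lists.append(parameters)
        self.log_likelihood_list.append(log_likelihood)

    @property
    def max_log_likelihood_sample(self):
        # The MLE is simply the highest-likelihood sample seen so far.
        best = max(
            range(len(self.log_likelihood_list)),
            key=self.log_likelihood_list.__getitem__,
        )
        return self.parameter_lists[best]

samples = MLESamples()
samples.add([1.0, 2.0], -10.0)
samples.add([1.1, 2.1], -3.5)
samples.add([0.9, 1.9], -7.0)
assert samples.max_log_likelihood_sample == [1.1, 2.1]
```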