diff --git a/docs/source/algorithms.md b/docs/source/algorithms.md
index bd8837b9a..54c7e8ea6 100644
--- a/docs/source/algorithms.md
+++ b/docs/source/algorithms.md
@@ -3920,48 +3920,36 @@ addition to optimagic when using an NLOPT algorithm. To install nlopt run
 optimagic supports the [IMINUIT MIGRAD Optimizer](https://iminuit.readthedocs.io/).
 To use MIGRAD, you need to have
-[the iminuit package](https://github.com/scikit-hep/iminuit) installed (pip install
-iminuit).
+[the iminuit package](https://github.com/scikit-hep/iminuit) installed
+(`pip install iminuit`).

 ```{eval-rst}
 .. dropdown:: iminuit_migrad

-    .. code-block::
-
-        "iminuit_migrad"
-
-    `MIGRAD `_ is
-    the workhorse algorithm of the MINUIT optimization suite, which has been widely used in the
-    high-energy physics community since 1975. The IMINUIT package is a Python interface to the
-    Minuit2 C++ library developed by CERN.
+    **How to use this algorithm:**

-    Migrad uses a quasi-Newton method, updating the Hessian matrix iteratively
-    to guide the optimization. The algorithm adapts dynamically to challenging landscapes
-    using several key techniques:
+    .. code-block::

-    - **Quasi-Newton updates**: The Hessian is updated iteratively rather than recalculated at
-      each step, improving efficiency.
-    - **Steepest descent fallback**: When the Hessian update fails, Migrad falls back to steepest
-      descent with line search.
-    - **Box constraints handling**: Parameters with bounds are transformed internally to ensure
-      they remain within allowed limits.
-    - **Heuristics for numerical stability**: Special cases such as flat gradients or singular
-      Hessians are managed using pre-defined heuristics.
-    - **Stopping criteria based on Estimated Distance to Minimum (EDM)**: The optimization halts
-      when the predicted improvement becomes sufficiently small.
-
-    For details see :cite:`JAMES1975343`.
+        import optimagic as om
+        om.minimize(
+            ...,
+            algorithm=om.algos.iminuit_migrad(stopping_maxfun=10_000, ...)
+        )
+
+    or
+
+    .. code-block::

-    **Optimizer Parameters:**
+        om.minimize(
+            ...,
+            algorithm="iminuit_migrad",
+            algo_options={"stopping_maxfun": 10_000, ...}
+        )

-    - **stopping.maxfun** (int): Maximum number of function evaluations. If reached, the optimization stops
-      but this is not counted as successful convergence. Function evaluations used for numerical gradient
-      calculations do not count toward this limit. Default is 1,000,000.
+    **Description and available options:**

-    - **n_restarts** (int): Number of times to restart the optimizer if convergence is not reached.
+    .. autoclass:: optimagic.optimizers.iminuit_migrad.IminuitMigrad
-      - A value of 1 (the default) indicates that the optimizer will only run once, disabling the restart feature.
-      - Values greater than 1 specify the maximum number of restart attempts.
 ```

 ## Nevergrad Optimizers
diff --git a/src/optimagic/optimizers/iminuit_migrad.py b/src/optimagic/optimizers/iminuit_migrad.py
index 4c6490e74..27721131b 100644
--- a/src/optimagic/optimizers/iminuit_migrad.py
+++ b/src/optimagic/optimizers/iminuit_migrad.py
@@ -42,8 +42,44 @@
 )
 @dataclass(frozen=True)
 class IminuitMigrad(Algorithm):
+    """Minimize a scalar differentiable function using the MIGRAD algorithm from
+    iminuit.
+
+    This optimizer wraps the MIGRAD algorithm from the iminuit package, which provides a
+    Python interface to the Minuit2 C++ library developed and maintained by CERN.
+
+    MIGRAD is a local optimization method in the quasi-Newton family. It iteratively
+    builds an approximation of the inverse Hessian matrix using the DFP variable-metric
+    method to efficiently navigate optimization landscapes.
+
+    At each iteration, the algorithm attempts a Newton step, using gradient and Hessian
+    approximations to move toward the function's minimum. If this step fails to reduce
+    the objective function, MIGRAD conducts a line search along the gradient direction
+    to maintain progress. This continues until the convergence criteria, such as the
+    Estimated Distance to Minimum (EDM), fall below preset thresholds.
+
+    MIGRAD is designed for statistical optimization problems where accurate parameter
+    uncertainty estimates are essential. It excels at the maximum-likelihood and
+    least-squares fits common in scientific computing, and is best suited for smooth,
+    differentiable cost functions.
+
+    For best performance, supply analytical gradients. Convergence and the solution
+    depend on the starting values. Bound constraints (limits) are supported.
+
+    """
+    stopping_maxfun: int = STOPPING_MAXFUN
+    """Maximum number of function evaluations."""
+    n_restarts: int = N_RESTARTS
+    """Number of times to restart the optimizer if convergence is not reached.
+
+    A value of 1 (the default) indicates that the optimizer will only run once,
+    disabling the restart feature. Values greater than 1 specify the maximum number of
+    restart attempts.
+
+    """

     def _solve_internal_problem(
         self, problem: InternalOptimizationProblem, params: NDArray[np.float64]
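
The new docstring describes MIGRAD's core loop in prose: DFP variable-metric updates of an inverse-Hessian approximation, Newton-like steps, and a line search fallback when a step fails to reduce the objective. As a rough illustration of that idea only, the following self-contained NumPy sketch shows a generic DFP quasi-Newton loop; `dfp_minimize`, `fun`, and `grad` are made-up names for this example and are not part of optimagic or iminuit.

```python
# Illustrative only: a minimal DFP variable-metric loop on a smooth objective.
# This is not the iminuit/optimagic implementation of MIGRAD.
import numpy as np


def dfp_minimize(fun, grad, x0, max_iter=100, gtol=1e-8):
    x = np.asarray(x0, dtype=float)
    h = np.eye(x.size)  # current approximation of the inverse Hessian
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < gtol:
            break
        d = -h @ g  # quasi-Newton step direction
        # Fall back to shorter steps along d if the full step does not
        # reduce the objective (a crude stand-in for a real line search).
        alpha = 1.0
        while fun(x + alpha * d) > fun(x) and alpha > 1e-10:
            alpha /= 2
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:  # curvature condition keeps h positive definite
            hy = h @ y
            # DFP update of the inverse-Hessian approximation.
            h += np.outer(s, s) / sy - np.outer(hy, hy) / (y @ hy)
        x, g = x_new, g_new
    return x


# Smoke test on a convex quadratic f(x) = 0.5 * x.T @ a @ x - b @ x.
a = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x_hat = dfp_minimize(
    fun=lambda x: 0.5 * x @ a @ x - b @ x,
    grad=lambda x: a @ x - b,
    x0=np.zeros(2),
)
print(x_hat, np.linalg.solve(a, b))  # the two should roughly agree
```

MIGRAD layers substantially more machinery on top of this basic scheme, in particular the internal transformation of bounded parameters, heuristics for flat gradients and singular Hessians, and stopping based on the Estimated Distance to Minimum, so the sketch is only meant to make the docstring's terminology concrete.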