Skip to content

Conversation

gauravmanmode
Copy link
Collaborator

@gauravmanmode gauravmanmode commented Jul 27, 2025

PR Description

This PR adds optimizers from gradient_free_optimizers

The following optimizers are now available

Other Changes

  • Add a new field experimental to AlgoInfo for optimizers that need to skip tests.
  • Add a new example problem SphereExampleInternalOptimizationProblemWithConverter with converter in internal_optimization_problem.py.
  • Refactor test_many_algorithms.py, add a PRECISION_LOOKUP dict for algorithm specific precision.

Helper Functions

Copy link

codecov bot commented Aug 2, 2025

Codecov Report

❌ Patch coverage is 99.49239% with 1 line in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
...agic/optimization/internal_optimization_problem.py 50.00% 1 Missing ⚠️
Files with missing lines Coverage Δ
src/optimagic/algorithms.py 88.51% <100.00%> (+0.33%) ⬆️
src/optimagic/mark.py 90.00% <ø> (ø)
src/optimagic/optimization/algorithm.py 94.97% <100.00%> (+0.02%) ⬆️
src/optimagic/optimizers/gfo_optimizers.py 100.00% <100.00%> (ø)
...agic/optimization/internal_optimization_problem.py 92.85% <50.00%> (ø)
🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

@gauravmanmode gauravmanmode marked this pull request as ready for review August 4, 2025 14:21
@gauravmanmode
Copy link
Collaborator Author

Hi @janosg
There are a few points

  1. I am failing tests in test_many_algorithms for stochastic algorithms as in the case of all gfo optimizers needs_bounds = True but the tests do not have them.
  2. Writing test for helper functions is a bit difficult as I have exposed the converter. Should I pass arrays only after converting?

@janosg
Copy link
Member

janosg commented Aug 5, 2025

Hi @gauravmanmode

  1. Is this because we currently use the is_global flag to decide which tests get called wtih bounds? If so, we can simply use the new flags in test_many_algorithms (either adding a new test case for non global optimizers that require bounds or restructuring what weh have).
  2. I don't understand what the problem is and what you mean by "Should I pass arrays only after converting?". Is the problem that SphereExampleInternalOptimizationProblem does not have a converter?

@gauravmanmode
Copy link
Collaborator Author

Hi @janosg,
Could you please review the changes?

Changes

add a new example problem with converter in internal_optimization_problem

Added functions and converter dealing with dict input.

refactor test_many_algorithms.

( this is minimal and is just for passing tests on this one)

  1. For now, the new tests in test_many_algorithms_new cover all of the tests existing in the original test_many_algorithms_new .
  2. The only failing test is scipy_trust_constr with binding bounds which fails barely . For the purpose of passing this one test, I have set the accuracy of local algorithms to 3 for now.
  3. Why test_with _binding_bounds test does not test with finite bounds ?
    Should this be in a separate PR?

gfo

  1. Many algos in GFO do not pass tests with their default values. How should I go about that?
  2. Some algorithms in GFO particularly population based ones do not advance ( n_iterations_performed is low ) if convergence_iter_noimprove is set to something low like < 100. Not sure but some optimizers jump randomly to new positions to explore the search space might be the reason.

stopping_funval: float | None = None
""""Stop the optimization if the objective function is less than this value."""

convergence_iter_noimprove: PositiveInt = 1000 # need to set high
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Should we set this to None or a really high value instead? Is there another convercence criterion we could set to a non-None value instead? We don't want all optimizers just to run until max_iter but we also don't want pre-mature stopping of course.

Copy link
Collaborator Author

@gauravmanmode gauravmanmode Aug 12, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I am a bit confused about this.
Most of the time convergence_iter_noimprove behaves as stopping_maxiter. The other convergence criteria dont seem to be respected. Even after a > 100000 iterations the algorithm does not converge to a good solution. I might be missing something but hope to get a solution here.

Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Thanks for creating the issue over there.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Since convergence_iter_noimprove is set to a large value, it will look for a change in a large no. of iterartions and convergence_ftol_abs and convergence_ftol_rel will not have effect.

@gauravmanmode
Copy link
Collaborator Author

Hi @janosg, The tests are passing, could you review this as well?

@janosg
Copy link
Member

janosg commented Aug 28, 2025

@gauravmanmode can you fix the merge conflicts?

@gauravmanmode
Copy link
Collaborator Author

@janosg , I've fixed the merge conflicts.

Copy link
Member

@janosg janosg left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Very nice. Just minor comments!

```{eval-rst}
.. dropdown:: Common options across all optimizers

.. autoclass:: optimagic.optimizers.gfo_optimizers.GFOCommonOptions
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I probably missed this last time: If possible I would prefer not to expose the common options in the documentation and instead document the inherited attributes for each optimizer. I think it is possible with something like this:

.. autoclass:: SomeGFOOptimizer
   :members:
   :inherited-members: GFOCommonOptions

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Will work on this.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Thanks for the code snippet. I achieved similar with

  .. autoclass:: SomeGFOOptimizer
    :members:
    :inherited-members: Algorithm, object  
    :member-order: bysource

The members of classes listed in the argument to :inherited-members: are excluded from the automatic documentation.
:member-order: bysource will list base class members first.

Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Looks good to me. You can decide on the implementation. The goal is that the final documentation of each algorithm looks like it would look if we did not use inheritance from common options.

@@ -108,6 +109,482 @@ class GFOCommonOptions:
seed: int | None = None
"""Random seed for reproducibility."""

rand_rest_p: NonNegativeFloat = 0
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Should this be a ProbabilityFloat? Similar question for other options below.

Copy link
Collaborator Author

@gauravmanmode gauravmanmode Aug 30, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yes, I missed that, p_accept is also ProbabilityFloat.

Hill climbing is a local search algorithm suited for exploring combinatorial search
spaces.

It starts at an initial point, which is often chosen randomly and continues to move
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I would remove "which is often chosen randomly". There are different ways to get start parameters and many include domain knowledge.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

the implemenation in GFO is that it runs a initialization step where it queries n_init points (chosen from grid randomly or vertices of the grid) and starts from the best one. should i mention this.

Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yes, that would be better. Currently it sounds like one random init point.

Copy link

Check out this pull request on  ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants