Wrap gradient_free_optimizers (local) #624
base: main
Conversation
fix doc fixes add tests
Hi @janosg, changes:
- Add a new example problem with a converter in `internal_optimization_problem`.
- Add functions and a converter for dealing with dict input.
- Refactor `test_many_algorithms` (this is minimal and is just for passing tests on this one).
gfo
stopping_funval: float | None = None
"""Stop the optimization if the objective function is less than this value."""

convergence_iter_noimprove: PositiveInt = 1000  # need to set high
Should we set this to None or a really high value instead? Is there another convergence criterion we could set to a non-None value instead? We don't want all optimizers to just run until max_iter, but we also don't want premature stopping, of course.
I am a bit confused about this. Most of the time, convergence_iter_noimprove behaves like stopping_maxiter. The other convergence criteria don't seem to be respected. Even after more than 100000 iterations the algorithm does not converge to a good solution. I might be missing something, but I hope to get a solution here.
Thanks for creating the issue over there.
Since convergence_iter_noimprove is set to a large value, it will look for a change over a large number of iterations, so convergence_ftol_abs and convergence_ftol_rel will have no effect.
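For illustration, here is a minimal sketch of how the options discussed above might be passed through optimagic's `algo_options`. Only `stopping_funval` and `convergence_iter_noimprove` are taken from the diff; the algorithm name `gfo_hill_climbing` and the exact behavior are assumptions for the example, not confirmed by this PR.

```python
import numpy as np
import optimagic as om


def sphere(x):
    return np.sum(x**2)


res = om.minimize(
    fun=sphere,
    params=np.full(3, 2.0),
    # GFO algorithms search over a bounded space, so finite bounds are needed.
    bounds=om.Bounds(lower=np.full(3, -5.0), upper=np.full(3, 5.0)),
    # Assumed name for the example; check the final docs for the exact spelling.
    algorithm="gfo_hill_climbing",
    algo_options={
        # From the diff above: stop once the objective falls below this value.
        "stopping_funval": 1e-6,
        # From the diff above: stop after this many iterations without
        # improvement. A large value avoids premature stopping, but then the
        # run is effectively limited by stopping_maxiter.
        "convergence_iter_noimprove": 1000,
        "stopping_maxiter": 10_000,
    },
)
print(res.params)
```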
…st_gfo_optimizers
…okup dict for failing tests, move nag_dfols test to nag_optimizers, move test_pygmo_optimizers
Hi @janosg, the tests are passing, could you review this as well?
@gauravmanmode can you fix the merge conflicts?
@janosg, I've fixed the merge conflicts.
Very nice. Just minor comments!
docs/source/algorithms.md
Outdated
```{eval-rst}
.. dropdown:: Common options across all optimizers

    .. autoclass:: optimagic.optimizers.gfo_optimizers.GFOCommonOptions
I probably missed this last time: If possible I would prefer not to expose the common options in the documentation and instead document the inherited attributes for each optimizer. I think it is possible with something like this:
.. autoclass:: SomeGFOOptimizer
   :members:
   :inherited-members: GFOCommonOptions
Will work on this.
Thanks for the code snippet. I achieved something similar with

.. autoclass:: SomeGFOOptimizer
   :members:
   :inherited-members: Algorithm, object
   :member-order: bysource

The members of classes listed in the argument to :inherited-members: are excluded from the automatic documentation. :member-order: bysource will list base class members first.
Looks good to me. You can decide on the implementation. The goal is that the final documentation of each algorithm looks like it would look if we did not use inheritance from common options.
@@ -108,6 +109,482 @@ class GFOCommonOptions:
seed: int | None = None
"""Random seed for reproducibility."""

rand_rest_p: NonNegativeFloat = 0
Should this be a ProbabilityFloat? Similar question for other options below.
Yes, I missed that. p_accept is also a ProbabilityFloat.
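A minimal sketch of the suggested typing change, assuming a `ProbabilityFloat` annotation is available under `optimagic.typing` (the import path, the class name `GFOCommonOptionsSketch`, and the default values are illustrative, not taken from the PR):

```python
from dataclasses import dataclass

# Assumed import path; the discussion only mentions the ProbabilityFloat name.
from optimagic.typing import ProbabilityFloat


@dataclass(frozen=True)
class GFOCommonOptionsSketch:
    """Illustrative subset of the common options with probability-typed fields."""

    # Probability of taking a random step instead of the regular move.
    rand_rest_p: ProbabilityFloat = 0.0
    # Probability of accepting a worse candidate (relevant e.g. for
    # annealing-style algorithms); the default here is made up for the example.
    p_accept: ProbabilityFloat = 0.1
```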
Hill climbing is a local search algorithm suited for exploring combinatorial search
spaces.

It starts at an initial point, which is often chosen randomly and continues to move
I would remove "which is often chosen randomly". There are different ways to get start parameters and many include domain knowledge.
The implementation in GFO is that it runs an initialization step where it queries n_init points (chosen randomly from the grid or from the vertices of the grid) and starts from the best one. Should I mention this?
Yes, that would be better. Currently it sounds like one random init point.
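To make the described initialization step concrete, here is a simplified, self-contained sketch (not GFO's actual implementation): it samples `n_init` random points, starts from the best one, and then performs plain hill climbing, using a continuous box instead of GFO's discrete grid.

```python
import numpy as np

rng = np.random.default_rng(0)


def hill_climbing(fun, lower, upper, n_init=10, n_iter=1_000, step=0.1):
    """Minimize fun over the box [lower, upper] with a simple hill climber."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)

    # Initialization step: query n_init random points and start from the best one.
    inits = rng.uniform(lower, upper, size=(n_init, lower.size))
    x = min(inits, key=fun)

    for _ in range(n_iter):
        # Propose a random neighbor and move only if it improves the objective.
        candidate = np.clip(x + rng.normal(scale=step, size=x.size), lower, upper)
        if fun(candidate) < fun(x):
            x = candidate
    return x


# Example: minimize the sphere function on [-5, 5] x [-5, 5].
best = hill_climbing(lambda x: np.sum(x**2), lower=[-5, -5], upper=[5, 5])
```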
PR Description
This PR adds optimizers from gradient_free_optimizers.
The following optimizers are now available:
Other Changes
- Add `experimental` to `AlgoInfo` for optimizers that need to skip tests.
- Add `SphereExampleInternalOptimizationProblemWithConverter` with converter in `internal_optimization_problem.py`.
- Refactor `test_many_algorithms.py`, add a `PRECISION_LOOKUP` dict for algorithm-specific precision.
Helper Functions