Detailed model training control and batch modelling #365
Draft
nikitakuklev wants to merge 29 commits into xopt-org:main from
Conversation
Force-pushed 4d8076a to 745516e
Collaborator: Looking good! LMK when this is ready for review and we can have a short discussion to go over it
Force-pushed 5f75338 to a76ae75
This PR introduces batch GP models and finer control over model training. The former can be useful for scalarized objectives, while the latter is necessary to speed up BO in operational contexts. As recently demonstrated at a NAPAC25 talk, one can significantly relax fitting tolerances to meet real-time requirements without impacting convergence, especially with scalarized objectives. There is also a physical motivation: we cannot set the physical devices precisely enough for exact fitting to matter.
Changes:
Caveats:
train_model=False.

Benchmarks for n_vars=12, n_obj=5, n_constr=2, n=500
CPU:
GPU (RTX 3070, H100 todo):
To reproduce:
```
python bench_runner.py bench_build_standard bench_build_batched bench_build_standard_adam bench_build_batched_adam bench_build_standard_gpytorch bench_build_batched_gpytorch -n 10 -device cpu
```