Add benchmarks with regression testing #815
Conversation
AnnaKwa
left a comment
Cool feature! I (ok, mostly cursor) was able to use this to generate a benchmark for the diffusion UNet in an experiment branch. I had a few minor comments.
```python
)

@classmethod
def _new_with_params(
```
What's the purpose of having this separate method rather than just having this in `new`?
So that it can be re-used by the two different `new` implementations.
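A minimal sketch of the pattern being discussed: two public `new`-style constructors sharing one private parameter-handling helper. All names here are illustrative, not the actual fme implementation.

```python
# Hypothetical sketch: two alternate constructors re-use the same
# `_new_with_params` helper so the shared construction logic lives
# in one place. Names and fields are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Benchmark:
    name: str
    n_iters: int

    @classmethod
    def _new_with_params(cls, name: str, n_iters: int) -> "Benchmark":
        # Shared construction logic used by both public constructors.
        return cls(name=name, n_iters=n_iters)

    @classmethod
    def new_quick(cls, name: str) -> "Benchmark":
        # A fast variant with few iterations.
        return cls._new_with_params(name, n_iters=10)

    @classmethod
    def new_full(cls, name: str) -> "Benchmark":
        # A thorough variant with many iterations.
        return cls._new_with_params(name, n_iters=1000)
```

Factoring the shared logic into a helper keeps the two constructors from drifting apart as parameters are added.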
```python
from fme.core.benchmark.benchmark import get_benchmarks

RESULTS_PATH = pathlib.Path(os.path.abspath(os.path.dirname(__file__))) / "results"
```
It'd be nice to print at the end of `main` where your PNG is saved.
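A minimal sketch of this suggestion, assuming a `main` that writes a figure into `RESULTS_PATH`; the `RESULTS_PATH` definition mirrors the quoted snippet, while the output filename is a hypothetical placeholder.

```python
# Sketch: report the saved figure's location at the end of main.
# The filename "benchmark.png" is an assumption for illustration.
import os
import pathlib

RESULTS_PATH = pathlib.Path(os.path.abspath(os.path.dirname(__file__))) / "results"


def main() -> pathlib.Path:
    # ... run benchmarks and write the figure here ...
    png_path = RESULTS_PATH / "benchmark.png"  # hypothetical output name
    print(f"Saved benchmark figure to: {png_path}")
    return png_path
```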
```python
from fme.core.benchmark.benchmark import get_benchmarks

RESULTS_PATH = pathlib.Path(os.path.abspath(os.path.dirname(__file__))) / "results"
```
Could this be an optional CLI arg? The use case I'm thinking of is running the benchmarks and saving the results to /results before a job starts on a cluster with different hardware, since it's otherwise a pain to start an interactive job just to get that information.
Added as a CLI arg.
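A sketch of what the optional CLI argument might look like with `argparse`; the default mirrors the `RESULTS_PATH` in the quoted snippet, and the flag name is an assumption, not necessarily what the PR uses.

```python
# Sketch of an optional --results-path CLI argument that falls back
# to the module-relative "results" directory. Flag name is illustrative.
import argparse
import os
import pathlib

DEFAULT_RESULTS_PATH = (
    pathlib.Path(os.path.abspath(os.path.dirname(__file__))) / "results"
)


def parse_args(argv=None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run registered benchmarks.")
    parser.add_argument(
        "--results-path",
        type=pathlib.Path,
        default=DEFAULT_RESULTS_PATH,
        help="Directory to write benchmark results (default: ./results)",
    )
    return parser.parse_args(argv)
```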
Added a basic regression test just to make sure it still runs.
This PR adds a system for registering benchmarks to be run in a new `fme.core.benchmark.run` entrypoint. All benchmarks are regression tested for backwards compatibility.

Changes:
- Added `BenchmarkABC` and `register_benchmark` to record new benchmarks
- Added `fme.core.benchmark.run` entrypoint
- Updated `conftest.py` to use deterministic algorithms during tests
- Added `fme.core.device.force_cpu` context to force CPU usage during regression tests
- Added tests