
Conversation

@mcgibbon (Contributor) commented Feb 10, 2026:

This PR adds a system for registering benchmarks to be run in a new fme.core.benchmark.run entrypoint.

All benchmarks are regression tested for backwards compatibility.

Changes:

  • Added BenchmarkABC and register_benchmark for recording new benchmarks (see the sketch after this list)

  • Added the fme.core.benchmark.run entrypoint

  • Updated conftest.py to use deterministic algorithms during tests

  • Added an fme.core.device.force_cpu context to force CPU usage during regression tests

  • Added tests
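
A minimal sketch of how registration might look, assuming BenchmarkABC and register_benchmark live in fme.core.benchmark.benchmark alongside get_benchmarks (the only import confirmed in the review diff below); the decorator signature and run hook are illustrative, not the actual API:

```python
# Sketch only: the registry key, decorator signature, and run() hook are
# assumptions about the API, not taken from the diff.
import torch

from fme.core.benchmark.benchmark import BenchmarkABC, register_benchmark


@register_benchmark("matmul_example")  # hypothetical registry key
class MatmulBenchmark(BenchmarkABC):
    def run(self) -> None:
        # A small workload for the fme.core.benchmark.run entrypoint to
        # time and plot.
        a = torch.randn(512, 512)
        b = torch.randn(512, 512)
        _ = a @ b
```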

Base automatically changed from feature/sfno_timers to main February 10, 2026 22:40
@mcgibbon (Contributor, Author) commented:

An example plot from fme.core.benchmark.run:

[Example plot image: csfno_block_tesla_t4_1c756074]

@mcgibbon marked this pull request as ready for review February 11, 2026 15:16
@mcgibbon changed the title from "Add benchmarks with gpu regression testing" to "Add benchmarks with regression testing" Feb 11, 2026
@AnnaKwa (Contributor) left a comment:


Cool feature! I (ok, mostly Cursor) was able to use this to generate a benchmark for the diffusion UNet in an experiment branch. I had a few minor comments.

)

@classmethod
def _new_with_params(
@AnnaKwa (Contributor) commented:

What's the purpose of having this separate method rather than just having this in new?

@mcgibbon (Contributor, Author) replied:

So that it can be reused by the two different new implementations.
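
For readers following the thread: the pattern in question (two alternate constructors sharing one private helper) looks roughly like the sketch below; the class and parameters are invented for illustration and are not the actual fme code.

```python
import dataclasses


@dataclasses.dataclass
class BenchmarkTimer:
    label: str
    warmup_steps: int

    @classmethod
    def new_cpu(cls, label: str) -> "BenchmarkTimer":
        return cls._new_with_params(label, warmup_steps=0)

    @classmethod
    def new_cuda(cls, label: str) -> "BenchmarkTimer":
        # CUDA timing typically needs warmup iterations before measuring.
        return cls._new_with_params(label, warmup_steps=3)

    @classmethod
    def _new_with_params(cls, label: str, warmup_steps: int) -> "BenchmarkTimer":
        # Construction logic shared by both new_* variants lives here, so
        # neither public constructor duplicates it.
        return cls(label=label, warmup_steps=warmup_steps)
```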


from fme.core.benchmark.benchmark import get_benchmarks

RESULTS_PATH = pathlib.Path(os.path.abspath(os.path.dirname(__file__))) / "results"
@AnnaKwa (Contributor) commented:

It'd be nice to print at the end of main where your png is saved.
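
Something like the following at the end of main would do it; everything here except RESULTS_PATH is hypothetical stand-in code, not the actual script:

```python
import pathlib

import matplotlib.pyplot as plt

# Stand-in for the module-level constant shown in the snippet above.
RESULTS_PATH = pathlib.Path(__file__).parent / "results"
RESULTS_PATH.mkdir(exist_ok=True)

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [1.2, 1.1, 1.15])  # placeholder data
plot_path = RESULTS_PATH / "example_benchmark.png"
fig.savefig(plot_path)
print(f"Saved benchmark plot to {plot_path}")
```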


@AnnaKwa (Contributor) commented:

Could this be an optional CLI arg? The use case I'm thinking of is if I wanted to run and save the results to /results before a job starts on a cluster with different hardware, since it's otherwise a pain to start an interactive job just to get that information.

@mcgibbon (Contributor, Author) replied:

Added as a CLI arg.
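
A sketch of what the optional argument might look like; the --output-dir flag name and parser layout are assumptions, since the merged CLI isn't shown in this thread:

```python
import argparse
import pathlib

# Default mirrors the module-level RESULTS_PATH; the flag name is an
# assumption about the final CLI, not taken from the diff.
DEFAULT_RESULTS_PATH = pathlib.Path(__file__).parent / "results"


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run registered benchmarks.")
    parser.add_argument(
        "--output-dir",
        type=pathlib.Path,
        default=DEFAULT_RESULTS_PATH,
        help="Directory where benchmark plots and results are written.",
    )
    return parser.parse_args()
```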

@mcgibbon (Contributor, Author) commented:

Added a basic regression test just to make sure it still runs.

@mcgibbon enabled auto-merge (squash) February 12, 2026 20:12
@mcgibbon merged commit 8d68297 into main Feb 12, 2026
7 checks passed
@mcgibbon deleted the feature/benchmarking branch February 12, 2026 21:26