
[ENH] Refactor range-based metrics to restore original behavior #2781


Merged
merged 7 commits into main from refactor/tsad-range-metrics
May 16, 2025

Conversation


@SebastianSchmidl SebastianSchmidl commented May 6, 2025

Reference Issues/PRs

Fixes #2780

This deprecates / closes #2767

What does this implement/fix? Explain your changes.

  • Restores the original behavior of range_f_score, range_precision, and range_recall and removes their deprecation (reusing the implementations of ts_precision, ts_recall, and ts_fscore); see the usage sketch after this list
  • Deprecates ts_precision, ts_recall, and ts_fscore, because:
    • their API is inconsistent with the original behavior and with the implementation in TimeEval
    • their arguments were inconsistent with the other AD metrics, especially the order of y_true and y_pred, which was confusing
    • support for anomaly ranges as input is unnecessary, because no other code in aeon uses it
  • Refactors sub-module names, e.g., makes the old range_metrics.py module private (like all other modules in this folder) by renaming it to _range_ts_metrics.py. The functions are still accessible under their original names from the parent module (__init__.py)
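
As a usage sketch (not part of the diff), the restored metrics should be callable like the other AD metrics, with y_true before y_pred. This assumes the parent module is aeon.benchmarking.metrics.anomaly_detection and that the optional range-metric parameters (e.g., alpha, cardinality, bias from the Tatbul et al. formulation) keep their defaults; check the aeon API docs for the exact signatures.

```python
import numpy as np

# Assumed import path: the parent module's __init__.py re-exports the
# metrics even after the sub-module was renamed to _range_ts_metrics.py.
from aeon.benchmarking.metrics.anomaly_detection import (
    range_f_score,
    range_precision,
    range_recall,
)

# Binary point-wise labels: ground truth first, prediction second,
# matching the (y_true, y_pred) order of the other AD metrics.
y_true = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
y_pred = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 0])

# Optional parameters (alpha, cardinality, bias, ...) are left at their
# defaults here; in the range-based definitions they control the weight
# of detecting an anomaly at all vs. how much of it is covered, and any
# positional bias within a range.
print(range_precision(y_true, y_pred))
print(range_recall(y_true, y_pred))
print(range_f_score(y_true, y_pred))
```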

Does your contribution introduce a new dependency? If yes, which one?

No.

… metrics and integrate them into the AD test harness
@SebastianSchmidl SebastianSchmidl self-assigned this May 6, 2025
@SebastianSchmidl SebastianSchmidl added the enhancement, API design, benchmarking, and anomaly detection labels May 6, 2025
@aeon-actions-bot

Thank you for contributing to aeon

I would have added the following labels to this PR based on the changes made: [ benchmarking ]; however, some package labels are already present.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

Don't hesitate to ask questions on the aeon Slack channel if you have any.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

  • Run pre-commit checks for all files
  • Run mypy typecheck tests
  • Run all pytest tests and configurations
  • Run all notebook example tests
  • Run numba-disabled codecov tests
  • Stop automatic pre-commit fixes (always disabled for drafts)
  • Disable numba cache loading
  • Push an empty commit to re-run CI checks


@MatthewMiddlehurst MatthewMiddlehurst left a comment

LGTM


@TonyBagnall TonyBagnall left a comment


lgtm

@TonyBagnall TonyBagnall merged commit 4fab159 into main May 16, 2025
16 checks passed
@TonyBagnall TonyBagnall deleted the refactor/tsad-range-metrics branch May 16, 2025 17:53
Labels
  • anomaly detection: Anomaly detection package
  • API design: API design & software architecture
  • benchmarking: Benchmarking package
  • enhancement: New feature, improvement request or other non-bug code enhancement
3 participants