[ENH] Standardize testing of estimator outputs and skip tests for non-conformant estimators #1971
base: main
Conversation
this commit ensures only a single branch of testing on the output format [batch_size, prediction_length, loss_dim] and skips the test for non-conformant estimators
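The single-branch check described in the commit can be sketched as a small shape-validation helper. This is a hedged illustration only; the function name `check_output_shape` and its signature are hypothetical and not the PR's actual test code.

```python
# Illustrative sketch of enforcing one output format,
# [batch_size, prediction_length, loss_dim]; names are hypothetical.
import numpy as np


def check_output_shape(output, batch_size, prediction_length, loss_dim):
    """Assert the prediction array is 3D with the expected shape."""
    if output.ndim != 3:
        # e.g. a 2D output for the single-target case is non-conformant
        raise AssertionError(
            f"expected 3D output [batch, pred_len, loss_dim], "
            f"got {output.ndim}D shape {output.shape}"
        )
    expected = (batch_size, prediction_length, loss_dim)
    if output.shape != expected:
        raise AssertionError(f"expected shape {expected}, got {output.shape}")


# conformant: single target still carries an explicit loss_dim axis of size 1
check_output_shape(np.zeros((8, 24, 1)), 8, 24, 1)

# non-conformant: 2D output for the single-target case
try:
    check_output_shape(np.zeros((8, 24)), 8, 24, 1)
except AssertionError as err:
    print("caught:", err)
```

Estimators that cannot produce this shape are skipped rather than special-cased with a second testing branch.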
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main   #1971   +/-  ##
======================================
  Coverage        ?   85.32%
======================================
  Files           ?      158
  Lines           ?     9296
  Branches        ?        0
======================================
  Hits            ?     7932
  Misses          ?     1364
  Partials        ?        0
```
```diff
@@ -5,6 +5,10 @@
 EXCLUDE_ESTIMATORS = [
     "DummySkipped",
     "ClassName",  # exclude classes from extension templates
+    "NBeatsKAN_pkg",
```
you could also add a `"tests:skip_by_name"` tag in the corresponding `_pkg` classes. There, if you add `test_integration`, it should skip the integration tests for that model class. This would not skip other tests for that model, as happens now, where the framework is skipping the whole `_pkg` class.
yes, although afaik we would have to add the machinery for that tag in pytorch-forecasting first. I do not think it works out of the box right now, since I have not added that feature to scikit-base yet. There is code in sktime that could be copied for this.
Reference Issues/PRs
#1966
What does this implement/fix? Explain your changes.
Fix #1966.
Uncovered non-conformances:
This PR enforces testing of the estimators' output format against a single case: `[batch_size, prediction_length, loss_dim]`. The non-conformance in the above-mentioned models was the presence of 2D output in the single-target case.
What should a reviewer concentrate their feedback on?
Did you add any tests for the change?
Any other comments?
PR checklist
Install hooks with `pre-commit install`. To run hooks independent of a commit, execute `pre-commit run --all-files`.