Add macro-averaged Mean Squared Error metric with tests #1149
This PR introduces a new regression metric: macro-averaged Mean Squared Error (Macro-MSE).
🔹 What it does
Adds `macro_mean_squared_error` to `imblearn/metrics/_regression.py`.
Computes the MSE per class and returns the unweighted (macro) average across classes (see the sketch after this list).
Includes unit tests in `imblearn/tests/test_regression.py` to verify correctness.
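For reviewers who want the gist without opening the diff, here is a minimal sketch of the intended computation. The signature is illustrative, in particular `labels` as a per-sample array of class assignments; the actual implementation in the PR may differ:

```python
import numpy as np
from sklearn.metrics import mean_squared_error


def macro_mean_squared_error(y_true, y_pred, labels):
    """Per-class MSE averaged with equal weight for each class.

    `labels` is assumed here to be an array of class assignments,
    one per sample; the merged signature may differ.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.asarray(labels)

    # Compute MSE separately within each class.
    per_class_mse = [
        mean_squared_error(y_true[labels == c], y_pred[labels == c])
        for c in np.unique(labels)
    ]
    # Unweighted (macro) average: every class counts equally,
    # regardless of how many samples it contains.
    return float(np.mean(per_class_mse))
```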
🔹 Motivation
While `sklearn.metrics.mean_squared_error` averages squared errors over all samples, there is no built-in option for macro averaging across classes. This matters in imbalanced regression scenarios: when class distributions are skewed, majority classes dominate a sample-wise average, whereas macro averaging ensures that minority classes contribute equally to the error metric. The toy example below makes the difference concrete.
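A toy comparison (values are illustrative): with nine majority samples predicted perfectly and one minority sample with squared error 4, the sample-wise MSE is diluted to 0.4, while the macro average reports 2.0.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([0.0] * 9 + [2.0])   # 9 majority samples, 1 minority sample
y_pred = np.array([0.0] * 10)          # minority sample has squared error 4
labels = np.array(["maj"] * 9 + ["min"])

mean_squared_error(y_true, y_pred)     # 0.4 -- dominated by the majority class
# macro average: (0.0 + 4.0) / 2 = 2.0 -- the minority error is not diluted
```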
🔹 Changes made
Implemented the `macro_mean_squared_error` function.
Added test cases covering multiple class distributions (a representative sketch follows this list).
Ensured compatibility with scikit-learn’s API style.
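As a rough illustration of the kind of test cases added (the actual fixtures in the PR differ), pytest-style checks that the macro average matches a hand-computed value and reduces to the plain MSE for a single class. The import path is assumed; the implementation lives in `imblearn/metrics/_regression.py` and may not be publicly exported:

```python
import numpy as np
import pytest
from sklearn.metrics import mean_squared_error

# Import path assumed for illustration; see imblearn/metrics/_regression.py.
from imblearn.metrics import macro_mean_squared_error


def test_macro_mse_matches_hand_computed_value():
    y_true = np.array([1.0, 2.0, 10.0, 12.0])
    y_pred = np.array([1.0, 3.0, 10.0, 10.0])
    labels = np.array([0, 0, 1, 1])
    # class 0 MSE = (0 + 1) / 2 = 0.5; class 1 MSE = (0 + 4) / 2 = 2.0
    assert macro_mean_squared_error(y_true, y_pred, labels) == pytest.approx(1.25)


def test_macro_mse_equals_plain_mse_for_single_class():
    y_true = np.array([1.0, 2.0, 3.0])
    y_pred = np.array([1.5, 2.0, 2.5])
    labels = np.zeros(3, dtype=int)
    expected = mean_squared_error(y_true, y_pred)
    assert macro_mean_squared_error(y_true, y_pred, labels) == pytest.approx(expected)
```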
🔹 Next steps
Update the documentation if maintainers prefer exposing this in the public API.
Feedback welcome on naming, placement, and test coverage.