
Conversation

phoeenniixx
Member

@phoeenniixx phoeenniixx commented Sep 5, 2025

Fixes #1964
This PR standardizes the output format of the `forward` method of tslib models to:

  • a 3D tensor for single-target DLinear
  • a 3D tensor for single-target TimeXer
  • a list of tensors for multi-target
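To make the convention concrete, here is a minimal sketch of the intended shapes (the helper name `standardize_output` and the exact dimension names are illustrative assumptions, not code from this PR):

```python
import torch

def standardize_output(raw: torch.Tensor, n_targets: int):
    """Illustrative only: single-target models return one 3D tensor of
    shape (batch, prediction_length, n_outputs); multi-target models
    return a list with one such tensor per target variable."""
    if n_targets == 1:
        return raw  # already (batch, prediction_length, n_outputs)
    # split the last axis into one 3D tensor per target
    return [raw[..., i : i + 1] for i in range(n_targets)]

# single-target: a 3D tensor
single = standardize_output(torch.zeros(8, 24, 1), n_targets=1)
assert single.shape == (8, 24, 1)

# multi-target: a list of 3D tensors, one per target
multi = standardize_output(torch.zeros(8, 24, 3), n_targets=3)
assert isinstance(multi, list) and multi[0].shape == (8, 24, 1)
```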

@phoeenniixx
Member Author

If we add the fix for TimeXer here as well, TestAllEstimators will pass but the Test / Run notebook tutorials check will fail. The reason is that TestAllEstimators currently uses only point-prediction nn losses, since our pipeline does not yet support metrics. Somehow, though, QuantileLoss works for TimeXer in the tslib notebook (perhaps due to loss-specific changes in the model class?).
If we change this, QuantileLoss fails, which will then be fixed by #1960, when we actually introduce metrics support to the whole pipeline.

@phoeenniixx phoeenniixx changed the title [ENH] Standardize output format for tslib models [ENH] Standardize output format for tslib v2 models Sep 5, 2025

codecov bot commented Sep 5, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (main@e1cc1ce).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1965   +/-   ##
=======================================
  Coverage        ?   87.29%           
=======================================
  Files           ?      158           
  Lines           ?     9278           
  Branches        ?        0           
=======================================
  Hits            ?     8099           
  Misses          ?     1179           
  Partials        ?        0           
Flag Coverage Δ
cpu 87.29% <100.00%> (?)
pytest 87.29% <100.00%> (?)

Flags with carried forward coverage won't be shown.


@fkiraly
Collaborator

fkiraly commented Sep 6, 2025

should we merge #1960 first then?

You can also stack on #1960 to see if that indeed fixes the notebook failure.

@phoeenniixx
Member Author

should we merge #1960 first then?

Probably, but that is also a WiP. As you said, I will first stack this PR on #1960 to see whether it actually works.

@fkiraly
Collaborator

fkiraly commented Sep 7, 2025

Can you add to the PR description whether this now stacks on another PR? It does not seem to be stacked on anything else at the moment.

@phoeenniixx
Member Author

#1960 stacks on #1965; this branch is from main. I have updated the description of #1960 to mention this.

@fkiraly fkiraly added enhancement New feature or request module:models labels Sep 10, 2025
fkiraly
fkiraly previously approved these changes Sep 10, 2025
@fkiraly
Collaborator

fkiraly commented Sep 10, 2025

Quick question: can this be merged? Please confirm, given:

  • failing notebook test (explained above)
  • item "list of tensors multi-target" in PR description not checked

@phoeenniixx
Member Author

phoeenniixx commented Sep 10, 2025

can this be merged

Not right now; I need to add some tests for the changes :) to make sure there are no outliers. I am busy this week, but I'll try to add them this weekend.

  • failing notebook test (explained above)

Yes, these pass in #1960 after stacking.

  • item "list of tensors multi-target" in PR description not checked

I am a little confused about it. Right now the pipeline does not support multi-target, and the models neither support nor are tested for multi-target. So I was wondering whether it is better to leave it for the future. Or should I at least add a backbone for multi-target tests? Though I don't think I would have any way to test it.

@fkiraly
Collaborator

fkiraly commented Sep 11, 2025

I am a little confused about it

Let me explain; all I was asking was: "there is an unchecked box in the PR description which looks like you intended to work on it. Is this assumption correct, and is this still WiP? Or is this ready for merge, and the unchecked box in the PR description is not blocking for you?"

I was not asking about an additional feature to be added to the PR; just asking whether I should consider the unchecked box as an indicator that you intend to still work on this before merge.

@phoeenniixx
Member Author

Or is this ready for merge and the unchecked box in the PR description is not blocking for you?

No, it is not blocking me. What I was trying to say is that multi-target is hard to add. It is essentially a new feature, and the unchecked box mainly signifies that we still need to add this support and should not forget about it. I thought I might be able to do something about it in this PR, but I think it would require a new PR now :)
So the only thing blocking the merge of this PR is the new tests I need to add; once those are in, I think this should be ready to go.

@phoeenniixx phoeenniixx requested a review from fkiraly September 11, 2025 09:14
@fkiraly fkiraly moved this to PR in progress in May - Sep 2025 mentee projects Sep 15, 2025
@fkiraly fkiraly moved this from PR in progress to PR under review in May - Sep 2025 mentee projects Sep 15, 2025
@agobbifbk

Hello there.
The model dlinear_v2 seems strange to me: output_dim = self.prediction_length * self.n_quantiles

  1. I am not sure it supports multi-target in its current form; should it work with multiple targets?
  2. It will work only with the quantile loss (len(self.loss.quantiles)), but what about the other distribution losses?
  3. I remember that, for homogeneity with v1, we need to return a list of predictions, one per target variable. Is this a task for after this PR?

It is hard for me to review all the code, sorry, but if there are parts of it you are not sure about, I will check them; just tell me where!
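For readers following along, the projection under discussion can be sketched roughly like this (a hedged illustration; only the expression `output_dim = self.prediction_length * self.n_quantiles` is quoted from the model, the class and parameter names here are assumptions):

```python
import torch
import torch.nn as nn

class DLinearHeadSketch(nn.Module):
    """Sketch of the projection being questioned: the linear head emits
    prediction_length * n_quantiles values per series and reshapes them to
    (batch, prediction_length, n_quantiles). n_quantiles would come from
    len(self.loss.quantiles) for QuantileLoss, and would be 1 for point
    losses, which is why other distribution losses are an open question."""

    def __init__(self, input_dim: int, prediction_length: int, n_quantiles: int):
        super().__init__()
        self.prediction_length = prediction_length
        self.n_quantiles = n_quantiles
        output_dim = prediction_length * n_quantiles  # the line quoted above
        self.proj = nn.Linear(input_dim, output_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.proj(x)  # (batch, prediction_length * n_quantiles)
        return out.view(-1, self.prediction_length, self.n_quantiles)

head = DLinearHeadSketch(input_dim=16, prediction_length=24, n_quantiles=7)
y = head(torch.zeros(8, 16))
assert y.shape == (8, 24, 7)  # one 3D tensor, quantiles on the last axis
```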

@phoeenniixx
Member Author

phoeenniixx commented Sep 17, 2025

Thanks @agobbifbk for the review!
Some replies:

1. not sure if it supports multitarget in the actual form, should it work with multiple targets?

I don't think our pipeline can handle multi-target yet; see #1965 (comment)

2. It will work only with the quantile loss, (len(self.loss.quantiles)) but what about the other distribution losses?

DLinear only supports quantile loss and point-prediction losses, imo (@PranavBhatP please correct me if I am wrong).

3. I remember that, for homogeneity with v1, we need to return a list of prediction, one for target variable. Is this task something to do after this PR?

Yes! I think this would be one of the discussion points, after this PR and after the prediction pipeline EP is approved.
Just to be clear: for single-target the return is a tensor; the list is for multi-target, similar to v1.

I think we need to have some discussions on the way forward. I am busy with exams this month; maybe we can discuss it in the first week of Oct?

Collaborator

@fkiraly fkiraly left a comment


Thanks for clarifying my questions, all addressed now

Labels
enhancement New feature or request module:models
Projects
Status: PR under review
Development

Successfully merging this pull request may close these issues.

[ENH] standardize tslib v2 models to 3D output returns
3 participants