Conversation
Ok, ready @Luthaf
Luthaf left a comment
The changes look good to me; I'd like some input from one of the LLPR maintainers, though. @frostedoyster @SanggyuChong
cscs-ci run

cscs-ci run
The same error has happened twice in CSCS-CI (at different times). I think it has nothing to do with my PR 😅
cscs-ci run
It is apparently an internal network error; let's try again today!
Actually, in that PR Paolo fixed a spot where the code assumed there were components (and made it work when there are none). The strange thing is that all the code before that assumes there are no components 😅
cscs-ci run |
SanggyuChong left a comment
First of all, many thanks @pfebrer for going ahead and figuring out these bugs. You've also done a wonderful job handling the extraction of ll feats for different scenarios.
With regard to the components, instead of erroring out we should reshape to keep the `ll_feat_size` and flatten everything else. My vision here is that, ultimately, performing LLPR-based UQ should closely follow how the loss function is formulated. Computing the covariance matrix should then simply add up all the loss terms originating from the different components, as is done in most training scenarios. The UQ metrics will then be returned for each component.
If there is ever a scenario where different weights are applied to different components, we can allow the covariance computation routine to account for those weights, but that can wait until later.
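A minimal sketch of that reshaping idea, assuming the last-layer features arrive as a plain tensor whose trailing dimension is `ll_feat_size`; the function name and shapes are illustrative, not the actual metatrain API:

```python
import torch

def update_covariance(covariance: torch.Tensor, ll_feats: torch.Tensor) -> torch.Tensor:
    """Accumulate the LLPR covariance from one batch of last-layer features.

    ``ll_feats`` may carry extra component dimensions, e.g. a shape of
    (n_samples, n_components, ll_feat_size). Rather than erroring out,
    flatten all leading dimensions so each component contributes its own
    row, matching a loss that simply sums over components.
    """
    ll_feat_size = ll_feats.shape[-1]
    flat = ll_feats.reshape(-1, ll_feat_size)  # (n_samples * n_components, ll_feat_size)
    return covariance + flat.T @ flat
```

Per-component weights could later be folded in by scaling the rows of `flat` before taking the outer product.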
It would also be best to rename the PR title, given that the LLPR model is also getting modified.
self.layouts[target_name] = target_info.layout
self.last_layer_feature_size = 128
I think this doesn't work with:
- The internal MACE head.
- Heads that output spherical harmonics with more than one irrep.
I think Sanggyu is working on a branch to support multihead models.
In this branch I just made the internal MACE head work, because that is what LLPR supported (a single head with scalar features).
MACE wasn't working with LLPR because of some stupid reasons, among them a missing `last_layer_feature_size` attribute.

This is a temporary fix so that things work (one can test the tutorial using MACE instead of soap-bpnn) before the workshop next week. I think @SanggyuChong is working on some improvements, so the implementation will be changed in the future. Eventually we should support multihead models; for now I just made the MACE internal head work with LLPR. This works with MACE foundation models.
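For what it's worth, here is a hedged sketch of how the hardcoded value could eventually be avoided by inferring the size from the head itself; `infer_last_layer_feature_size` is a hypothetical helper, not part of metatrain, and it assumes the head ends in a `torch.nn.Linear` readout:

```python
import torch

def infer_last_layer_feature_size(head: torch.nn.Module) -> int:
    """Return the input width of the last Linear layer registered in ``head``.

    This would replace a hardcoded ``self.last_layer_feature_size = 128``.
    It assumes the readout is the last registered Linear module, which holds
    for simple scalar heads but not necessarily for spherical-harmonics
    heads with more than one irrep.
    """
    last_linear = None
    for module in head.modules():
        if isinstance(module, torch.nn.Linear):
            last_linear = module  # keep the last one encountered
    if last_linear is None:
        raise ValueError("head has no Linear layer to infer the size from")
    return last_linear.in_features
```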