feat(mtsrust): ArcArray for data [dlpk]#1052
Conversation
chore(mtsrust): apply arcarray consistently
because into_shared() may copy, into() may not, and ArcArray::() is even more explicit
Luthaf
left a comment
Looks good, just some small questions and suggestions!
if TypeId::of::<T>() != TypeId::of::<f64>() {
    panic!("Array::data() is only supported for f64 arrays");
This is fine, but we'll need to remove this function before 0.2 anyway; everyone should go through the dlpack version.
- let mut array = std::mem::take(self);
- array = array.to_shape(shape).expect("invalid shape").to_owned();
- std::mem::swap(self, &mut array);
+ *self = self.to_shape(shape).expect("invalid shape").to_shared();
I guess this would invalidate any pointers held through the as_dlpack function?
Yes, with the current reshape implementation (to_shape().to_shared()) it would, since to_shape() returns a CowArray view, and .to_shared() on a view calls to_owned().into_shared(), which allocates a new buffer. The ArcArray and any outstanding DLPack tensor would then point to different data, despite flags=0 (shared).
The pointer doesn't dangle (Arc keeps the old buffer alive in the DLPack ManagerContext), but the producer and consumer silently diverge.
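A minimal runnable sketch of the divergence, using a plain ndarray ArcArray as a stand-in for the block's data:

```rust
use ndarray::{ArcArray2, Array2};

fn main() {
    // Shared (Arc-backed) 2x3 array standing in for the block's data.
    let a: ArcArray2<f64> = Array2::<f64>::zeros((2, 3)).into_shared();
    let before = a.as_ptr();

    // to_shape() yields a CowArray view; calling .to_shared() on that
    // view clones the elements into a fresh buffer instead of reusing
    // the existing Arc.
    let reshaped = a.to_shape((3, 2)).expect("invalid shape").to_shared();

    // A DLPack consumer still holding `before` now points at the old
    // buffer: kept alive by the Arc, but no longer this array's data.
    assert_ne!(before, reshaped.as_ptr());
}
```

(Assumes ndarray >= 0.15, where to_shape() exists.)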
  };
- let max_version = DLPackVersion { major: 1, minor: 1 };
  let device = DLDevice::cpu();
+ let max_version = DLPackVersion { major: 1, minor: 9 };
Could you add a small comment here about why this is minor: 9?
it needs to be some arbitrarily large thing, so probably better to take it from the library and increment it :D
> it needs to be some arbitrarily large thing, so probably better to take it from the library and increment it :D

but I'm inclined to push it to #1045
Sure, I'm just asking for a comment explaining this so future readers are not confused.
Ah, I fixed it in the next PR to take current + 1.
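For reference, a runnable sketch of that shape of fix; the version constants below are stand-ins for whatever dlpk exports, not its actual API:

```rust
// Stand-ins for version constants the library would export; the names
// and values are assumptions for illustration only.
const DLPACK_MAJOR_VERSION: u32 = 1;
const DLPACK_MINOR_VERSION: u32 = 1;

struct DLPackVersion {
    major: u32,
    minor: u32,
}

fn main() {
    // "current + 1": advertise support for one minor version past what
    // the library itself implements, instead of a magic `minor: 9`.
    let max_version = DLPackVersion {
        major: DLPACK_MAJOR_VERSION,
        minor: DLPACK_MINOR_VERSION + 1,
    };
    println!("max_version = {}.{}", max_version.major, max_version.minor);
}
```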
Closes #1043. Needs an upstream release of dlpk, i.e. metatensor/dlpk#20
Also of course, the tmp commit needs to be removed.

cc @Luthaf: would you prefer handling the whole of #1043 here, i.e. generalizing types instead of being stuck at f64?

Precursor to #1025 and therefore #1045.
Contributor (creator of pull-request) checklist
Reviewer checklist