
Fixing some torch segmented_polynomial support #270

Merged
phiandark merged 3 commits into main from torch_equiv_issues_apr26
Apr 13, 2026
Conversation

@phiandark
Collaborator

This should fix issues #265 and #267.

Add __reduce__ to SegmentedPolynomialFromUniform1dJit, SegmentedPolynomialFusedTP,
SegmentedPolynomialIndexedLinear, and SegmentedPolynomial so that unpickling
re-delegates to SegmentedPolynomial.__init__, which selects the appropriate
backend for the loading machine. Also wrap backend construction in try/except
ImportError to handle the case where cuequivariance_ops_torch is importable
but specific extensions (e.g. uniform_1d) are not available. Remove the
pre-built fallback submodule for fused_tp in favor of lazy construction
from the stored polynomial when CPU fallback is needed.
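The pickling fix above can be sketched in pure Python. This is a minimal illustration, not the cuequivariance-torch implementation: the class body, the `descriptor` string, and the `_build_fused_backend` stub are stand-ins, and the CPU-fallback marker is invented for the example. The key pattern is that `__reduce__` returns the constructor arguments, so unpickling re-runs `__init__` and the loading machine selects whatever backend it actually has, with backend construction wrapped in `try/except ImportError`:

```python
import pickle


class SegmentedPolynomial:
    """Sketch of a dispatching front-end that picks a backend at init time."""

    def __init__(self, descriptor, method="auto"):
        self.descriptor = descriptor
        self.method = method
        self.backend = self._select_backend(descriptor)

    def _select_backend(self, descriptor):
        # Wrap backend construction so that a machine where the extension
        # package imports but a specific kernel (e.g. uniform_1d) is
        # missing falls back cleanly instead of crashing.
        try:
            return self._build_fused_backend(descriptor)
        except ImportError:
            return ("cpu-fallback", descriptor)

    def _build_fused_backend(self, descriptor):
        # Stand-in: simulate an optional extension that is not available.
        raise ImportError("uniform_1d extension not available")

    def __reduce__(self):
        # Re-delegate unpickling to __init__ so the *loading* machine
        # selects its own backend, instead of deserializing a backend
        # object that was built for the saving machine.
        return (type(self), (self.descriptor, self.method))


poly = SegmentedPolynomial(descriptor="some-descriptor")
restored = pickle.loads(pickle.dumps(poly))
assert restored.descriptor == poly.descriptor
assert restored.backend == ("cpu-fallback", "some-descriptor")
```

Because `__reduce__` stores only the constructor inputs, a checkpoint pickled on a GPU machine with the fused kernels can be unpickled on a CPU-only machine and silently land on the fallback path.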

Made-with: Cursor
Precompute descriptor-derived values (einsum equation, output size,
path indices) in __init__ instead of referencing the descriptor closure
at forward time. Dynamo cannot trace custom Subscripts operations.
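The Dynamo-friendliness change can also be sketched. The shape below is illustrative: `Descriptor` and its fields (`equation`, `output_size`, `path_indices`) mirror the names in the PR description but are hypothetical stand-ins, not the cuequivariance API. The point is that `forward` reads only plain precomputed attributes, never the descriptor object captured in a closure:

```python
class Descriptor:
    """Stand-in for a polynomial descriptor (hypothetical fields)."""

    def __init__(self, equation, output_size, path_indices):
        self.equation = equation
        self.output_size = output_size
        self.path_indices = path_indices


class PrecomputedModule:
    def __init__(self, descriptor):
        # Derive everything needed by forward() once, at construction
        # time, and store it as plain Python values. torch.compile
        # (Dynamo) traces attribute reads fine, but cannot trace custom
        # objects referenced from inside the compiled forward.
        self.equation = descriptor.equation
        self.output_size = descriptor.output_size
        self.path_indices = tuple(descriptor.path_indices)

    def forward(self, inputs):
        # forward() touches only the precomputed values; the descriptor
        # object itself is no longer referenced here.
        gathered = [inputs[i] for i in self.path_indices]
        return (self.equation, self.output_size, gathered)
```

Moving the descriptor lookups into `__init__` trades a tiny amount of construction-time work for a forward pass that Dynamo can capture without graph breaks.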

Made-with: Cursor
@copy-pr-bot

copy-pr-bot bot commented Apr 13, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@phiandark phiandark merged commit c9252a1 into main Apr 13, 2026
9 checks passed
