Ruipeng Wang1* Langkun Zhong2* Miaowei Wang3
1University of Pennsylvania 2The University of Hong Kong 3The University of Edinburgh
(* Equal contribution)
TL;DR: Since every frame of an animated sequence depicts the same underlying object, we enforce a consistency prior while fine-tuning existing rigging models, enabling them to learn robust, pose-invariant rigs from abundant unlabeled data.
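The idea can be illustrated with a minimal sketch: if a rigging model is applied independently to several posed frames of the same object, a consistency prior penalizes disagreement between the per-frame predictions (e.g. skinning weights). The function below is purely illustrative, assuming the model outputs a `(V, J)` vertex-to-joint weight matrix per frame; it is not the paper's actual loss or API.

```python
import numpy as np

def consistency_loss(weights_per_frame):
    """Hypothetical consistency prior: mean squared deviation of each
    frame's predicted skinning weights from the sequence mean.
    Zero if and only if every frame predicts the same rig."""
    w = np.stack(weights_per_frame)       # (T, V, J): T frames, V verts, J joints
    mean = w.mean(axis=0, keepdims=True)  # pose-invariant target, shape (1, V, J)
    return float(((w - mean) ** 2).mean())

# Identical predictions across frames incur no penalty;
# disagreeing predictions are penalized.
same = consistency_loss([np.eye(3), np.eye(3)])        # -> 0.0
diff = consistency_loss([np.eye(3), np.zeros((3, 3))])  # -> > 0.0
```

In practice such a term would be added to the model's supervised rigging loss during fine-tuning, so labeled data and the unlabeled sequence consistency signal are optimized jointly.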
Comparison of our method vs Puppeteer. Our method (top, blue) yields a complete, temporally consistent skeleton with smooth, coherent skinning weights, whereas Puppeteer (bottom, red) produces an incomplete skeleton with missing hand rigging and unstable, blocky skinning.
This repository is currently under construction. We are organizing the clean version of the training and inference code into the src/ directory.
Currently, the repository structure is:
- doc/: Source code for the project page.
- src/: (Coming Soon) Will contain the official implementation.
While we clean up the code, you can access our experimental notebooks and checkpoints via the links below. These notebooks were used to run the experiments in the paper on NVIDIA A100 GPUs.
| Resource | Description | Link |
|---|---|---|
| Google Colab | Training & Inference Notebooks | |
| Google Drive | All Checkpoints & Sample Data | 📂 Open Drive Folder |
Note: The code in the Colab notebooks is raw and experimental. We are working on merging it into this repository.
If you find our work useful for your research, please consider citing:
```bibtex
@misc{wang2026sprig,
  title={SPRig: Self-Supervised Pose-Invariant Rigging from Mesh Sequences},
  author={Ruipeng Wang and Langkun Zhong and Miaowei Wang},
  year={2026},
  eprint={2602.12740},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2602.12740},
}
```