Hi, thanks for releasing this awesome project.
I am interested in using the pretrained BeingH0.5 model for human hand-action prediction, specifically following the MANO-based pipeline from the original BeingH0 repository.
My goals are:
- To predict hand actions (MANO parameters) using the BeingH0.5 checkpoint.
- To render the predicted actions into videos for visualization.
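For context, here is a minimal sketch of the per-frame parameter layout I am assuming the model predicts. Note that the dimensions (48-dim axis-angle pose including global orientation, 10-dim shape, 3-dim wrist translation) come from the standard MANO model, not from the BeingH0 code, and the `split_mano_params` helper is just my own illustration of how I plan to consume the outputs:

```python
import numpy as np

# Assumed per-frame MANO parameter layout (standard MANO conventions):
#   pose:  48 values = 3 global orientation + 15 hand joints * 3 axis-angle
#   shape: 10 PCA shape coefficients
#   trans: 3 wrist translation values
POSE_DIM, SHAPE_DIM, TRANS_DIM = 48, 10, 3

def split_mano_params(frame_params: np.ndarray):
    """Split a flat (T, 61) prediction into pose / shape / trans blocks."""
    assert frame_params.shape[-1] == POSE_DIM + SHAPE_DIM + TRANS_DIM
    pose = frame_params[..., :POSE_DIM]
    shape = frame_params[..., POSE_DIM:POSE_DIM + SHAPE_DIM]
    trans = frame_params[..., -TRANS_DIM:]
    return pose, shape, trans

# Dummy prediction for a 16-frame clip of one hand.
preds = np.zeros((16, POSE_DIM + SHAPE_DIM + TRANS_DIM))
pose, shape, trans = split_mano_params(preds)
print(pose.shape, shape.shape, trans.shape)  # (16, 48) (16, 10) (16, 3)
```

If the checkpoint uses a different parameterization (e.g. rotation matrices or PCA pose components), a pointer to the correct layout would also be very helpful.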
Which specific scripts or inference entry points should I use for this purpose? Any guidance or a brief pointer to the relevant documentation/code snippets would be greatly appreciated.