Code for reproducing results in 'Graph-based Normalizing Flow for Human Motion Generation and Reconstruction'
If this code helps with your work, please cite:
@article{yin2021graph,
title={Graph-based Normalizing Flow for Human Motion Generation and Reconstruction},
author={Yin, Wenjie and Yin, Hang and Kragic, Danica and Bj{\"o}rkman, M{\aa}rten},
journal={arXiv preprint arXiv:2104.03020},
year={2021}
}

We find that when we provide the whole past sequence but some body markers are missing, some joints may occasionally fly apart.
We use a spatial convolutional network in the affine coupling layer to extract skeleton features. The conditioning information includes the past poses. In MoGlow, all of this is concatenated into one vector; we instead use a spatial-temporal convolutional network to extract features of the past sequence.
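The coupling step above can be sketched as follows. This is a minimal NumPy illustration of a conditional affine coupling layer, not the repository's implementation: the dimensions `D` and `C` are made up, and the random linear map `W` stands in for the spatial convolutional conditioner described above.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 30  # hypothetical pose dimension per frame
C = 20  # hypothetical dimension of the past-pose conditioning vector

# Placeholder conditioner: in the paper this is a spatial convolutional
# network over the skeleton graph; here a fixed random linear map stands in.
W = rng.normal(scale=0.1, size=(D // 2 + C, D))  # outputs [log-scale | shift]

def conditioner(x_a, cond):
    h = np.concatenate([x_a, cond]) @ W
    log_s, t = h[:D // 2], h[D // 2:]
    return np.tanh(log_s), t  # bounded log-scale keeps the map stable

def coupling_forward(x, cond):
    # First half passes through unchanged; second half is transformed.
    x_a, x_b = x[:D // 2], x[D // 2:]
    log_s, t = conditioner(x_a, cond)
    y_b = x_b * np.exp(log_s) + t
    return np.concatenate([x_a, y_b])

def coupling_inverse(y, cond):
    # y_a == x_a, so (log_s, t) can be recomputed and the transform undone.
    y_a, y_b = y[:D // 2], y[D // 2:]
    log_s, t = conditioner(y_a, cond)
    x_b = (y_b - t) * np.exp(-log_s)
    return np.concatenate([y_a, x_b])
```

Because the conditioner only ever sees the untransformed half and the conditioning vector, the layer is exactly invertible regardless of what network computes the scale and shift.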
To reconstruct the missing data, we first generate future poses, then reverse the generated poses together with the control signals. The reversed sequences are then used as control information to generate markers that fill the holes in the missing data.
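The generate-reverse-regenerate procedure can be outlined as below. This is only a structural sketch: `generate` is a hypothetical stand-in for sampling from the trained flow conditioned on a control sequence, here replaced by a trivial stub so the pipeline is runnable.

```python
def generate(control_seq):
    # Hypothetical stub for sampling poses from the trained flow,
    # conditioned frame-by-frame on control_seq.
    return [frame.copy() for frame in control_seq]

def fill_missing(observed, n_future):
    # Step 1: generate future poses from the (partially missing) past.
    future = generate(observed[-n_future:])
    # Step 2: reverse the observed sequence together with the generated poses.
    reversed_seq = (observed + future)[::-1]
    # Step 3: treat the reversed sequence as control information, generate
    # markers to fill the holes, then flip back to the original time order.
    filled = generate(reversed_seq)[::-1]
    return filled[:len(observed)]
```

The point of the reversal is that frames which were "future" relative to the gap become "past" context, so the flow can condition on them when regenerating the missing markers.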
The data is pooled from the Edinburgh Locomotion, CMU Motion Capture, and HDM05 datasets. Thanks to Gustav Eje Henter, Simon Alexanderson, and Jonas Beskow for originally sharing the data here.