A Graph Attention Spatio-temporal Convolutional Network for 3D Human
Pose Estimation in Video
3D pose estimation in video benefits from both temporal and spatial information. Spatio-temporal information can help tackle occlusion and depth ambiguity, which remain outstanding problems. Previous methods have focused on temporal consistency and have not proposed an effective way to combine it with spatial semantics. In this work, we improve the learning of kinematic constraints in the human skeleton, namely posture, second-order joint relations, and symmetry, by modeling both local and global spatial information via attention mechanisms. Importantly, we carefully design the interleaving of spatial and temporal information to achieve a synergistic effect. We contribute a simple and effective graph attention spatio-temporal convolutional network (GAST-Net) that comprises interleaved temporal convolutional and graph attention blocks. Local second-order and symmetry constraints mitigate depth ambiguity for joints with only one first-order neighbor (such as the ankles), while global posture semantics combine more effectively with temporal information to address self-occlusion. Experiments on two challenging benchmark datasets, Human3.6M and HumanEva-I, show improvements of 4.1% and 8.2%, respectively.