
Highly Efficient 3D Human Pose Tracking from Events with Spiking Spatiotemporal Transformer

Abstract

The event camera, an asynchronous vision sensor that captures scene dynamics, presents new opportunities for highly efficient 3D human pose tracking. Existing approaches typically adopt modern-day Artificial Neural Networks (ANNs), such as CNNs or Transformers, where sparse events are converted into dense images or paired with additional gray-scale images as input. Such practices, however, ignore the inherent sparsity of events, resulting in redundant computations, increased energy consumption, and potentially degraded performance. Motivated by these observations, we introduce the first sparse Spiking Neural Network (SNN) framework for 3D human pose tracking based solely on events. Our approach eliminates the need to convert sparse data to dense formats or incorporate additional images, thereby fully exploiting the innate sparsity of input events. Central to our framework is a novel Spiking Spatiotemporal Transformer, which enables bi-directional spatiotemporal fusion of spike pose features and provides a guaranteed similarity measure between binary spike features in spiking attention. Moreover, we have constructed a large-scale synthetic dataset, SynEventHPD, that features a broad and diverse set of 3D human motions, as well as substantially longer event streams. Empirical experiments demonstrate the superiority of our approach over existing state-of-the-art (SOTA) ANN-based methods, requiring only 19.1% of the FLOPs and 3.6% of the energy cost. Furthermore, our approach outperforms existing SNN-based benchmarks in this task, highlighting the effectiveness of our proposed SNN framework. The dataset will be released upon acceptance, and code can be found at this https URL.
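The abstract notes that spiking attention can provide a guaranteed similarity measure between binary spike features. As a rough intuition (not the paper's actual Spiking Spatiotemporal Transformer, whose exact formulation is defined in the full text), the dot product of two binary {0,1} spike vectors simply counts coincident spikes, so normalizing by the feature dimension yields a similarity bounded in [0, 1] without softmax or floating-point rescaling. The sketch below is a hypothetical NumPy illustration of this idea; all names and shapes are assumptions.

```python
import numpy as np

def toy_spiking_attention(Q, K, V):
    """Illustrative spiking-style attention over binary spike features.

    Q, K, V: (tokens, dim) matrices with entries in {0, 1}.
    Q @ K.T counts coincident spikes between query and key tokens;
    dividing by dim bounds the similarity in [0, 1].
    """
    d = Q.shape[1]
    sim = (Q @ K.T) / d   # bounded spike-coincidence similarity
    return sim @ V        # weighted aggregation of value spikes

# Toy usage with sparse random binary spikes (hypothetical example).
rng = np.random.default_rng(0)
Q = (rng.random((4, 16)) > 0.7).astype(np.float32)
K = (rng.random((4, 16)) > 0.7).astype(np.float32)
V = (rng.random((4, 16)) > 0.7).astype(np.float32)
print(toy_spiking_attention(Q, K, V).shape)  # (4, 16)
```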

@article{zou2025_2303.09681,
  title={Highly Efficient 3D Human Pose Tracking from Events with Spiking Spatiotemporal Transformer},
  author={Shihao Zou and Yuxuan Mu and Wei Ji and Zi-An Wang and Xinxin Zuo and Sen Wang and Weixin Si and Li Cheng},
  journal={arXiv preprint arXiv:2303.09681},
  year={2025}
}