TDFormer: A Top-Down Attention-Controlled Spiking Transformer

17 May 2025
Zizheng Zhu
Yingchao Yu
Zeqi Zheng
Zhaofei Yu
Yaochu Jin
Abstract

Traditional spiking neural networks (SNNs) can be viewed as a combination of multiple subnetworks, each running for one time step, with shared parameters and the membrane potential as the only information link between them. However, the implicit nature of the membrane potential limits its ability to represent temporal information effectively. As a result, each time step cannot fully leverage information from previous time steps, severely limiting the model's performance. Inspired by the top-down mechanism in the brain, we introduce TDFormer, a novel model with a top-down feedback structure that functions hierarchically and leverages high-order representations from earlier time steps to modulate the processing of low-order information at later stages. The feedback structure contributes in two ways: 1) During forward propagation, it increases the mutual information across time steps, indicating that richer temporal information is transmitted and integrated across time steps. 2) During backward propagation, we theoretically prove that the feedback structure alleviates the problem of vanishing gradients along the time dimension. Together, these mechanisms significantly and consistently improve model performance on multiple datasets. In particular, our model achieves state-of-the-art performance on ImageNet with an accuracy of 86.83%.
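To make the top-down feedback idea concrete, the sketch below shows one possible reading of the abstract: the high-order output of time step t-1 gates the low-order input at time step t, rather than the membrane potential carrying all temporal information on its own. This is not the authors' code; the LIF neuron, module names, and shapes are illustrative assumptions written in plain PyTorch.

# Minimal sketch (assumed structure, not the TDFormer implementation) of a
# spiking block with top-down feedback across time steps.
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire neuron with a hard-threshold spike (forward only)."""

    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th

    def forward(self, x, v):
        v = v + (x - v) / self.tau           # leaky integration of the input
        spikes = (v >= self.v_th).float()    # fire when the threshold is crossed
        v = v * (1.0 - spikes)               # hard reset after a spike
        return spikes, v


class TopDownSpikingBlock(nn.Module):
    """Bottom-up mixing (stand-in for spiking attention + MLP) plus a top-down
    gate computed from the previous time step's high-order output."""

    def __init__(self, dim: int):
        super().__init__()
        self.bottom_up = nn.Linear(dim, dim)   # placeholder for the spiking transformer block
        self.top_down = nn.Linear(dim, dim)    # maps feedback into a modulation gate
        self.neuron = LIFNeuron()

    def forward(self, x_t, feedback, v):
        # Top-down modulation: gate the current (low-order) input with the
        # high-order representation produced at the previous time step.
        gate = torch.sigmoid(self.top_down(feedback))
        h = self.bottom_up(x_t * gate)
        spikes, v = self.neuron(h, v)
        return spikes, v


if __name__ == "__main__":
    T, B, D = 4, 2, 16                       # time steps, batch, feature dim
    block = TopDownSpikingBlock(D)
    x = torch.rand(T, B, D)                  # toy per-time-step inputs
    feedback = torch.zeros(B, D)             # no feedback at t = 0
    v = torch.zeros(B, D)                    # membrane potential state
    for t in range(T):
        out, v = block(x[t], feedback, v)
        feedback = out                       # high-order output feeds back to t + 1
    print(out.shape)                         # torch.Size([2, 16])

In this toy loop the feedback path gives each time step a second, explicit route to information from earlier steps, in addition to the membrane potential, which is the mechanism the abstract credits with higher cross-step mutual information and better gradient flow through time.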

View on arXiv
@article{zhu2025_2505.15840,
  title={TDFormer: A Top-Down Attention-Controlled Spiking Transformer},
  author={Zizheng Zhu and Yingchao Yu and Zeqi Zheng and Zhaofei Yu and Yaochu Jin},
  journal={arXiv preprint arXiv:2505.15840},
  year={2025}
}