STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking

16 May 2025
Sicheng Shen
Dongcheng Zhao
Linghao Feng
Zeyang Yue
Jindong Li
Tenglong Li
Guobin Shen
Yi Zeng
Abstract

Spiking Transformers have recently emerged as promising architectures for combining the efficiency of spiking neural networks with the representational power of self-attention. However, the lack of standardized implementations, evaluation pipelines, and consistent design choices has hindered fair comparison and principled analysis. In this paper, we introduce STEP, a unified benchmark framework for Spiking Transformers that supports a wide range of tasks, including classification, segmentation, and detection across static, event-based, and sequential datasets. STEP provides modular support for diverse components such as spiking neurons, input encodings, surrogate gradients, and multiple backends (e.g., SpikingJelly, BrainCog). Using STEP, we reproduce and evaluate several representative models, and conduct systematic ablation studies on attention design, neuron types, encoding schemes, and temporal modeling capabilities. We also propose a unified analytical model for energy estimation, accounting for spike sparsity, bitwidth, and memory access, and show that quantized ANNs may offer comparable or better energy efficiency. Our results suggest that current Spiking Transformers rely heavily on convolutional frontends and lack strong temporal modeling, underscoring the need for spike-native architectural innovations. The full code is available at: this https URL
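To make the abstract's energy claim concrete, below is a minimal, illustrative sketch of the kind of analytical energy model described: one that scales compute cost by spike sparsity and bitwidth while still charging for memory access. The function name, signature, and all constants are assumptions for illustration only, not the paper's actual model or STEP's API.

```python
# Illustrative analytical energy estimate for one spiking layer.
# All constants are rough, assumed values (not taken from the paper).

def estimate_layer_energy(
    synaptic_ops: float,      # dense synaptic operations per timestep
    firing_rate: float,       # mean spike sparsity in [0, 1]
    timesteps: int,           # simulation length T
    bitwidth: int = 32,       # operand bitwidth
    e_ac_pj: float = 0.9,     # energy per 32-bit accumulate (pJ, illustrative)
    e_mem_pj: float = 2.0,    # energy per operand memory access (pJ, illustrative)
) -> float:
    """Return an estimated energy cost in picojoules for one spiking layer."""
    # Compute energy: only active (spiking) synapses trigger accumulates,
    # and lower-bitwidth arithmetic is scaled down proportionally.
    compute_pj = synaptic_ops * firing_rate * timesteps * e_ac_pj * (bitwidth / 32)
    # Memory energy: weights and membrane states are still read/written every
    # timestep, which is one reason spike sparsity alone does not guarantee
    # savings over a well-quantized ANN.
    memory_pj = synaptic_ops * timesteps * e_mem_pj * (bitwidth / 32)
    return compute_pj + memory_pj


# Example: a layer with 1e6 dense synaptic ops, 15% firing rate, T = 4.
print(f"{estimate_layer_energy(1e6, 0.15, 4):.1f} pJ")
```

Under this kind of accounting, reducing the firing rate lowers only the compute term, while the memory term persists across timesteps, which is consistent with the abstract's observation that quantized ANNs can match or beat Spiking Transformers on energy.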

@article{shen2025_2505.11151,
  title={STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking},
  author={Sicheng Shen and Dongcheng Zhao and Linghao Feng and Zeyang Yue and Jindong Li and Tenglong Li and Guobin Shen and Yi Zeng},
  journal={arXiv preprint arXiv:2505.11151},
  year={2025}
}