Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity

10 June 2025
Wanjin Feng
Xingyu Gao
Wenqian Du
Hailong Shi
Peilin Zhao
Pengcheng Wu
Chunyan Miao
Main: 8 pages · Appendix: 5 pages · Bibliography: 3 pages · 5 figures · 11 tables
Abstract

Spiking Neural Networks (SNNs) often suffer from high time complexity O(T) due to the sequential processing of T spikes, making training computationally expensive. In this paper, we propose a novel Fixed-point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions. FPT reduces the time complexity to O(K), where K is a small constant (usually K=3), by using a fixed-point iteration form of Leaky Integrate-and-Fire (LIF) neurons for all T timesteps. We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our proposed method. Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy. This makes FPT a scalable and efficient solution for real-world applications, particularly for long-term tasks. Our code will be released at this https URL.
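The abstract does not give the update equations, so the following is only a minimal sketch of the general idea of a fixed-point parallel iteration over LIF dynamics, not the paper's implementation. It assumes soft-reset LIF neurons; the function name fixed_point_parallel_lif, the decay lam, the threshold theta, and the dense lower-triangular decay matrix are illustrative assumptions.

import torch

def fixed_point_parallel_lif(x, lam=0.9, theta=1.0, K=3):
    """Hypothetical sketch: evaluate soft-reset LIF neurons over all T
    timesteps at once via K fixed-point refinements of the spike train.

    x: input current of shape (T, N) -- T timesteps, N neurons.
    Returns a spike tensor of shape (T, N).
    """
    T, N = x.shape
    device, dtype = x.device, x.dtype

    # Lower-triangular decay matrix: D[t, tau] = lam**(t - tau) for tau <= t.
    idx = torch.arange(T, device=device)
    D = torch.tril(lam ** (idx.unsqueeze(1) - idx.unsqueeze(0)).to(dtype))

    s = torch.zeros(T, N, device=device, dtype=dtype)  # initial spike estimate
    for _ in range(K):  # K is a small constant (the abstract mentions K=3)
        # The reset term depends on spikes from the previous timestep, so
        # shift the current estimate down by one step.
        s_prev = torch.cat(
            [torch.zeros(1, N, device=device, dtype=dtype), s[:-1]], dim=0
        )
        # All T membrane potentials in a single matrix product, instead of a
        # sequential scan over timesteps.
        u = D @ (x - theta * s_prev)
        # Refine the spike estimate; during training a surrogate gradient
        # (e.g. a sigmoid) would replace the hard threshold.
        s = (u >= theta).to(dtype)
    return s

In this reading, each of the K iterations updates the spike estimate for every timestep in parallel, so the sequential O(T) dependence is replaced by a constant number of parallel passes, which is consistent with the O(K) complexity claimed above.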

@article{feng2025_2506.12087,
  title={Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity},
  author={Wanjin Feng and Xingyu Gao and Wenqian Du and Hailong Shi and Peilin Zhao and Pengcheng Wu and Chunyan Miao},
  journal={arXiv preprint arXiv:2506.12087},
  year={2025}
}