Temporal Separation with Entropy Regularization for Knowledge Distillation in Spiking Neural Networks

5 March 2025
Kairong Yu, Chengting Yu, Tianqing Zhang, Xiaochen Zhao, Shu Yang, Hongwei Wang, Qiang Zhang, Qi Xu
Abstract

Spiking Neural Networks (SNNs), inspired by the human brain, offer significant computational efficiency through discrete spike-based information transfer. Despite their potential to reduce inference energy consumption, a performance gap persists between SNNs and Artificial Neural Networks (ANNs), primarily due to current training methods and inherent model limitations. While recent research has aimed to enhance SNN learning by employing knowledge distillation (KD) from ANN teacher networks, traditional distillation techniques often overlook the distinctive spatiotemporal properties of SNNs, thus failing to fully leverage their advantages. To overcome these challenges, we propose a novel logit distillation method characterized by temporal separation and entropy regularization. This approach improves existing SNN distillation techniques by performing distillation learning on logits across different time steps, rather than merely on aggregated output features. Furthermore, the integration of entropy regularization stabilizes model optimization and further boosts performance. Extensive experimental results indicate that our method surpasses prior SNN distillation strategies, whether based on logit distillation, feature distillation, or a combination of both. The code will be available on GitHub.
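The abstract describes distilling the student SNN's per-time-step logits against the ANN teacher, rather than first aggregating them, and adding an entropy term for stable optimization. The following is a minimal PyTorch sketch of that idea only; the tensor shapes, temperature, the sign and weight of the entropy term, and the function name ts_kd_loss are assumptions for illustration, not the authors' released implementation.

# Hypothetical sketch: temporal-separation logit distillation with an entropy term.
# Assumes the student SNN emits logits at every time step and the teacher ANN is static.
import torch
import torch.nn.functional as F

def ts_kd_loss(student_logits_t, teacher_logits, tau=4.0, lambda_ent=0.1):
    # student_logits_t: [T, B, C] -- student SNN logits at each of T time steps
    # teacher_logits:   [B, C]    -- ANN teacher logits (single forward pass)
    T = student_logits_t.size(0)
    teacher_prob = F.softmax(teacher_logits / tau, dim=-1)

    # Temporal separation: distill each time step against the teacher separately,
    # then average, instead of averaging the student's logits over time first.
    kd = 0.0
    for t in range(T):
        student_logp = F.log_softmax(student_logits_t[t] / tau, dim=-1)
        kd = kd + F.kl_div(student_logp, teacher_prob, reduction="batchmean")
    kd = (tau ** 2) * kd / T

    # Entropy regularization on the student's time-averaged prediction; here it
    # discourages overconfident outputs (the exact form is an assumption).
    mean_prob = F.softmax(student_logits_t.mean(dim=0), dim=-1)
    entropy = -(mean_prob * torch.log(mean_prob + 1e-8)).sum(dim=-1).mean()
    return kd - lambda_ent * entropy

In practice a loss of this kind would be combined with the standard cross-entropy loss on ground-truth labels; the precise weighting between the terms is not specified in the abstract.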

@article{yu2025_2503.03144,
  title={Temporal Separation with Entropy Regularization for Knowledge Distillation in Spiking Neural Networks},
  author={Kairong Yu and Chengting Yu and Tianqing Zhang and Xiaochen Zhao and Shu Yang and Hongwei Wang and Qiang Zhang and Qi Xu},
  journal={arXiv preprint arXiv:2503.03144},
  year={2025}
}