ResearchTrend.AI
Deployment-friendly Lane-changing Intention Prediction Powered by Brain-inspired Spiking Neural Networks

9 February 2025
Junjie Yang
Shuqi Shen
Hui Zhong
Qiming Zhang
Hongliang Lu
Hai Yang
Abstract

Accurate, real-time prediction of surrounding vehicles' lane-changing intentions is a critical challenge in deploying safe and efficient autonomous driving systems in open-world scenarios. Existing high-performing methods remain hard to deploy due to their high computational cost, long training times, and excessive memory requirements. Here, we propose an efficient lane-changing intention prediction approach based on brain-inspired Spiking Neural Networks (SNNs). By leveraging the event-driven nature of SNNs, the proposed approach encodes vehicle states more efficiently. Comparison experiments on the HighD and NGSIM datasets demonstrate that our method significantly improves training efficiency and reduces deployment costs while maintaining comparable prediction accuracy. In particular, compared to the baseline, our approach reduces training time by 75% and memory usage by 99.9%. These results validate the efficiency and reliability of our method for lane-changing prediction, highlighting its potential for safe and efficient autonomous driving systems with reduced training time, lower memory usage, and faster inference.
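The abstract's key efficiency claim rests on event-driven encoding: an SNN only computes when its input changes meaningfully, rather than at every time step. The paper's actual encoding scheme is not given here, so the following is a minimal illustrative sketch, assuming a delta-threshold encoder that turns a continuous vehicle state (e.g., lateral offset within the lane) into sparse spike events, plus a leaky integrate-and-fire neuron; all thresholds and dynamics are hypothetical.

```python
# Hypothetical sketch of event-driven spike encoding for a vehicle state
# signal, in the spirit of SNN approaches. Parameters are illustrative,
# not taken from the paper.

def delta_encode(signal, threshold=0.1):
    """Emit a +1/-1 spike only when the signal has moved by more than
    `threshold` since the last emitted event; otherwise emit 0.
    Computation is thus triggered by change, not by the clock."""
    spikes = []
    last = signal[0]
    for x in signal:
        if x - last >= threshold:
            spikes.append(1)
            last = x
        elif last - x >= threshold:
            spikes.append(-1)
            last = x
        else:
            spikes.append(0)
    return spikes

class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron: membrane potential
    decays by `leak` each step, integrates input current, and fires
    (then resets) when it crosses `v_th`."""
    def __init__(self, leak=0.9, v_th=1.0):
        self.leak, self.v_th, self.v = leak, v_th, 0.0

    def step(self, current):
        self.v = self.leak * self.v + current
        if self.v >= self.v_th:
            self.v = 0.0          # reset after firing
            return 1
        return 0

# Example: a vehicle drifting laterally yields sparse spike events.
lateral_offset = [0.00, 0.02, 0.15, 0.31, 0.33, 0.50, 0.52]
print(delta_encode(lateral_offset, threshold=0.1))
# -> [0, 0, 1, 1, 0, 1, 0]
```

Because most entries of the spike train are zero, downstream spiking layers can skip work on those steps, which is one plausible source of the training-time and memory savings the abstract reports.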

View on arXiv
@article{shen2025_2502.08659,
  title={Deployment-friendly Lane-changing Intention Prediction Powered by Brain-inspired Spiking Neural Networks},
  author={Shuqi Shen and Junjie Yang and Hui Zhong and Hongliang Lu and Xinhu Zheng and Hai Yang},
  journal={arXiv preprint arXiv:2502.08659},
  year={2025}
}