ResearchTrend.AI

Outlook Towards Deployable Continual Learning for Particle Accelerators

4 April 2025
Kishansingh Rajput
Sen Lin
Auralee Edelen
Willem Blokland
Malachi Schram
Abstract

Particle accelerators are high-power, complex machines. To ensure their uninterrupted operation, thousands of pieces of equipment must be synchronized, which requires addressing many challenges, including design, optimization and control, anomaly detection, and machine protection. With recent advancements, Machine Learning (ML) holds promise to assist in more advanced prognostics, optimization, and control. While ML-based solutions have been developed for several applications in particle accelerators, only a few have reached deployment, and even fewer have seen long-term usage, due to data distribution drifts caused by changes in both measurable and non-measurable parameters. In this paper, we identify some of the key areas within particle accelerators where continual learning can maintain ML model performance under distribution drift. In particular, we first discuss existing applications of ML in particle accelerators and their limitations due to distribution drift. Next, we review existing continual learning techniques and investigate their potential to address data distribution drifts in accelerators. By identifying the opportunities and challenges in applying continual learning, this paper seeks to open up this new field and inspire more research efforts towards deployable continual learning for particle accelerators.
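The abstract's core idea — updating a deployed model as the data distribution drifts without forgetting past operating regimes — can be illustrated with replay-based continual learning, one of the standard technique families the paper surveys. The sketch below is illustrative only and not from the paper: all names (`ReplayBuffer`, `continual_update`) and the toy linear model are assumptions chosen to keep the example self-contained.

```python
import random

class ReplayBuffer:
    """Reservoir-sampled buffer that keeps a uniform sample of all past
    (x, y) examples seen in the stream, within a fixed memory budget."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Reservoir sampling: each past example survives with equal probability.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def continual_update(w, b, stream, buffer, lr=0.01, replay_k=4):
    """Online SGD on a toy 1-D linear model y = w*x + b. Each incoming
    example is mixed with a few replayed old ones, so adapting to new
    (possibly drifted) data does not erase what was learned earlier."""
    for x, y in stream:
        batch = [(x, y)] + buffer.sample(replay_k)
        for xi, yi in batch:
            err = (w * xi + b) - yi
            w -= lr * err * xi
            b -= lr * err
        buffer.add((x, y))
    return w, b
```

In a deployed setting, `stream` would be live accelerator diagnostics and the model a neural network, but the structure is the same: interleave fresh data with a bounded memory of the past at every update step.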

View on arXiv
@article{rajput2025_2504.03793,
  title={Outlook Towards Deployable Continual Learning for Particle Accelerators},
  author={Kishansingh Rajput and Sen Lin and Auralee Edelen and Willem Blokland and Malachi Schram},
  journal={arXiv preprint arXiv:2504.03793},
  year={2025}
}