
Weight Factorization and Centralization for Continual Learning in Speech Recognition

Main: 4 pages, Bibliography: 1 page, 1 figure, 2 tables
Abstract

Modern neural-network-based speech recognition models must continually absorb new data without retraining the whole system, especially in downstream applications built on foundation models, where the original training data are not accessible. Continually training the models in a rehearsal-free, multilingual, and language-agnostic setting is likely to cause catastrophic forgetting, in which a seemingly insignificant perturbation of the weights can severely degrade model quality. Inspired by the human brain's ability to learn and consolidate knowledge through the wake-sleep cycle, we propose a continual learning approach with two distinct phases: factorization, in which knowledge is learned, and centralization, in which it is merged. Our experiments on a sequence of varied code-switching datasets show that the centralization stage can effectively prevent catastrophic forgetting by accumulating the knowledge from multiple scattered low-rank adapters.
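
The paper itself does not include code; the following is only a minimal PyTorch sketch of the general factorization/centralization idea as it could be realized with LoRA-style low-rank adapters on a single linear layer. All class and method names (LowRankAdapter, FactorizedLinear, add_adapter, centralize) are hypothetical, and merging adapters by adding their scaled low-rank products into the frozen base weight is an assumption modeled on standard LoRA merging, not necessarily the authors' exact procedure.

```python
# Illustrative sketch only: two-phase continual learning with low-rank adapters.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """One low-rank factor pair (A, B) whose product perturbs a frozen weight."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def delta(self):
        # Low-rank weight update B @ A, scaled as in standard LoRA.
        return self.scale * (self.B @ self.A)


class FactorizedLinear(nn.Module):
    """Frozen base weight plus a collection of task-specific low-rank adapters."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base stays frozen during factorization
        self.adapters = nn.ModuleList()

    def add_adapter(self, rank=8):
        # Factorization phase: attach a fresh adapter for the new dataset/task.
        adapter = LowRankAdapter(self.base.in_features, self.base.out_features, rank)
        self.adapters.append(adapter)
        return adapter

    def forward(self, x):
        y = self.base(x)
        for adapter in self.adapters:
            y = y + x @ adapter.delta().T    # apply each low-rank correction
        return y

    @torch.no_grad()
    def centralize(self):
        # Centralization phase: fold the accumulated low-rank updates into the
        # base weight, consolidating knowledge, then drop the adapters.
        for adapter in self.adapters:
            self.base.weight += adapter.delta()
        self.adapters = nn.ModuleList()


# Hypothetical usage over a sequence of datasets:
layer = FactorizedLinear(nn.Linear(256, 256))
layer.add_adapter(rank=8)   # train only this adapter's parameters on the new data
layer.centralize()          # merge the learned update, then move to the next dataset
```

Note that folding the low-rank deltas into the base weight is algebraically equivalent to keeping them in the forward pass, so a merge step of this kind does not change the function being computed; it only consolidates the update and frees the adapter slots for subsequent data.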

@article{ugan2025_2506.16574,
  title={Weight Factorization and Centralization for Continual Learning in Speech Recognition},
  author={Enes Yavuz Ugan and Ngoc-Quan Pham and Alexander Waibel},
  journal={arXiv preprint arXiv:2506.16574},
  year={2025}
}