What Causes Knowledge Loss in Multilingual Language Models?

Cross-lingual transfer in natural language processing (NLP) models enhances multilingual performance by leveraging shared linguistic knowledge. However, traditional training that processes all data simultaneously fails to mirror real-world scenarios, where data arrives sequentially; sequential fine-tuning in turn leads to catastrophic forgetting, in which training on new tasks degrades performance on previously learned ones. Our study examines this issue in multilingual contexts, focusing on how linguistic differences affect representational learning rather than model parameters alone. We experiment with 52 languages using LoRA adapters of varying ranks to evaluate non-shared, partially shared, and fully shared parameters, asking whether parameter sharing through adapters can mitigate forgetting while preserving prior knowledge. We find that languages written in non-Latin scripts are more susceptible to catastrophic forgetting, whereas languages written in Latin script facilitate more effective cross-lingual transfer.
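The sketch below illustrates the kind of setup the abstract describes: attaching LoRA adapters of varying ranks to a frozen multilingual base model before sequential per-language fine-tuning. It is a minimal sketch assuming the Hugging Face `transformers` and `peft` libraries; the base model name, rank values, and target modules are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: LoRA adapters of varying ranks on a multilingual base model.
# Assumes Hugging Face `transformers` and `peft`; model name, ranks, and target
# modules are illustrative placeholders, not the paper's exact setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model_name = "bigscience/bloom-560m"  # hypothetical multilingual base model

# Lower ranks force more parameter sharing with the frozen base model;
# higher ranks allow more language-specific (non-shared) capacity.
for rank in (4, 16, 64):
    model = AutoModelForCausalLM.from_pretrained(base_model_name)
    lora_config = LoraConfig(
        r=rank,                               # adapter rank under evaluation
        lora_alpha=2 * rank,                  # common scaling heuristic
        target_modules=["query_key_value"],   # attention projections in BLOOM-style models
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
    # ... fine-tune sequentially on each language, then measure performance
    # on previously seen languages to quantify forgetting (not shown here).
```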
@article{khelli2025_2504.20356,
  title   = {What Causes Knowledge Loss in Multilingual Language Models?},
  author  = {Maria Khelli and Samuel Cahyawijaya and Ayu Purwarianti and Genta Indra Winata},
  journal = {arXiv preprint arXiv:2504.20356},
  year    = {2025}
}