
The Unreasonable Effectiveness of Model Merging for Cross-Lingual Transfer in LLMs

Main: 8 pages · 2 figures · 9 tables · Bibliography: 6 pages · Appendix: 4 pages
Abstract

Large language models (LLMs) still struggle on tasks outside of high-resource languages. In this work, we investigate cross-lingual transfer to lower-resource languages where task-specific post-training data is scarce. Building on prior work, we first validate that the subsets of model parameters that matter most for mathematical reasoning and for multilingual capabilities are distinctly non-overlapping. To exploit this implicit separability between task and target-language parameterization, we develop and analyze numerous modular frameworks that improve how the two are composed during fine-tuning. These methods generally rely on freezing parameters or on post hoc model merging to assign math and language improvements to different key parts of the LLM. In the absence of in-language math data, we demonstrate that the modular approaches successfully improve upon baselines across three languages, four models, and two fine-tuning paradigms (full and LoRA). Somewhat surprisingly, we find the most consistently successful modular method to be fine-tuning separate language and math experts and merging them via Layer-Swapping. We offer possible explanations for this result, drawing on recent work on the linearity of task vectors. We further support this by empirically showing that reverting less useful fine-tuning updates after training often outperforms freezing them from the start.
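Since the abstract singles out Layer-Swapping as the most consistent modular method, a minimal sketch of that merge may help make the idea concrete: two experts are fine-tuned from the same base checkpoint, and the outermost transformer layers of the math expert are replaced with the language expert's. The layer-selection heuristic, the `n_swap` parameter, and the `model.layers.{i}.` key convention below are illustrative assumptions (following common Hugging Face LLaMA-style checkpoints), not the paper's exact recipe.

```python
# Sketch of Layer-Swapping model merging, assuming two experts fine-tuned
# from the same base model so their state dicts share keys and shapes.
import re
import torch

def layer_swap_merge(math_state: dict, lang_state: dict,
                     num_layers: int, n_swap: int = 4) -> dict:
    """Start from the math expert and swap in the language expert's
    bottom `n_swap` and top `n_swap` transformer layers."""
    swapped = {i for i in range(n_swap)} | {num_layers - 1 - i for i in range(n_swap)}
    merged = {}
    for name, tensor in math_state.items():
        match = re.match(r"model\.layers\.(\d+)\.", name)
        if match and int(match.group(1)) in swapped:
            merged[name] = lang_state[name].clone()   # take the language expert's layer
        else:
            merged[name] = tensor.clone()             # keep the math expert's weights
    return merged
```

A usage pattern would be to load both experts' `state_dict()`s, call `layer_swap_merge`, and load the result back into the math expert before evaluating on in-language math benchmarks; which layers to swap and how many is the tunable part of the method.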

@article{bandarkar2025_2505.18356,
  title={The Unreasonable Effectiveness of Model Merging for Cross-Lingual Transfer in LLMs},
  author={Lucas Bandarkar and Nanyun Peng},
  journal={arXiv preprint arXiv:2505.18356},
  year={2025}
}