Prior research diverges on the role of language diversity in LLM fine-tuning: some studies report benefits, while others find no advantages. Through controlled fine-tuning experiments across 132 translation directions, we systematically resolve these discrepancies. We find that expanding language diversity during fine-tuning improves translation quality for both unsupervised and, surprisingly, supervised pairs, even though the less diverse models are fine-tuned exclusively on these supervised pairs. However, the benefits plateau or decrease beyond a certain diversity threshold. We show that increased language diversity creates more language-agnostic representations. These representational adaptations help explain the improved performance of models fine-tuned with greater diversity.
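The abstract does not specify how language-agnosticism of representations is measured. As a rough illustration only, the sketch below probes one common proxy: cosine similarity between mean-pooled hidden states of parallel sentences in different languages, which one would compare across models fine-tuned with differing language diversity. The model name, example sentences, and pooling choice are placeholder assumptions, not the authors' actual analysis.

import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder multilingual model; any multilingual LM checkpoint could be substituted.
model_name = "facebook/xglm-564M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pool the last hidden layer over non-padding tokens."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Parallel sentences (same meaning, different languages); illustrative examples.
en = sentence_embedding("The cat sleeps on the sofa.")
de = sentence_embedding("Die Katze schläft auf dem Sofa.")
fr = sentence_embedding("Le chat dort sur le canapé.")

# Higher cross-lingual similarity for translation pairs suggests more
# language-agnostic representations; the comparison of interest is how these
# scores change as fine-tuning language diversity increases.
print("en-de:", torch.nn.functional.cosine_similarity(en, de).item())
print("en-fr:", torch.nn.functional.cosine_similarity(en, fr).item())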
@article{stap2025_2505.13090,
  title   = {The Effect of Language Diversity When Fine-Tuning Large Language Models for Translation},
  author  = {David Stap and Christof Monz},
  journal = {arXiv preprint arXiv:2505.13090},
  year    = {2025}
}