The Effect of Language Diversity When Fine-Tuning Large Language Models for Translation

19 May 2025
David Stap
Christof Monz
Abstract

Prior research diverges on the role of language diversity in LLM fine-tuning: some studies report benefits, while others find no advantages. Through controlled fine-tuning experiments across 132 translation directions, we systematically resolve these disparities. We find that expanding language diversity during fine-tuning improves translation quality for both unsupervised and -- surprisingly -- supervised pairs, even though less diverse models are fine-tuned exclusively on these supervised pairs. However, benefits plateau or decrease beyond a certain diversity threshold. We show that increased language diversity yields more language-agnostic representations, and these representational adaptations help explain the improved performance of models fine-tuned with greater diversity.
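
As a concrete illustration of the setup described above, the sketch below constructs fine-tuning sets of increasing language diversity and measures cross-lingual representation similarity. It is a minimal sketch, not the authors' implementation: the language list, the prompt template, the parallel_data mapping, and the source of the sentence embeddings are all illustrative assumptions.

import itertools
import random

import numpy as np

LANGS = ["en", "de", "fr", "nl", "zh", "ru"]  # illustrative subset

def make_directions(langs):
    """All ordered translation directions among langs: n * (n - 1) pairs."""
    return list(itertools.permutations(langs, 2))

def build_finetune_set(parallel_data, n_langs, per_direction=1000, seed=0):
    """Sample a fine-tuning set restricted to the first n_langs languages.

    parallel_data[(src, tgt)] is assumed to hold (src_text, tgt_text) pairs.
    Increasing n_langs raises language diversity while the per-direction
    sample size stays fixed, mirroring a controlled comparison.
    """
    rng = random.Random(seed)
    examples = []
    for src, tgt in make_directions(LANGS[:n_langs]):
        pairs = parallel_data.get((src, tgt), [])
        for s, t in rng.sample(pairs, min(per_direction, len(pairs))):
            # Hypothetical instruction-style prompt; the real template may differ.
            examples.append({"prompt": f"Translate from {src} to {tgt}: {s}",
                             "target": t})
    rng.shuffle(examples)
    return examples

def language_agnosticity(embeddings):
    """Mean pairwise cosine similarity of aligned sentence embeddings.

    embeddings[lang] is assumed to be an (n_sentences, d) array of model
    hidden states for the same n sentences in each language; a higher mean
    similarity indicates more language-agnostic representations.
    """
    sims = []
    for a, b in itertools.combinations(sorted(embeddings), 2):
        ea, eb = embeddings[a], embeddings[b]
        num = (ea * eb).sum(axis=1)
        den = np.linalg.norm(ea, axis=1) * np.linalg.norm(eb, axis=1)
        sims.append(float((num / den).mean()))
    return float(np.mean(sims))

Note that the paper's 132 translation directions is consistent with all ordered pairs among 12 languages (12 × 11 = 132), so sweeping n_langs from 2 up to the full set yields a controlled-diversity comparison of the kind the abstract describes.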

View on arXiv: https://arxiv.org/abs/2505.13090
@article{stap2025_2505.13090,
  title={The Effect of Language Diversity When Fine-Tuning Large Language Models for Translation},
  author={David Stap and Christof Monz},
  journal={arXiv preprint arXiv:2505.13090},
  year={2025}
}