No Language Left Behind: Scaling Human-Centered Machine Translation

11 July 2022
NLLB Team
Marta R. Costa-jussà
James Cross
Onur Çelebi
Maha Elbayad
Kenneth Heafield
Kevin Heffernan
Elahe Kalbassi
Janice Lam
Daniel Licht
Jean Maillard
Anna Y. Sun
Skyler Wang
Guillaume Wenzek
Alison Youngblood
Bapi Akula
Loïc Barrault
Gabriel Mejia Gonzalez
Prangthip Hansanti
John Hoffman
Semarley Jarrett
Kaushik Ram Sadagopan
Dirk Rowe
Shannon L. Spruit
C. Tran
Pierre Yves Andrews
Necip Fazil Ayan
Shruti Bhosale
Sergey Edunov
Angela Fan
Cynthia Gao
Vedanuj Goswami
Francisco Guzmán
Philipp Koehn
Alexandre Mourachko
C. Ropers
Safiyyah Saleem
Holger Schwenk
Jeff Wang
Abstract

Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe, high-quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.
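The conditional-compute model mentioned in the abstract is built on a Sparsely Gated Mixture of Experts, in which a router sends each token to only a few expert feed-forward blocks. The sketch below illustrates that idea with a top-2-gated MoE layer in PyTorch; it is not the NLLB-200 implementation, and the class name, dimensions, and expert count are assumptions chosen for readability.

```python
# Minimal sketch of a Sparsely Gated Mixture-of-Experts feed-forward layer
# with top-2 routing (illustrative only; not the NLLB-200 code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top2MoELayer(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, num_experts=8):
        super().__init__()
        # Router that scores each token against every expert.
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)         # (tokens, experts)
        top2_scores, top2_idx = scores.topk(2, dim=-1)   # route each token to its 2 best experts
        out = torch.zeros_like(x)
        for slot in range(2):
            idx = top2_idx[:, slot]
            weight = top2_scores[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    # Only the tokens routed to expert e pass through it,
                    # so compute per token stays fixed as experts are added.
                    out[mask] += weight[mask] * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = Top2MoELayer()
    tokens = torch.randn(16, 1024)
    print(layer(tokens).shape)  # torch.Size([16, 1024])
```

The design point this illustrates is that total parameter count grows with the number of experts while the per-token cost stays roughly constant, which is what lets a single model cover many translation directions without a proportional increase in inference compute.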
