AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages

16 November 2023
Jiayi Wang
David Ifeoluwa Adelani
Sweta Agrawal
Marek Masiak
Ricardo Rei
Eleftheria Briakou
Marine Carpuat
Xuanli He
Sofia Bourhim
Andiswa Bukula
Muhidin A. Mohamed
Temitayo Olatoye
Tosin P. Adewumi
Hamam Mokayede
Christine Mwase
Wangui Kimotho
Foutse Yuehgoh
Anuoluwapo Aremu
Jessica Ojo
Shamsuddeen Hassan Muhammad
Salomey Osei
Abdul-Hakeem Omotayo
Chiamaka Chukwuneke
Perez Ogayo
Oumaima Hourrane
Salma El Anigri
Lolwethu Ndolela
Thabiso Mangwana
Shafie Abdi Mohamed
Ayinde Hassan
Oluwabusayo Olufunke Awoyomi
Lama Alkhaled
Sana Al-Azzawi
Naome A. Etori
Millicent Ochieng
Clemencia Siro
Samuel Njoroge
Eric Muchiri
Wangari Kimotho
Lyse Naomi Wamba Momo
Daud Abolade
Simbiat Ajao
Iyanuoluwa Shode
Ricky Macharm
R. Iro
S. S. Abdullahi
Stephen E. Moore
Bernard Opoku
Zainab Akinjobi
Abeeb Afolabi
Nnaemeka Obiefuna
Onyekachi Raphael Ogbu
Sam Brian
V. Otiende
C. Mbonu
Sakayo Toadoum Sari
Yao Lu
Pontus Stenetorp
Abstract

Despite recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed with n-gram matching metrics such as BLEU, which typically show weaker correlation with human judgments. Learned metrics such as COMET correlate more strongly; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines such as Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET, a suite of COMET evaluation metrics for African languages, by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R). The resulting metrics set the state of the art for MT evaluation of African languages with respect to Spearman-rank correlation with human judgments (0.441).
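As a rough illustration of the workflow the abstract describes, the sketch below scores MT hypotheses with a COMET-style model via the unbabel-comet library and then meta-evaluates the metric by computing the Spearman-rank correlation against human DA ratings. The checkpoint identifier and all data in the example are placeholders assumed for illustration, not taken from the paper.

```python
# Minimal sketch of COMET-style MT evaluation plus Spearman meta-evaluation.
# Assumes: pip install unbabel-comet scipy
# The checkpoint name below is a hypothetical stand-in; substitute the
# actual released AfriCOMET model identifier.
from comet import download_model, load_from_checkpoint
from scipy.stats import spearmanr

model_path = download_model("masakhane/africomet-stl")  # assumed checkpoint name
model = load_from_checkpoint(model_path)

# Toy source / hypothesis / reference triples (illustrative only).
data = [
    {"src": "Good morning.", "mt": "Good day.", "ref": "Good morning."},
    {"src": "How are you?", "mt": "Where are you?", "ref": "How are you?"},
]

# Segment-level quality scores from the learned metric.
segment_scores = model.predict(data, batch_size=8, gpus=0).scores

# Meta-evaluation: rank correlation between metric scores and human
# direct assessment (DA) ratings, the statistic the paper reports.
human_da = [0.85, 0.30]  # placeholder human ratings
rho, _ = spearmanr(segment_scores, human_da)
print(f"Spearman rho = {rho:.3f}")
```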
