Y-NQ: English-Yorùbá Evaluation dataset for Open-Book Reading Comprehension and Text Generation

11 December 2024
Marta R. Costa-jussà
Joy Chen
Ifeoluwanimi Adebara
Joe Chuang
C. Ropers
Eduardo Sánchez
Abstract

The purpose of this work is to share an English-Yorùbá evaluation dataset for open-book reading comprehension and text generation, to assess the performance of models in both a high- and a low-resource language. The dataset contains 358 questions and answers on 338 English documents and 208 Yorùbá documents. The average document length is ~10k words for English and 430 words for Yorùbá. Experiments show a consistent disparity in performance between the two languages, with Yorùbá falling behind English on automatic metrics even though documents are much shorter in this language. For a small set of documents of comparable length, Yorùbá performance drops by a factor of 2.5. When analyzing performance by length, we observe that Yorùbá performance decreases dramatically for documents that reach 1,500 words, while English performance is barely affected at that length. Our dataset opens the door to showing whether English LLM reading comprehension capabilities extend to Yorùbá, which is not the case for the evaluated LLMs.
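The by-length analysis described in the abstract can be sketched as a simple bucketing of per-document metric scores. A minimal illustration follows; the function name, bucket size, and all score values are hypothetical and not taken from the paper.

```python
from collections import defaultdict

def mean_score_by_length(results, bucket_size=500):
    """Average an automatic metric over document-length buckets.

    results: iterable of (doc_word_count, score) pairs.
    Returns {bucket_lower_bound: mean_score}, sorted by bucket.
    """
    buckets = defaultdict(list)
    for length, score in results:
        # Group each document into a length bucket, e.g. 0-499, 500-999, ...
        buckets[(length // bucket_size) * bucket_size].append(score)
    return {lo: sum(s) / len(s) for lo, s in sorted(buckets.items())}

# Hypothetical scores illustrating the reported pattern: the low-resource
# language degrades sharply past ~1,500 words, the high-resource one barely.
english = [(400, 0.52), (1600, 0.50), (2100, 0.49)]
yoruba = [(400, 0.30), (1600, 0.12), (2100, 0.10)]
print(mean_score_by_length(english))
print(mean_score_by_length(yoruba))
```

Plotting the resulting per-bucket means against bucket lower bounds is one straightforward way to visualize the length-dependent gap the abstract describes.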
