REPA: Russian Error Types Annotation for Evaluating Text Generation and Judgment Capabilities

17 March 2025
Alexander Pugachev
Alena Fenogenova
Vladislav Mikhailov
Ekaterina Artemova
Abstract

Recent advances in large language models (LLMs) have introduced the novel paradigm of using LLMs as judges, where an LLM evaluates and scores the outputs of another LLM; these judgments often correlate highly with human preferences. However, the LLM-as-a-judge framework has been studied primarily in English. In this paper, we evaluate this framework in Russian by introducing the Russian Error tyPes Annotation dataset (REPA), a dataset of 1k user queries and 2k LLM-generated responses. Human annotators labeled each response pair, expressing their preferences across ten specific error types and selecting an overall preference. We rank six generative LLMs across the error types using three rating systems based on human preferences. We also evaluate the responses using eight LLM judges in zero-shot and few-shot settings and analyze the judges' position and length biases. Our findings reveal a notable gap between LLM judge performance in Russian and English. However, rankings based on human and LLM preferences show partial alignment, suggesting that while current LLM judges struggle with fine-grained evaluation in Russian, there is potential for improvement.
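
The abstract mentions ranking models with rating systems derived from pairwise human preferences, but does not name the systems here. The sketch below is a minimal illustration of one common choice, an Elo-style rating computed from pairwise preference judgments; the model names and preference data are hypothetical and are not from the paper.

```python
# Illustrative sketch (not the paper's implementation): ranking models from
# pairwise preference judgments with an Elo-style rating system.
from collections import defaultdict

K = 32                 # Elo update step size
INITIAL_RATING = 1000.0

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A is preferred over model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_rankings(preferences):
    """preferences: iterable of (model_a, model_b, outcome), where outcome is
    1.0 if A's response was preferred, 0.0 if B's was, and 0.5 for a tie."""
    ratings = defaultdict(lambda: INITIAL_RATING)
    for model_a, model_b, outcome in preferences:
        exp_a = expected_score(ratings[model_a], ratings[model_b])
        ratings[model_a] += K * (outcome - exp_a)
        ratings[model_b] += K * ((1.0 - outcome) - (1.0 - exp_a))
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Toy pairwise preferences (hypothetical), e.g. collected per error type.
    toy_preferences = [
        ("model_x", "model_y", 1.0),
        ("model_y", "model_z", 0.5),
        ("model_x", "model_z", 1.0),
    ]
    for model, rating in elo_rankings(toy_preferences):
        print(f"{model}: {rating:.1f}")
```

The same per-pair preference labels could in principle be aggregated per error type, yielding one ranking per error category as well as an overall ranking.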

@article{pugachev2025_2503.13102,
  title={REPA: Russian Error Types Annotation for Evaluating Text Generation and Judgment Capabilities},
  author={Alexander Pugachev and Alena Fenogenova and Vladislav Mikhailov and Ekaterina Artemova},
  journal={arXiv preprint arXiv:2503.13102},
  year={2025}
}