Judging the Judges: A Collection of LLM-Generated Relevance Judgements

20 February 2025
Hossein A. Rahmani
Clemencia Siro
Mohammad Aliannejadi
Nick Craswell
Charles L. A. Clarke
Guglielmo Faggioli
Bhaskar Mitra
Paul Thomas
Emine Yilmaz
Abstract

Using Large Language Models (LLMs) for relevance assessments offers promising opportunities to improve Information Retrieval (IR), Natural Language Processing (NLP), and related fields. Indeed, LLMs hold the promise of allowing IR experimenters to build evaluation collections with a fraction of the manual human labor currently required. This could help with fresh topics on which there is still limited knowledge and could mitigate the challenges of evaluating ranking systems in low-resource scenarios, where it is challenging to find human annotators. Given the fast-paced recent developments in the domain, many questions concerning LLMs as assessors are yet to be answered. Among the aspects that require further investigation are the impact of various components in a relevance judgment generation pipeline, such as the prompt used or the LLM chosen. This paper benchmarks and reports on the results of a large-scale automatic relevance judgment evaluation, the LLMJudge challenge at SIGIR 2024, where different relevance assessment approaches were proposed. Specifically, we release and benchmark 42 sets of LLM-generated labels for the TREC 2023 Deep Learning track relevance judgments, produced by eight international teams who participated in the challenge. Given their diverse nature, these automatically generated relevance judgments can help the community not only investigate systematic biases caused by LLMs but also explore the effectiveness of ensemble models, analyze the trade-offs between different models and human assessors, and advance methodologies for improving automated evaluation techniques. The released resource is available at the following link: this https URL
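One natural use of the released label sets is to measure how closely each LLM-generated judgment set agrees with the official human qrels. The sketch below is a minimal illustration of that comparison, assuming both files follow the standard TREC qrels format ("topic iteration docid grade"); the file paths are hypothetical placeholders, not the documented layout of the released resource.

# Minimal sketch: agreement between one LLM-generated qrels file and the
# human TREC 2023 Deep Learning qrels. Paths are hypothetical; both files
# are assumed to be in standard TREC qrels format: topic iter docid grade.
from sklearn.metrics import cohen_kappa_score

def load_qrels(path):
    """Read a TREC-format qrels file into {(topic, docid): grade}."""
    qrels = {}
    with open(path) as f:
        for line in f:
            topic, _, docid, grade = line.split()
            qrels[(topic, docid)] = int(grade)
    return qrels

human = load_qrels("qrels.dl23.human.txt")    # hypothetical path
llm   = load_qrels("qrels.dl23.llm-team.txt") # hypothetical path

# Compare only the (topic, docid) pairs judged in both files.
shared = sorted(set(human) & set(llm))
h = [human[k] for k in shared]
g = [llm[k] for k in shared]

# Linear-weighted kappa respects the ordinal 0-3 relevance scale.
print("weighted Cohen's kappa:", cohen_kappa_score(h, g, weights="linear"))

The same loop over all 42 released label sets would give a per-team agreement profile; swapping kappa for a rank correlation over system scores is the usual next step when the question is whether LLM labels preserve system orderings rather than per-document grades.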

@article{rahmani2025_2502.13908,
  title={Judging the Judges: A Collection of LLM-Generated Relevance Judgements},
  author={Hossein A. Rahmani and Clemencia Siro and Mohammad Aliannejadi and Nick Craswell and Charles L. A. Clarke and Guglielmo Faggioli and Bhaskar Mitra and Paul Thomas and Emine Yilmaz},
  journal={arXiv preprint arXiv:2502.13908},
  year={2025}
}