Towards Understanding Bias in Synthetic Data for Evaluation

12 June 2025
Hossein A. Rahmani, Varsha Ramineni, Nick Craswell, Bhaskar Mitra, Emine Yilmaz
Main: 4 pages · 3 figures · 2 tables · Bibliography: 1 page
Abstract

Test collections are crucial for evaluating Information Retrieval (IR) systems. Creating a diverse set of user queries for these collections can be challenging, and obtaining relevance judgments, which indicate how well retrieved documents match a query, is often costly and resource-intensive. Recently, generating synthetic datasets using Large Language Models (LLMs) has gained attention in various applications. While previous work has used LLMs to generate synthetic queries or documents to improve ranking models, using LLMs to create synthetic test collections remains relatively unexplored. Previous work (Rahmani et al., 2024) showed that synthetic test collections have the potential to be used for system evaluation; however, more analysis is needed to validate this claim. In this paper, we thoroughly investigate the reliability of synthetic test collections constructed using LLMs, where LLMs are used to generate synthetic queries, labels, or both. In particular, we examine the potential biases that might arise when such test collections are used for evaluation. We first empirically show the presence of such bias in evaluation results and analyse the effects it might have on system evaluation. We then validate the presence of this bias using a linear mixed-effects model. Our analysis shows that while the bias present in evaluation results obtained using synthetic test collections can be significant, e.g., when computing absolute system performance, its effect may be much less pronounced when comparing relative system performance. Code and data are available at: this https URL.
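The kind of linear mixed-effects analysis mentioned in the abstract can be illustrated with a small sketch. The snippet below is not the authors' code: the data, column names, and effect sizes are hypothetical. It fits a model with fixed effects for system and label source and a random intercept per query using statsmodels, then compares system rankings under human and synthetic labels with Kendall's tau.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Hypothetical mean nDCG for each (system, label source); all numbers are toy values.
base = {
    ("A", "human"): 0.60, ("B", "human"): 0.55, ("C", "human"): 0.45,
    ("A", "synthetic"): 0.68, ("B", "synthetic"): 0.64, ("C", "synthetic"): 0.52,
}
queries = [f"q{i}" for i in range(1, 11)]
query_effect = {q: rng.normal(0, 0.05) for q in queries}

# One row per (system, label source, query) evaluation score, with query-level
# variation and a little noise.
records = [
    {"system": s, "labels": l, "query": q,
     "ndcg": base[(s, l)] + query_effect[q] + rng.normal(0, 0.02)}
    for (s, l) in base for q in queries
]
data = pd.DataFrame(records)

# Fixed effects for system and label source, random intercept per query.
# A significant coefficient on the synthetic label term would indicate a
# systematic shift (bias) in absolute scores under the synthetic judgments.
model = smf.mixedlm("ndcg ~ C(system) + C(labels)", data, groups=data["query"])
print(model.fit().summary())

# Relative view: even when absolute scores are shifted, the ranking of systems
# under synthetic labels may still agree with the ranking under human labels.
means = data.groupby(["labels", "system"])["ndcg"].mean().unstack()
tau, p = kendalltau(means.loc["human"], means.loc["synthetic"])
print(f"Kendall's tau between system rankings: {tau:.2f} (p = {p:.3f})")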

@article{rahmani2025_2506.10301,
  title={Towards Understanding Bias in Synthetic Data for Evaluation},
  author={Hossein A. Rahmani and Varsha Ramineni and Nick Craswell and Bhaskar Mitra and Emine Yilmaz},
  journal={arXiv preprint arXiv:2506.10301},
  year={2025}
}