ResearchTrend.AI

Evaluation Should Not Ignore Variation: On the Impact of Reference Set Choice on Summarization Metrics

17 June 2025
Silvia Casola
Yang Liu
Siyao Peng
Oliver Kraus
Albert Gatt
Barbara Plank
arXiv (abs) | PDF | HTML
Main: 2 pages · 15 figures · Bibliography: 3 pages · 5 tables · Appendix: 12 pages
Abstract

Human language production exhibits remarkable richness and variation, reflecting diverse communication styles and intents. However, this variation is often overlooked in summarization evaluation. While having multiple reference summaries is known to improve correlation with human judgments, the impact of using different reference sets on reference-based metrics has not been systematically investigated. This work examines the sensitivity of widely used reference-based metrics to the choice of reference set, analyzing three diverse multi-reference summarization datasets: SummEval, GUMSum, and DUC2004. We demonstrate that many popular metrics exhibit significant instability. This instability is particularly concerning for n-gram-based metrics like ROUGE, where model rankings vary depending on the reference set, undermining the reliability of model comparisons. We also collect human judgments on LLM outputs for genre-diverse data and examine their correlation with metrics, supplementing existing findings beyond newswire summaries, and find weak to no correlation. Taken together, we recommend incorporating reference set variation into summarization evaluation to assess consistency alongside correlation with human judgments, especially when evaluating LLMs.
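The instability the abstract describes is easy to reproduce in miniature. The sketch below (not the paper's code; a pure-Python unigram ROUGE-1 F1 with the common max-over-references convention, and with made-up example sentences) shows how the same candidate summary can receive very different scores depending on which reference set it is scored against:

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between one reference and a candidate."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def multi_ref_rouge1(references, candidate):
    """Common multi-reference convention: take the max over references."""
    return max(rouge1_f(r, candidate) for r in references)

# Hypothetical example: one candidate, two equally valid reference sets
# that happen to use different wording.
candidate = "the team won the final match"
refs_a = ["the team won the final",
          "a decisive win in the final match"]
refs_b = ["victory was secured by the squad",
          "the squad triumphed at last"]

score_a = multi_ref_rouge1(refs_a, candidate)  # high lexical overlap
score_b = multi_ref_rouge1(refs_b, candidate)  # same meaning, low overlap
```

Here `score_a` is far higher than `score_b` even though both reference sets paraphrase the same content, illustrating why n-gram-based rankings can flip when the reference set changes.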

@article{casola2025_2506.14335,
  title={Evaluation Should Not Ignore Variation: On the Impact of Reference Set Choice on Summarization Metrics},
  author={Silvia Casola and Yang Janet Liu and Siyao Peng and Oliver Kraus and Albert Gatt and Barbara Plank},
  journal={arXiv preprint arXiv:2506.14335},
  year={2025}
}