ResearchTrend.AI
Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages

31 May 2025
Hyangsuk Min
Yuho Lee
Minjeong Ban
Jiaqi Deng
Nicole Hee-Yeon Kim
Taewon Yun
Hang Su
Jason (Jinglun) Cai
Hwanjun Song
Main: 8 pages · 6 figures · Bibliography: 3 pages · 32 tables · Appendix: 23 pages
Abstract

Evaluation frameworks for text summarization have evolved in both domain coverage and metrics. However, existing benchmarks still lack domain-specific assessment criteria, remain predominantly English-centric, and are difficult to annotate reliably by humans due to the complexity of the reasoning involved. To address these limitations, we introduce MSumBench, which provides a multi-dimensional, multi-domain evaluation of summarization in English and Chinese. It also incorporates specialized assessment criteria for each domain and leverages a multi-agent debate system to enhance annotation quality. By evaluating eight modern summarization models, we discover distinct performance patterns across domains and languages. We further examine large language models as summary evaluators, analyzing the correlation between their evaluation and summarization capabilities and uncovering systematic bias in their assessment of self-generated summaries. Our benchmark dataset is publicly available at this https URL.

@article{min2025_2506.00549,
  title={Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages},
  author={Hyangsuk Min and Yuho Lee and Minjeong Ban and Jiaqi Deng and Nicole Hee-Yeon Kim and Taewon Yun and Hang Su and Jason Cai and Hwanjun Song},
  journal={arXiv preprint arXiv:2506.00549},
  year={2025}
}