CiteEval: Principle-Driven Citation Evaluation for Source Attribution

Citation quality is crucial in information-seeking systems, directly influencing trust and the effectiveness of information access. Current evaluation frameworks, both human and automatic, mainly rely on Natural Language Inference (NLI) to assess binary or ternary supportiveness from cited sources, which we argue is a suboptimal proxy for citation evaluation. In this work, we introduce CiteEval, a principle-driven citation evaluation framework focused on fine-grained citation assessment within a broad context, encompassing not only the cited sources but also the full retrieval context, user query, and generated text. Guided by the proposed framework, we construct CiteBench, a multi-domain benchmark with high-quality human annotations on citation quality. To enable efficient evaluation, we further develop CiteEval-Auto, a suite of model-based metrics that exhibit strong correlation with human judgments. Experiments across diverse systems demonstrate CiteEval-Auto's superior ability to capture the multifaceted nature of citations compared to existing metrics, offering a principled and scalable approach to evaluate and improve model-generated citations.
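To make the NLI-based paradigm the abstract critiques concrete, the following minimal Python sketch shows a binary supportiveness check between a cited source and a generated claim. The model choice (roberta-large-mnli) and the entailment-as-support decision rule are illustrative assumptions for exposition, not CiteEval's method; note that such a check sees only the (source, claim) pair and ignores the user query and the broader retrieval context.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Off-the-shelf NLI model (assumption for illustration); its labels are
# CONTRADICTION / NEUTRAL / ENTAILMENT.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def supports(cited_source: str, generated_claim: str) -> bool:
    """Binary supportiveness: does the cited source entail the claim?"""
    inputs = tokenizer(cited_source, generated_claim,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax(dim=-1))]
    return label == "ENTAILMENT"

# Example usage: only the cited passage and the claim are compared.
print(supports("The Eiffel Tower is 330 metres tall.",
               "The tower stands about 330 m high."))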
@article{xu2025_2506.01829,
  title={CiteEval: Principle-Driven Citation Evaluation for Source Attribution},
  author={Yumo Xu and Peng Qi and Jifan Chen and Kunlun Liu and Rujun Han and Lan Liu and Bonan Min and Vittorio Castelli and Arshit Gupta and Zhiguo Wang},
  journal={arXiv preprint arXiv:2506.01829},
  year={2025}
}