ResearchTrend.AI

Long-Form Information Alignment Evaluation Beyond Atomic Facts

21 May 2025
Danna Zheng
Mirella Lapata
Jeff Z. Pan
Abstract

Information alignment evaluators are vital for various NLG evaluation tasks and trustworthy LLM deployment, reducing hallucinations and enhancing user trust. Current fine-grained methods, like FactScore, verify facts individually but neglect inter-fact dependencies, enabling subtle vulnerabilities. In this work, we introduce MontageLie, a challenging benchmark that constructs deceptive narratives by "montaging" truthful statements without introducing explicit hallucinations. We demonstrate that both coarse-grained LLM-based evaluators and current fine-grained frameworks are susceptible to this attack, with AUC-ROC scores falling below 65%. To enable more robust fine-grained evaluation, we propose DoveScore, a novel framework that jointly verifies factual accuracy and event-order consistency. By modeling inter-fact relationships, DoveScore outperforms existing fine-grained methods by over 8%, providing a more robust solution for long-form text alignment evaluation. Our code and datasets are available at this https URL.
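The core idea the abstract describes can be illustrated with a minimal sketch: score a claim both on whether its atomic facts are supported and on whether shared events appear in the same relative order as the source. Everything below is hypothetical, not the paper's actual implementation: facts are assumed pre-extracted, support is judged by simple containment rather than an LLM verifier, and the equal-weight aggregation is an assumption.

```python
def fact_score(claim_facts, source_facts):
    """Fraction of claim facts supported by the source (atomic check)."""
    if not claim_facts:
        return 1.0
    return sum(f in source_facts for f in claim_facts) / len(claim_facts)

def order_score(claim_events, source_events):
    """Fraction of shared event pairs that keep the source's relative order."""
    shared = [e for e in claim_events if e in source_events]
    pairs = [(a, b) for i, a in enumerate(shared) for b in shared[i + 1:]]
    if not pairs:
        return 1.0
    src_pos = {e: i for i, e in enumerate(source_events)}
    return sum(src_pos[a] < src_pos[b] for a, b in pairs) / len(pairs)

def dove_style_score(claim_facts, claim_events, source_facts, source_events):
    # Equal weights are an assumption; the paper's aggregation may differ.
    return 0.5 * fact_score(claim_facts, source_facts) \
         + 0.5 * order_score(claim_events, source_events)

source = ["A founded the lab", "B joined the lab", "the lab published X"]
# A "montaged" claim: every fact is individually true, but the reordering
# implies a false narrative, so an atomic fact-checker alone scores it 1.0.
claim = ["the lab published X", "A founded the lab", "B joined the lab"]
print(fact_score(claim, source))                      # atomic check passes
print(dove_style_score(claim, claim, source, source)) # order check penalizes
```

This shows why purely atomic verification is blind to montage-style attacks: the fact score is perfect, and only the order-consistency term exposes the manipulation.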

View on arXiv
@article{zheng2025_2505.15792,
  title={Long-Form Information Alignment Evaluation Beyond Atomic Facts},
  author={Danna Zheng and Mirella Lapata and Jeff Z. Pan},
  journal={arXiv preprint arXiv:2505.15792},
  year={2025}
}