
Towards Effective Extraction and Evaluation of Factual Claims

Main: 9 pages · Bibliography: 4 pages · Appendix: 37 pages · 3 figures · 12 tables
Abstract

A common strategy for fact-checking long-form content generated by Large Language Models (LLMs) is extracting simple claims that can be verified independently. Since inaccurate or incomplete claims compromise fact-checking results, ensuring claim quality is critical. However, the lack of a standardized evaluation framework impedes assessment and comparison of claim extraction methods. To address this gap, we propose a framework for evaluating claim extraction in the context of fact-checking along with automated, scalable, and replicable methods for applying this framework, including novel approaches for measuring coverage and decontextualization. We also introduce Claimify, an LLM-based claim extraction method, and demonstrate that it outperforms existing methods under our evaluation framework. A key feature of Claimify is its ability to handle ambiguity and extract claims only when there is high confidence in the correct interpretation of the source text.
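The abstract's key idea can be illustrated with a toy sketch: an extraction step that emits independently verifiable claims only when the source sentence has a single plausible interpretation, and abstains otherwise. This is a minimal, hypothetical illustration of the abstention policy described above, not the paper's actual Claimify implementation; the names `Claim` and `extract_claims` are invented for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str          # self-contained, decontextualized claim
    source_span: str   # sentence the claim was extracted from

def extract_claims(sentence: str, interpretations: list[str]) -> Optional[list[Claim]]:
    """Toy policy: emit claims only when the sentence admits exactly one
    plausible interpretation; otherwise abstain (return None)."""
    if len(interpretations) != 1:
        return None  # ambiguous source text -> abstain rather than guess
    return [Claim(text=interpretations[0], source_span=sentence)]

# Unambiguous sentence: one claim is extracted.
claims = extract_claims(
    "The Eiffel Tower is in Paris.",
    ["The Eiffel Tower is located in Paris."],
)

# Ambiguous sentence (unresolved referent): the extractor abstains.
ambiguous = extract_claims(
    "They increased revenue by 20%.",
    ["Company A increased revenue by 20%.", "Company B increased revenue by 20%."],
)
```

In a real system the candidate interpretations would come from an LLM rather than being supplied by hand; the point of the sketch is only the high-confidence gate, which trades recall for precision on ambiguous inputs.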

@article{metropolitansky2025_2502.10855,
  title={Towards Effective Extraction and Evaluation of Factual Claims},
  author={Dasha Metropolitansky and Jonathan Larson},
  journal={arXiv preprint arXiv:2502.10855},
  year={2025}
}
