CORRECT: Context- and Reference-Augmented Reasoning and Prompting for Fact-Checking

Abstract

Fact-checking the truthfulness of claims usually requires reasoning over multiple evidence sentences. Evidence sentences, however, are not always self-contained: they may require additional context and references from elsewhere to resolve coreferential expressions, expand acronyms, and determine the scope of a reported finding. For example, evidence sentences from an academic paper may need contextual sentences in the paper and descriptions in its cited papers to determine the scope of a research discovery. However, most fact-checking models focus mainly on reasoning within the evidence sentences themselves and ignore this auxiliary context and these references. To address this problem, we propose a novel method, Context- and Reference-augmented Reasoning and Prompting (CORRECT). For evidence reasoning, we construct a three-layer evidence graph with evidence, context, and reference layers, and we design intra- and cross-layer reasoning to integrate the three layers into a unified evidence embedding. For verdict prediction, we design an evidence-conditioned prompt encoder that produces a unique prompt embedding for each claim. The evidence-conditioned prompt embedding and the claim are then unified for fact-checking. Experiments verify the strength of our model.
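The abstract only sketches the architecture, but the overall flow can be illustrated with a toy NumPy mock-up. Everything below is an assumption for illustration: the layer sizes, the fully connected intra-layer adjacency, the mean-aggregation message passing, and the linear prompt map are all hypothetical stand-ins, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (assumed for illustration)

# Toy node embeddings for the three graph layers: evidence, context, reference.
layers = {
    "evidence":  rng.normal(size=(3, D)),
    "context":   rng.normal(size=(4, D)),
    "reference": rng.normal(size=(2, D)),
}

def intra_layer(nodes, adj):
    """One round of mean-aggregation message passing within a single layer."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    return 0.5 * nodes + 0.5 * (adj @ nodes) / deg

def cross_layer(target, source, links):
    """Fuse information into `target` nodes from a neighboring layer via links."""
    deg = links.sum(axis=1, keepdims=True) + 1e-9
    return target + (links @ source) / deg

# Fully connected intra-layer adjacency (toy assumption).
adj = {k: np.ones((v.shape[0], v.shape[0])) - np.eye(v.shape[0])
       for k, v in layers.items()}

# Intra-layer reasoning on each of the three layers.
h = {k: intra_layer(v, adj[k]) for k, v in layers.items()}

# Cross-layer reasoning: pull context and reference information into evidence.
ctx_links = np.ones((3, 4))  # hypothetical links: each evidence node sees all context nodes
ref_links = np.ones((3, 2))  # hypothetical links to reference nodes
ev = cross_layer(h["evidence"], h["context"], ctx_links)
ev = cross_layer(ev, h["reference"], ref_links)

# Unified evidence embedding via mean pooling over evidence nodes.
evidence_emb = ev.mean(axis=0)

# Evidence-conditioned prompt: here, a simple learned linear map (toy stand-in
# for the paper's prompt encoder).
W_prompt = rng.normal(size=(D, D)) * 0.1
prompt_emb = np.tanh(evidence_emb @ W_prompt)

# Unify prompt and claim embeddings into a verdict score (toy classifier head).
claim_emb = rng.normal(size=(D,))
score = float(np.dot(prompt_emb, claim_emb))
```

The point of the sketch is the data flow, not the operators: each layer is refined internally, context and reference layers are folded into the evidence layer, and the pooled evidence embedding conditions the prompt used for verdict prediction.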

@article{zhang2025_2502.09635,
  title={CORRECT: Context- and Reference-Augmented Reasoning and Prompting for Fact-Checking},
  author={Delvin Ce Zhang and Dongwon Lee},
  journal={arXiv preprint arXiv:2502.09635},
  year={2025}
}