LLM-based Corroborating and Refuting Evidence Retrieval for Scientific Claim Verification
In this paper, we introduce CIBER (Claim Investigation Based on Evidence Retrieval), an extension of the Retrieval-Augmented Generation (RAG) framework designed to identify corroborating and refuting documents as evidence for scientific claim verification. CIBER addresses the inherent uncertainty in Large Language Models (LLMs) by evaluating response consistency across diverse interrogation probes. By focusing on the behavioral analysis of LLMs without requiring access to their internal information, CIBER is applicable to both white-box and black-box models. Furthermore, CIBER operates in an unsupervised manner, enabling easy generalization across various scientific domains. Comprehensive evaluations conducted using LLMs with varying levels of linguistic proficiency reveal CIBER's superior performance compared to conventional RAG approaches. These findings not only highlight the effectiveness of CIBER but also provide valuable insights for future advancements in LLM-based scientific claim verification.
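The core idea of evaluating response consistency across diverse interrogation probes can be sketched as follows. This is a minimal illustration, not the authors' implementation: `query_llm` is a toy stand-in for a real model call, and the probe templates and majority-vote aggregation are assumptions for demonstration purposes.

```python
from collections import Counter

def query_llm(probe: str) -> str:
    """Toy stand-in: a real system would call an LLM API here."""
    # Pretend the model supports claims mentioning "water boils at 100C".
    return "SUPPORT" if "water boils at 100C" in probe else "REFUTE"

def verify_claim(claim: str, evidence: str) -> str:
    """Classify a claim by majority stance across paraphrased probes.

    Inconsistent answers across probes signal model uncertainty,
    in which case we abstain with NOT_ENOUGH_INFO.
    """
    # Hypothetical probe templates; the paper's actual probes differ.
    templates = [
        "Given: {e}. Is it true that {c}?",
        "Evidence: {e}. Does this document support the claim '{c}'?",
        "Based on {e}, assess whether the following holds: {c}.",
    ]
    stances = [query_llm(t.format(e=evidence, c=claim)) for t in templates]
    stance, votes = Counter(stances).most_common(1)[0]
    return stance if votes > len(templates) // 2 else "NOT_ENOUGH_INFO"

print(verify_claim("water boils at 100C", "thermometry measurements"))
```

Because the aggregation relies only on the model's observable answers, the same procedure applies unchanged to black-box models where logits and internals are unavailable.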
@article{wang2025_2503.07937,
  title={LLM-based Corroborating and Refuting Evidence Retrieval for Scientific Claim Verification},
  author={Siyuan Wang and James R. Foulds and Md Osman Gani and Shimei Pan},
  journal={arXiv preprint arXiv:2503.07937},
  year={2025}
}