
Atomic Reasoning for Scientific Table Claim Verification

Main: 7 pages, 15 figures, 7 tables; Bibliography: 4 pages; Appendix: 8 pages
Abstract

Scientific texts often convey authority through their technical language and complex data, yet this very complexity can enable the spread of misinformation. Non-experts are particularly susceptible to misleading claims based on scientific tables, owing to the tables' high information density and perceived credibility. Existing table claim verification models, including state-of-the-art large language models (LLMs), often struggle with fine-grained reasoning, leading to errors when verifying scientific claims. Inspired by Cognitive Load Theory, we propose that a model's ability to interpret table-based claims can be improved by reducing cognitive load: decomposing reasoning into modular, reusable components (i.e., atomic skills). We introduce a skill-chaining schema that dynamically composes these skills to support more accurate and generalizable reasoning at lower cognitive load. To evaluate this, we create SciAtomicBench, a cross-domain benchmark with fine-grained reasoning annotations. With only 350 fine-tuning examples, our model trained with atomic reasoning outperforms GPT-4o's chain-of-thought method, achieving state-of-the-art results with far less training data.

@article{zhang2025_2506.06972,
  title={Atomic Reasoning for Scientific Table Claim Verification},
  author={Yuji Zhang and Qingyun Wang and Cheng Qian and Jiateng Liu and Chenkai Sun and Denghui Zhang and Tarek Abdelzaher and Chengxiang Zhai and Preslav Nakov and Heng Ji},
  journal={arXiv preprint arXiv:2506.06972},
  year={2025}
}