
Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models

26 September 2024
Tongxuan Liu
Wenjiang Xu
Weizhe Huang
Yuting Zeng
Jiaxing Wang
Xingyu Wang
Hailong Yang
Jing Li
Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, but their performance on complex logical reasoning tasks remains unsatisfactory. Although prompting methods such as Chain-of-Thought can improve the reasoning ability of LLMs to some extent, they suffer from an unfaithfulness issue: the derived conclusions may not align with the generated reasoning chain. To address this issue, some studies employ propositional logic to further enhance the logical reasoning abilities of LLMs. However, potential omissions when these methods extract logical expressions can cause information loss in the logical reasoning process and thereby produce incorrect results. To this end, we propose Logic-of-Thought (LoT) prompting, which employs propositional logic to generate expanded logical information from the input context and uses this information as an additional augmentation to the input prompt, thereby enhancing logical reasoning capability. LoT is orthogonal to existing prompting methods and can be seamlessly integrated with them. Extensive experiments demonstrate that LoT boosts the performance of various prompting methods by a striking margin across five logical reasoning tasks. In particular, LoT improves Chain-of-Thought's performance on the ReClor dataset by +4.35%, improves Chain-of-Thought with Self-Consistency on LogiQA by +5%, and boosts Tree-of-Thoughts on the ProofWriter dataset by +8%.
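As a concrete illustration of the augmentation step described in the abstract, the sketch below expands a set of extracted implications with the transitive law and appends their natural-language renderings to the prompt. This is a minimal, hypothetical sketch, not the authors' implementation: the helper names (`expand`, `augment_prompt`) and the toy propositions are assumptions, the propositions are hand-extracted here, and in LoT the extraction and translation of logical expressions would themselves be handled by the model. Other propositional laws (e.g., contraposition) could be added in the same fixed-point loop.

```python
from itertools import product

def expand(implications: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Expand implications (p -> q) with the transitive law until a fixed
    point: from p -> q and q -> r, derive p -> r."""
    closed = set(implications)
    changed = True
    while changed:
        changed = False
        # product() materializes its input first, so mutating `closed`
        # inside the loop is safe.
        for (p, q), (q2, r) in product(closed, repeat=2):
            if q == q2 and (p, r) not in closed:
                closed.add((p, r))
                changed = True
    return closed

def augment_prompt(context: str, implications, descriptions) -> str:
    """Append natural-language renderings of the expanded logical
    information to the original context."""
    extra = [
        f"If {descriptions[p]}, then {descriptions[q]}."
        for p, q in sorted(expand(set(implications)))
    ]
    return context + "\nAdditional logical information:\n" + "\n".join(extra)

# Toy usage with hand-extracted propositions (hypothetical example).
context = ("Anyone who reads often is knowledgeable. "
           "Knowledgeable people pass exams.")
implications = [("R", "K"), ("K", "P")]
descriptions = {"R": "a person reads often",
                "K": "that person is knowledgeable",
                "P": "that person passes exams"}
print(augment_prompt(context, implications, descriptions))
```

In this toy example the transitive closure derives the new implication R -> P ("If a person reads often, then that person passes exams."), which is exactly the kind of expanded logical information the method injects back into the prompt so the model need not rediscover it during generation.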

View on arXiv: https://arxiv.org/abs/2409.17539
@article{liu2025_2409.17539,
  title={Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models},
  author={Tongxuan Liu and Wenjiang Xu and Weizhe Huang and Yuting Zeng and Jiaxing Wang and Xingyu Wang and Hailong Yang and Jing Li},
  journal={arXiv preprint arXiv:2409.17539},
  year={2025}
}