Logical forms complement probability in understanding language model (and human) performance

13 February 2025
Yixuan Wang
Freda Shi
Abstract

With the increasing interest in using large language models (LLMs) for planning in natural language, understanding their behaviors has become an important research question. This work conducts a systematic investigation of LLMs' ability to perform logical reasoning in natural language. We introduce a controlled dataset of hypothetical and disjunctive syllogisms in propositional and modal logic and use it as a testbed for understanding LLM performance. Our results yield novel insights into predicting LLM behaviors: in addition to the probability of the input (Gonen et al., 2023; McCoy et al., 2024), logical forms should be considered as important factors. Furthermore, we show similarities and discrepancies between the logical reasoning performance of humans and LLMs by collecting and comparing behavioral data from both.
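To make the two syllogism families concrete, here is a minimal sketch of how such test items can be rendered in natural language. The templates and fillers are illustrative assumptions, not the authors' actual dataset:

```python
# Illustrative sketch (not the paper's dataset): the two syllogism
# families named in the abstract, rendered as natural-language items.

def hypothetical_syllogism(p: str, q: str, r: str) -> str:
    """If p then q; if q then r; therefore, if p then r."""
    return (f"If {p}, then {q}. If {q}, then {r}. "
            f"Therefore, if {p}, then {r}.")

def disjunctive_syllogism(p: str, q: str) -> str:
    """p or q; not p; therefore, q."""
    return (f"Either {p} or {q}. It is not the case that {p}. "
            f"Therefore, {q}.")

item1 = hypothetical_syllogism(
    "it rains", "the ground gets wet", "the game is cancelled")
item2 = disjunctive_syllogism(
    "the key is in the drawer", "the key is in the car")
```

A controlled dataset built this way lets the surface probability of an item (via the chosen fillers) vary independently of its logical form, which is what allows the two factors to be disentangled.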

@article{wang2025_2502.09589,
  title={Logical forms complement probability in understanding language model (and human) performance},
  author={Yixuan Wang and Freda Shi},
  journal={arXiv preprint arXiv:2502.09589},
  year={2025}
}