arXiv:2209.00840

FOLIO: Natural Language Reasoning with First-Order Logic

2 September 2022
Simeng Han
Hailey Schoelkopf
Yilun Zhao
Zhenting Qi
Martin Riddell
Wenfei Zhou
James Coady
David Peng
Yujie Qiao
Luke Benson
Lucy Sun
Alex Wardle-Solano
Hannah Szabo
E. Zubova
Matthew Burtell
Jonathan Fan
Yixin Liu
Brian Wong
Malcolm Sailor
Ansong Ni
Linyong Nan
Jungo Kasai
Tao Yu
Rui Zhang
Alexander R. Fabbri
Wojciech Kryściński
Semih Yavuz
Ye Liu
Xi Victoria Lin
Shafiq R. Joty
Yingbo Zhou
Caiming Xiong
Rex Ying
Arman Cohan
Dragomir R. Radev
Abstract

Large language models (LLMs) have achieved remarkable performance on a variety of natural language understanding tasks. However, existing benchmarks are inadequate for measuring the complex logical reasoning capabilities of a model. We present FOLIO, a human-annotated, logically complex and diverse dataset for reasoning in natural language (NL), equipped with first-order logic (FOL) annotations. FOLIO consists of 1,430 examples (unique conclusions), each paired with one of 487 sets of premises used to deductively reason about the validity of the conclusion. The logical correctness of the premises and conclusions is ensured by their FOL annotations, which are automatically verified by an FOL inference engine. In addition to the main NL reasoning task, the NL-FOL pairs in FOLIO constitute a new NL-FOL translation dataset. Our experiments on FOLIO systematically evaluate the FOL reasoning capabilities of medium-sized language models under supervised fine-tuning. For both NL reasoning and NL-FOL translation, we benchmark multiple state-of-the-art language models. Our results show that a subset of FOLIO presents a challenge for one of the most capable publicly available LLMs, GPT-4.
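
The abstract describes each conclusion being labeled by deductive reasoning over FOL-annotated premises, with the annotations checked automatically by an FOL inference engine. The sketch below is a rough, hypothetical illustration of that setup, not the authors' pipeline or the dataset's actual format: it assumes NLTK's resolution-based FOL prover stands in for the inference engine, and the premises, conclusion, predicate names, and three-way labels are invented for illustration.

```python
# Illustrative sketch only (not the authors' code): a FOLIO-style example is
# a set of NL premises with FOL annotations plus a candidate conclusion, and
# the conclusion's label is decided by deductive entailment. NLTK's
# resolution prover plays the role of the FOL inference engine here.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read_expr = Expression.fromstring

# FOL annotations of the premises (each comment gives the NL sentence).
premises = [
    read_expr(r"all x.(turtle(x) -> reptile(x))"),   # "All turtles are reptiles."
    read_expr(r"all x.(reptile(x) -> -mammal(x))"),  # "No reptile is a mammal."
    read_expr(r"turtle(rex)"),                       # "Rex is a turtle."
]

# Candidate conclusion: "Rex is not a mammal."
conclusion = read_expr(r"-mammal(rex)")
negated_conclusion = read_expr(f"-({conclusion})")

prover = ResolutionProver()
if prover.prove(conclusion, premises):
    label = "True"       # premises entail the conclusion
elif prover.prove(negated_conclusion, premises):
    label = "False"      # premises entail the conclusion's negation
else:
    label = "Unknown"    # neither the conclusion nor its negation follows

print(label)  # -> True
```

The choice of prover is incidental; any sound FOL inference engine could fill the same role, which is what makes the FOL annotations machine-checkable.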
