ResearchTrend.AI
Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism
arXiv:2310.14868

23 October 2023
Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama
LRM

Papers citing "Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism"

(8 papers)
First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning
Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Keisuke Sakaguchi, Kentaro Inui
23 Jun 2024 · LRM
Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation
Neeraj Varshney, Satyam Raj, Venkatesh Mishra, Agneet Chatterjee, Ritika Sarkar, Amir Saeidi, Chitta Baral
08 Jun 2024 · LRM
Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs
Minh-Vuong Nguyen, Linhao Luo, Fatemeh Shiri, Dinh Q. Phung, Yuan-Fang Li, Thuy-Trang Vu, Gholamreza Haffari
17 Feb 2024 · KELM, LRM
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models
Tiwalayo Eisape, MH Tessler, Ishita Dasgupta, Fei Sha, Sjoerd van Steenkiste, Tal Linzen
01 Nov 2023 · ReLM, LRM
Simple Linguistic Inferences of Large Language Models (LLMs): Blind Spots and Blinds
Victoria Basmov, Yoav Goldberg, Reut Tsarfaty
24 May 2023 · ReLM, LRM
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
Joel Jang, Seonghyeon Ye, Minjoon Seo
26 Sep 2022 · ELM, LRM
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
24 May 2022 · ReLM, LRM
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022 · OSLM, ALM