How susceptible are LLMs to Logical Fallacies?

18 August 2023
Amirreza Payandeh, Dan Pluth, Jordan Hosier, Xuesu Xiao, V. Gurbani
Topics: LLMAG, LRM, ELM
arXiv: 2308.09853 (abs · PDF · HTML)

Papers citing "How susceptible are LLMs to Logical Fallacies?" (10 of 10 papers shown)

1. Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond
   Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, Min Zhang
   16 Jun 2023 · ELM, LRM

2. Emergent Analogical Reasoning in Large Language Models
   Taylor Webb, K. Holyoak, Hongjing Lu
   19 Dec 2022 · ReLM, ELM, LRM, AI4CE

3. Solving Quantitative Reasoning Problems with Language Models
   Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, ..., Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra
   29 Jun 2022 · ReLM, ELM, LRM

4. PaLM: Scaling Language Modeling with Pathways
   Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel
   05 Apr 2022 · PILM, LRM

5. Logical Fallacy Detection
   Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding, Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf
   28 Feb 2022 · LRM

6. Red Teaming Language Models with Language Models
   Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, G. Irving
   07 Feb 2022 · AAML

7. The Limitations of Stylometry for Detecting Machine-Generated Fake News
   Tal Schuster, R. Schuster, Darsh J. Shah, Regina Barzilay
   26 Aug 2019 · DeLMO

8. Persuasion for Good: Towards a Personalized Persuasive Dialogue System for Social Good
   Xuewei Wang, Weiyan Shi, Richard Kim, Y. Oh, Sijia Yang, Jingwen Zhang, Zhou Yu
   16 Jun 2019

9. Defending Against Neural Fake News
   Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi
   29 May 2019 · AAML

10. Before Name-calling: Dynamics and Triggers of Ad Hominem Fallacies in Web Argumentation
    Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, Benno Stein
    19 Feb 2018