arXiv:2408.04403
Exploring Reasoning Biases in Large Language Models Through Syllogism: Insights from the NeuBAROCO Dataset
8 August 2024
Kentaro Ozeki, Risako Ando, Takanobu Morishita, Hirohiko Abe, K. Mineshima, Mitsuhiro Okada
LRM
Papers citing "Exploring Reasoning Biases in Large Language Models Through Syllogism: Insights from the NeuBAROCO Dataset" (5 of 5 papers shown)
Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities
Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Hiroki Iida, Masanari Ohi, Kakeru Hattori, Hirai Shota, Sakae Mizuki, Rio Yokota, Naoaki Okazaki
CLL
27 Apr 2024
Evaluating Large Language Models with NeuBAROCO: Syllogistic Reasoning Ability and Human-like Biases
Risako Ando, Takanobu Morishita, Hirohiko Abe, K. Mineshima, Mitsuhiro Okada
LRM, ELM
21 Jun 2023
Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
ReLM, LRM
14 Jul 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022
Can neural networks understand monotonicity reasoning?
Hitomi Yanaka, K. Mineshima, D. Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos
LRM
15 Jun 2019