LLMs cannot find reasoning errors, but can correct them given the error location
arXiv 2311.08516 (v3) · 14 November 2023
Gladys Tyen, Hassan Mansoor, Victor Carbune, Peter Chen, Tony Mak
Tags: LRM
Papers citing "LLMs cannot find reasoning errors, but can correct them given the error location" (17 papers)
When Do LLMs Admit Their Mistakes? Understanding the Role of Model Belief in Retraction
Yuqing Yang, Robin Jia
Tags: KELM, LRM
22 May 2025

BIG-Bench Extra Hard
Mehran Kazemi, Bahare Fatemi, Hritik Bansal, John Palowitch, Chrysovalantis Anastasiou, ..., Kate Olszewska, Yi Tay, Vinh Q. Tran, Quoc V. Le, Orhan Firat
Tags: ELM, LRM
26 Feb 2025

Can Large Language Models Detect Errors in Long Chain-of-Thought Reasoning?
Yancheng He, Shilong Li, Jing Liu, Weixun Wang, Xingyuan Bu, ..., Zhongyuan Peng, Zhenru Zhang, Zhicheng Zheng, Wenbo Su, Bo Zheng
Tags: ELM, LRM
26 Feb 2025

Examining False Positives under Inference Scaling for Mathematical Reasoning
Yu Guang Wang, Nan Yang, Liang Wang, Furu Wei
Tags: LRM
10 Feb 2025

LLM-based Translation Inference with Iterative Bilingual Understanding
Andong Chen, Kehai Chen, Yang Xiang, Xuefeng Bai, Muyun Yang, Yang Feng, Tiejun Zhao, Min Zhang
Tags: LRM
31 Dec 2024

Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning
Di Zhang, Jingdi Lei, Junxian Li, Xunzhi Wang, Yong Liu, ..., Steve Yang, Jianbo Wu, Peng Ye, Wanli Ouyang, Dongzhan Zhou
Tags: OffRL, LRM
27 Nov 2024

LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought
Zhuoxuan Jiang, Haoyuan Peng, Shanshan Feng, Fan Li, Dongsheng Li
Tags: KELM, LRM
09 May 2024

Evaluating Mathematical Reasoning Beyond Accuracy
Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, Pengfei Liu
Tags: LRM, ReLM
08 Apr 2024

AutoMix: Automatically Mixing Language Models
Pranjal Aggarwal, Aman Madaan, Ankit Anand, Srividya Pranavi Potharaju, Swaroop Mishra, ..., Karthik Kappaganthu, Yiming Yang, Shyam Upadhyay, Manaal Faruqui, Mausam
19 Oct 2023

SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Ning Miao, Yee Whye Teh, Tom Rainforth
Tags: ReLM, LRM
01 Aug 2023

Large Language Models are Better Reasoners with Self-Verification
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang Liu, Jun Zhao
Tags: ReLM, LRM
19 Dec 2022

Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, John Kernion, ..., Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom B. Brown, Jared Kaplan
Tags: SyDa, MoMe
15 Dec 2022

Large Language Models Can Self-Improve
Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
Tags: ReLM, AI4MH, LRM
20 Oct 2022

ReAct: Synergizing Reasoning and Acting in Language Models
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao
Tags: LLMAG, ReLM, LRM
06 Oct 2022

CodeT: Code Generation with Generated Tests
Bei Chen, Fengji Zhang, A. Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, Weizhu Chen
21 Jul 2022

Self-critiquing models for assisting human evaluators
William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Ouyang Long, Jonathan Ward, Jan Leike
Tags: ALM, ELM
12 Jun 2022

Think about it! Improving defeasible reasoning by first modeling the question scenario
Cencheng Shen, Niket Tandon, Ha Trinh, Peter Clark, Yiming Yang, Eduard H. Hovy
Tags: LRM, ReLM
24 Oct 2021