Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations (arXiv 2205.11822)

24 May 2022
Jaehun Jung
Lianhui Qin
Sean Welleck
Faeze Brahman
Chandra Bhagavatula
Ronan Le Bras
Yejin Choi
    ReLM
    LRM

Papers citing "Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations"

Showing 50 of 147 citing papers
Always Tell Me The Odds: Fine-grained Conditional Probability Estimation
Liaoyaqi Wang
Zhengping Jiang
Anqi Liu
Benjamin Van Durme
59
0
0
02 May 2025
BELL: Benchmarking the Explainability of Large Language Models
Syed Quiser Ahmed
Bharathi Vokkaliga Ganesh
Jagadish Babu P
Karthick Selvaraj
ReddySiva Naga Parvathi Devi
Sravya Kappala
ELM
133
0
0
22 Apr 2025
Affordable AI Assistants with Knowledge Graph of Thoughts
Maciej Besta
Lorenzo Paleari
Jia Hao Andrea Jiang
Robert Gerstenberger
You Wu
...
Jón Gunnar Hannesson
Grzegorz Kwaśniewski
Marcin Copik
H. Niewiadomski
Torsten Hoefler
LLMAG
RALM
142
0
0
03 Apr 2025
CoKe: Customizable Fine-Grained Story Evaluation via Chain-of-Keyword Rationalization
Brihi Joshi
Sriram Venkatapathy
Mohit Bansal
Nanyun Peng
Haw-Shiuan Chang
LRM
49
0
0
21 Mar 2025
An Overview and Discussion on Using Large Language Models for Implementation Generation of Solutions to Open-Ended Problems
Hashmath Shaik
Alex Doboli
OffRL
ELM
146
0
0
31 Dec 2024
ArgMed-Agents: Explainable Clinical Decision Reasoning with LLM Disscusion via Argumentation Schemes
Shengxin Hong
Liang Xiao
Xin Zhang
Jian-Xing Chen
LRM
40
2
0
31 Dec 2024
From Models to Microtheories: Distilling a Model's Topical Knowledge for Grounded Question Answering
Nathaniel Weir
Bhavana Dalvi Mishra
Orion Weller
Oyvind Tafjord
Sam Hornstein
Alexander Sabol
Peter Alexander Jansen
Benjamin Van Durme
Peter Clark
LRM
KELM
92
0
0
23 Dec 2024
Blind Spot Navigation in LLM Reasoning with Thought Space Explorer
Jinghan Zhang
Fengran Mo
Xiting Wang
Kunpeng Liu
LM&Ro
LRM
51
5
0
31 Oct 2024
MiCEval: Unveiling Multimodal Chain of Thought's Quality via Image Description and Reasoning Steps
Xiongtao Zhou
Jie He
Lanyu Chen
Jingyu Li
Haojing Chen
Víctor Gutiérrez-Basulto
Jeff Z. Pan
H. Chen
LRM
55
1
0
18 Oct 2024
Plausibly Problematic Questions in Multiple-Choice Benchmarks for Commonsense Reasoning
Shramay Palta
Nishant Balepur
Peter Rankel
Sarah Wiegreffe
Marine Carpuat
Rachel Rudinger
ELM
31
4
0
06 Oct 2024
Multi-Step Time Series Inference Agent for Reasoning and Automated Task Execution
Wen Ye
Yizhou Zhang
Wei Yang
Lumingyuan Tang
Defu Cao
Jie Cai
Yan Liu
BDL
CoGe
AI4TS
36
2
0
05 Oct 2024
System 2 Reasoning Capabilities Are Nigh
Scott C. Lowe
VLM
LRM
40
0
0
04 Oct 2024
Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models
Yinhong Liu
Zhijiang Guo
Tianya Liang
Ehsan Shareghi
Ivan Vulić
Nigel Collier
108
0
0
03 Oct 2024
Graph Reasoning with Large Language Models via Pseudo-code Prompting
Konstantinos Skianis
Giannis Nikolentzos
Michalis Vazirgiannis
LRM
ReLM
35
4
0
26 Sep 2024
Iteration of Thought: Leveraging Inner Dialogue for Autonomous Large Language Model Reasoning
Santosh Kumar Radha
Yasamin Nouri Jelyani
Ara Ghukasyan
Oktay Goktas
LLMAG
LM&Ro
LRM
31
5
0
19 Sep 2024
Logically Consistent Language Models via Neuro-Symbolic Integration
Diego Calanzone
Stefano Teso
Antonio Vergari
LRM
73
6
0
09 Sep 2024
Diagnosing and Remedying Knowledge Deficiencies in LLMs via Label-free Curricular Meaningful Learning
Kai Xiong
Xiao Ding
Li Du
Jiahao Ying
Ting Liu
Bing Qin
Yixin Cao
34
1
0
21 Aug 2024
Internal Consistency and Self-Feedback in Large Language Models: A Survey
Xun Liang
Shichao Song
Zifan Zheng
Hanyu Wang
Qingchen Yu
...
Rong-Hua Li
Peng Cheng
Zhonghao Wang
Feiyu Xiong
Zhiyu Li
HILM
LRM
62
25
0
19 Jul 2024
xTower: A Multilingual LLM for Explaining and Correcting Translation Errors
Marcos Vinícius Treviso
Nuno M. Guerreiro
Sweta Agrawal
Ricardo Rei
José P. Pombal
Tânia Vaz
Helena Wu
Beatriz Silva
Daan van Stigt
André F. T. Martins
LRM
34
7
0
27 Jun 2024
CAVE: Controllable Authorship Verification Explanations
Sahana Ramnath
Kartik Pandey
Elizabeth Boschee
Xiang Ren
61
1
0
24 Jun 2024
Chain-of-Probe: Examining the Necessity and Accuracy of CoT Step-by-Step
Zezhong Wang
Xingshan Zeng
Weiwen Liu
Yufei Wang
Liangyou Li
Yasheng Wang
Lifeng Shang
Xin Jiang
Qun Liu
Kam-Fai Wong
LRM
56
3
0
23 Jun 2024
A Personalised Learning Tool for Physics Undergraduate Students Built On a Large Language Model for Symbolic Regression
Yufan Zhu
Zi-Yu Khoo
Jonathan Sze Choong Low
Stephane Bressan
AI4Ed
25
2
0
17 Jun 2024
Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs
Yi Fang
Moxin Li
Wenjie Wang
Hui Lin
Fuli Feng
LRM
60
5
0
17 Jun 2024
A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation
Bairu Hou
Yang Zhang
Jacob Andreas
Shiyu Chang
69
5
0
11 Jun 2024
Process-Driven Autoformalization in Lean 4
Jianqiao Lu
Zhengying Liu
Yingjia Wan
Yinya Huang
Haiming Wang
Zhicheng Yang
Jing Tang
Zhijiang Guo
AI4CE
37
14
0
04 Jun 2024
When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs
Ryo Kamoi
Yusen Zhang
Nan Zhang
Jiawei Han
Rui Zhang
LRM
44
57
0
03 Jun 2024
Brainstorming Brings Power to Large Language Models of Knowledge Reasoning
Zining Qin
Chenhao Wang
Huiling Qin
Weijia Jia
LRM
29
1
0
02 Jun 2024
Towards Dialogues for Joint Human-AI Reasoning and Value Alignment
Elfia Bezou-Vrakatseli
O. Cocarascu
Sanjay Modgil
30
0
0
28 May 2024
Hypothesis Testing Prompting Improves Deductive Reasoning in Large Language Models
Yitian Li
Jidong Tian
Hao He
Yaohui Jin
LRM
ReLM
27
0
0
09 May 2024
Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics
Hanlin Zhu
Baihe Huang
Shaolun Zhang
Michael I. Jordan
Jiantao Jiao
Yuandong Tian
Stuart Russell
LRM
AI4CE
47
13
0
07 May 2024
Towards Logically Consistent Language Models via Probabilistic Reasoning
Diego Calanzone
Stefano Teso
Antonio Vergari
LRM
HILM
37
2
0
19 Apr 2024
Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought
Jooyoung Lee
Fan Yang
Thanh Tran
Qian Hu
Emre Barut
Kai-Wei Chang
Chengwei Su
ReLM
LLMAG
LRM
19
10
0
04 Apr 2024
Learning From Correctness Without Prompting Makes LLM Efficient Reasoner
Yuxuan Yao
Han Wu
Zhijiang Guo
Biyan Zhou
Jiahui Gao
Sichun Luo
Hanxu Hou
Xiaojin Fu
Linqi Song
LLMAG
LRM
40
9
0
28 Mar 2024
Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization
Jin Peng Zhou
Charles Staats
Wenda Li
Christian Szegedy
Kilian Q. Weinberger
Yuhuai Wu
LRM
24
27
0
26 Mar 2024
Information-Theoretic Distillation for Reference-less Summarization
Jaehun Jung
Ximing Lu
Liwei Jiang
Faeze Brahman
Peter West
Pang Wei Koh
Yejin Choi
38
3
0
20 Mar 2024
Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection
Moxin Li
Wenjie Wang
Fuli Feng
Fengbin Zhu
Qifan Wang
Tat-Seng Chua
HILM
LRM
38
13
0
15 Mar 2024
From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification
Fei Wang
Chao Shang
Sarthak Jain
Shuai Wang
Qiang Ning
Bonan Min
Vittorio Castelli
Yassine Benajiba
Dan Roth
ALM
22
7
0
10 Mar 2024
Look Before You Leap: Problem Elaboration Prompting Improves Mathematical Reasoning in Large Language Models
Haoran Liao
Jidong Tian
Shaohua Hu
Hao He
Yaohui Jin
ReLM
LRM
43
1
0
24 Feb 2024
Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question?
Nishant Balepur
Abhilasha Ravichander
Rachel Rudinger
ELM
35
19
0
19 Feb 2024
An Examination on the Effectiveness of Divide-and-Conquer Prompting in Large Language Models
Yizhou Zhang
Lun Du
Defu Cao
Qiang Fu
Yan Liu
LRM
20
7
0
08 Feb 2024
Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification
Soumya Sanyal
Tianyi Xiao
Jiacheng Liu
Wenya Wang
Xiang Ren
LRM
ReLM
49
12
0
06 Feb 2024
LLM-based NLG Evaluation: Current Status and Challenges
Mingqi Gao
Xinyu Hu
Jie Ruan
Xiao Pu
Xiaojun Wan
ELM
LM&MA
55
29
0
02 Feb 2024
Enhancing Ethical Explanations of Large Language Models through Iterative Symbolic Refinement
Xin Quan
Marco Valentino
Louise A. Dennis
André Freitas
LRM
20
11
0
01 Feb 2024
A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains
Alon Jacovi
Yonatan Bitton
Bernd Bohnet
Jonathan Herzig
Or Honovich
Michael Tseng
Michael Collins
Roee Aharoni
Mor Geva
LRM
34
18
0
01 Feb 2024
Demystifying Chains, Trees, and Graphs of Thoughts
Maciej Besta
Florim Memedi
Zhenyu Zhang
Robert Gerstenberger
Guangyuan Piao
...
Aleš Kubíček
H. Niewiadomski
Aidan O'Mahony
Onur Mutlu
Torsten Hoefler
AI4CE
LRM
67
26
0
25 Jan 2024
SocraSynth: Multi-LLM Reasoning with Conditional Statistics
Edward Y. Chang
LLMAG
LRM
25
7
0
19 Jan 2024
Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability
Afra Feyza Akyürek
Ekin Akyürek
Leshem Choshen
Derry Wijaya
Jacob Andreas
HILM
SyDa
41
16
0
16 Jan 2024
Language Models, Agent Models, and World Models: The LAW for Machine Reasoning and Planning
Zhiting Hu
Tianmin Shu
LLMAG
LM&Ro
LRM
102
34
0
08 Dec 2023
FlexModel: A Framework for Interpretability of Distributed Large Language Models
Matthew Choi
Muhammad Adil Asif
John Willes
David Emerson
AI4CE
ALM
22
1
0
05 Dec 2023
Applying Large Language Models and Chain-of-Thought for Automatic Scoring
Gyeong-Geon Lee
Ehsan Latif
Xuansheng Wu
Ninghao Liu
Xiaoming Zhai
34
87
0
30 Nov 2023