Eliminating Reasoning via Inferring with Planning: A New Framework to Guide LLMs' Non-linear Thinking (arXiv:2310.12342)
18 October 2023
Yongqi Tong, Yifan Wang, Dawei Li, Sizhe Wang, Zi Lin, Simeng Han, Jingbo Shang
LRM

Papers citing "Eliminating Reasoning via Inferring with Planning: A New Framework to Guide LLMs' Non-linear Thinking"

9 / 9 papers shown

Cracking the Code of Juxtaposition: Can AI Models Understand the Humorous Contradictions
Zhe Hu, Tuo Liang, Jing Li, Yiren Lu, Yunlai Zhou, Yiran Qiao, Jing Ma, Yu Yin
29 May 2024

DALK: Dynamic Co-Augmentation of LLMs and KG to answer Alzheimer's Disease Questions with Scientific Literature
Dawei Li, Shu Yang, Zhen Tan, Jae Young Baik, Sunkwon Yun, ..., D. Duong-Tran, Ying Ding, Huan Liu, Li Shen, Tianlong Chen
08 May 2024

Puzzle Solving using Reasoning of Large Language Models: A Survey
Panagiotis Giadikiaroglou, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou
ELM, ReLM, LRM
17 Feb 2024

Contextualization Distillation from Large Language Model for Knowledge Graph Completion
Dawei Li, Zhen Tan, Tianlong Chen, Huan Liu
KELM
28 Jan 2024

Complexity-Based Prompting for Multi-Step Reasoning
Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, Tushar Khot
ReLM, LRM
03 Oct 2022

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM, LRM
24 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE
21 Mar 2022

Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
RALM
06 Jan 2021

RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge
Bill Yuchen Lin, Ziyi Wu, Yichi Yang, Dong-Ho Lee, Xiang Ren
ReLM, LRM
02 Jan 2021