Reason from Future: Reverse Thought Chain Enhances LLM Reasoning

4 June 2025
Yinlong Xu
Yanzhao Zheng
Shuoshuo Sun
Shuaihan Huang
Baohua Dong
Hangcheng Zhu
Ruohui Huang
Gang Yu
Hongxia Xu
Jian Wu
Communities: ReLM, LRM, AI4CE
arXiv (abs) · PDF · HTML
Main: 8 pages · 11 figures · Bibliography: 2 pages · 5 tables · Appendix: 4 pages
Abstract

It has been demonstrated that carefully designed reasoning paradigms, like Chain-of-Thought (CoT) and Tree-of-Thought (ToT), can enhance the reasoning capabilities of small language models through detailed thinking and extensive thought searching, but unbounded branching factors in the search space make the cost of reasoning prohibitive. Moreover, these methods fall into the trap of locally optimal reasoning: the model lacks a global perspective while solving problems. We propose a novel reasoning paradigm called Reason from Future (RFF), which generates reasoning paths by bidirectional reasoning that combines top-down planning with bottom-up reasoning accumulation. The essence of RFF lies in its reverse reasoning mechanism, which prioritizes core logical relationships and imposes goal-oriented constraints on intermediate steps, thereby reducing the search space and mitigating the error accumulation inherent in sequential forward reasoning. Empirical evaluations across diverse experiments demonstrate that RFF outperforms conventional paradigms, solving complex tasks with higher accuracy while exploring a smaller search space.
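As a rough illustration of the paradigm the abstract describes, the sketch below alternates a reverse step (deriving a goal-oriented constraint on the next intermediate state) with a forward step (accumulating a verified derivation toward it). This is only a minimal sketch based on the abstract, not the authors' implementation: the `call_llm` helper, the prompts, and the stopping test are all hypothetical stand-ins.

```python
# Minimal sketch of the bidirectional loop suggested by the abstract.
# NOT the paper's algorithm: `call_llm`, the prompt wording, and the
# termination check are illustrative assumptions.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around any chat-completion API."""
    raise NotImplementedError

def reason_from_future(question: str, goal: str, max_steps: int = 8) -> list[str]:
    facts = [question]   # bottom-up accumulation of forward reasoning steps
    target = goal        # top-down target, refined by reverse reasoning
    for _ in range(max_steps):
        # Reverse step: ask what must hold just before the current target,
        # constraining the search to goal-relevant intermediate states.
        target = call_llm(
            f"Goal: {target}\nKnown: {facts}\n"
            "What single fact, if derived next, would make this goal reachable?"
        )
        # Forward step: derive that fact from what is already established.
        step = call_llm(
            f"Known: {facts}\nDerive: {target}\nGive one reasoning step."
        )
        facts.append(step)
        # Stop once the forward chain satisfies the original goal.
        answer = call_llm(f"Do these facts entail '{goal}'? {facts}\nAnswer yes or no.")
        if answer.strip().lower().startswith("yes"):
            break
    return facts
```

Because each forward step is checked against a goal-derived target rather than expanded freely, the branching factor stays bounded, which is the intuition behind the abstract's claim of a reduced search space.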

View on arXiv
@article{xu2025_2506.03673,
  title={Reason from Future: Reverse Thought Chain Enhances LLM Reasoning},
  author={Yinlong Xu and Yanzhao Zheng and Shuoshuo Sun and Shuaihan Huang and Baohua Dong and Hangcheng Zhu and Ruohui Huang and Gang Yu and Hongxia Xu and Jian Wu},
  journal={arXiv preprint arXiv:2506.03673},
  year={2025}
}