Self-Reasoning Language Models: Unfold Hidden Reasoning Chains with Few Reasoning Catalyst

20 May 2025
Hongru Wang
Deng Cai
Wanjun Zhong
Shijue Huang
Jeff Z. Pan
Zeming Liu
Kam-Fai Wong
Abstract

Inference-time scaling has attracted much attention because it significantly enhances the performance of Large Language Models (LLMs) on complex reasoning tasks by increasing the length of the Chain-of-Thought. These longer intermediate reasoning rationales embody various meta-reasoning skills found in human cognition, such as reflection and decomposition, which are difficult to create and acquire. In this work, we introduce the Self-Reasoning Language Model (SRLM), in which the model itself synthesizes longer CoT data and iteratively improves its performance through self-training. By incorporating a few demonstration examples (i.e., 1,000 samples) showing how to unfold hidden reasoning chains from existing responses, which act as a reasoning catalyst, we demonstrate that SRLM not only enhances the model's initial performance but also ensures more stable and consistent improvements in subsequent iterations. Our proposed SRLM achieves an average absolute improvement of more than +2.5 points across five reasoning benchmarks (MMLU, GSM8K, ARC-C, HellaSwag, and BBH) on two backbone models. Moreover, it yields further gains with more sampling at inference time, such as an absolute average improvement of +7.89 with 64 samples, revealing the deeper, more diverse, and more creative reasoning paths uncovered by SRLM compared with the strong baseline.
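
To make the iterative self-training described in the abstract concrete, here is a minimal, hypothetical Python sketch of how such a loop could be organized: the base model is first fine-tuned on the roughly 1,000 catalyst demonstrations, then repeatedly samples longer chains-of-thought for a pool of questions, filters them, and retrains on its own traces. All names (self_reasoning_loop, fine_tune, generate_cot, is_acceptable) and the filtering criterion are assumptions for illustration, not the authors' actual implementation.

from typing import Callable, List, Tuple

# Types used in this sketch (hypothetical): a "demo" pairs a question with a
# long reasoning chain; generate_cot returns (reasoning_chain, final_answer).
Demo = Tuple[str, str]

def self_reasoning_loop(
    base_model: object,
    catalyst_demos: List[Demo],
    questions: List[str],
    fine_tune: Callable[[object, List[Demo]], object],
    generate_cot: Callable[[object, str], Tuple[str, str]],
    is_acceptable: Callable[[str, str], bool],
    num_iterations: int = 3,
    samples_per_question: int = 8,
) -> object:
    """Iterative self-training seeded by a small reasoning catalyst (sketch only).

    catalyst_demos: ~1,000 examples showing how to unfold a short response into
    a longer hidden reasoning chain; they seed the first round of fine-tuning.
    """
    # Seed the model with the catalyst demonstrations.
    model = fine_tune(base_model, catalyst_demos)

    for _ in range(num_iterations):
        synthetic: List[Demo] = []
        for q in questions:
            # Sample several candidate chains-of-thought from the current model.
            candidates = [generate_cot(model, q) for _ in range(samples_per_question)]
            # Keep chains whose final answers pass some filter (an assumed
            # criterion, e.g. self-consistency or a reference check).
            synthetic.extend(
                (q, chain) for chain, answer in candidates if is_acceptable(q, answer)
            )
        # Self-training: refine the model on its own longer reasoning traces,
        # mixed with the original catalyst demonstrations.
        model = fine_tune(model, catalyst_demos + synthetic)

    return model

Passing fine_tune, generate_cot, and is_acceptable in as callables keeps the sketch independent of any particular training stack; in practice they would wrap an actual LLM fine-tuning and sampling pipeline.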

@article{wang2025_2505.14116,
  title={Self-Reasoning Language Models: Unfold Hidden Reasoning Chains with Few Reasoning Catalyst},
  author={Hongru Wang and Deng Cai and Wanjun Zhong and Shijue Huang and Jeff Z. Pan and Zeming Liu and Kam-Fai Wong},
  journal={arXiv preprint arXiv:2505.14116},
  year={2025}
}