ResearchTrend.AI

Efficient Long CoT Reasoning in Small Language Models

24 May 2025
Zhaoyang Wang
Jinqi Jiang
Tian Qiu
Hui Liu
Xianfeng Tang
Huaxiu Yao
OffRL · ReLM · LRM
Abstract

Recent large reasoning models such as DeepSeek-R1 exhibit strong complex problem-solving abilities by generating long chain-of-thought (CoT) reasoning steps. It is challenging to train small language models (SLMs) to produce long CoT directly, so distillation has become a practical way to equip SLMs with this reasoning ability. However, long CoT often contains substantial redundant content (e.g., overthinking steps), which can be hard for SLMs to learn given their relatively limited capacity and generalization. To address this issue, we propose a simple yet effective method that prunes unnecessary steps from long CoT, and then employs an on-policy procedure in which the SLM itself curates valid and useful long CoT training data. In this way, SLMs can learn efficient long CoT reasoning while preserving competitive performance. Experimental results across a series of mathematical reasoning benchmarks demonstrate that the proposed method distills long CoT reasoning ability into SLMs, maintaining competitive performance while significantly reducing redundant reasoning steps.
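The abstract outlines a two-stage recipe: first prune redundant steps from a long CoT trace, then keep a pruned trace for training only if the student SLM itself can reach the correct answer from it (on-policy curation). The following is a minimal, hypothetical sketch of that pipeline — not the authors' code. The redundancy predicate, the `student_answer` callable, and the toy duplicate-step heuristic are all illustrative assumptions.

```python
def prune_cot(steps, is_redundant):
    """Drop CoT steps flagged by a caller-supplied redundancy predicate."""
    kept = []
    for step in steps:
        if not is_redundant(step, kept):
            kept.append(step)
    return kept

def curate_on_policy(examples, student_answer):
    """Keep (question, pruned_cot, answer) triples the student can solve.

    `student_answer(question, cot)` stands in for sampling the SLM itself,
    which is what makes the curation on-policy.
    """
    return [(q, cot, ans) for q, cot, ans in examples
            if student_answer(q, cot) == ans]

def repeats_earlier(step, kept):
    """Toy heuristic: a step is redundant if an earlier kept step already
    contains the same normalized text (e.g., re-verification loops)."""
    return step.strip().lower() in {s.strip().lower() for s in kept}

trace = ["Compute 2+3 = 5", "Check: 2+3 = 5", "compute 2+3 = 5", "Answer: 5"]
pruned = prune_cot(trace, repeats_earlier)
# The third step duplicates the first (up to case) and is dropped.
```

In a real system the redundancy predicate would be learned or verifier-based rather than string matching, and `student_answer` would decode from the SLM; the sketch only fixes the control flow of prune-then-filter.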

@article{wang2025_2505.18440,
  title={Efficient Long CoT Reasoning in Small Language Models},
  author={Zhaoyang Wang and Jinqi Jiang and Tian Qiu and Hui Liu and Xianfeng Tang and Huaxiu Yao},
  journal={arXiv preprint arXiv:2505.18440},
  year={2025}
}