One Missing Piece for Open-Source Reasoning Models: A Dataset to Mitigate Cold-Starting Short CoT LLMs in RL

3 June 2025
Hyungjoo Chae, Dongjin Kang, Jihyuk Kim, Beong-woo Kwak, Sunghyun Park, Haeju Park, Jinyoung Yeo, Moontae Lee, Kyungjae Lee
Main: 7 pages · Appendix: 7 pages · Bibliography: 3 pages · 20 figures · 5 tables
Abstract

With the release of R1, a publicly available large reasoning model (LRM), researchers commonly build new LRMs by training language models on R1's long chain-of-thought (CoT) traces. While prior work shows that LRM capabilities can be reproduced through direct distillation, the continued reliance on existing models (e.g., R1) remains a critical limitation in advancing the field. As a first step toward independent LRM development, this paper explores the possibility of constructing a long CoT dataset with LLMs that are not trained for inference-time scaling. To this end, we present the Long CoT Collection, a dataset of 100K CoT rationales annotated using existing short CoT LLMs. We develop a pipeline that induces o1's novel reasoning strategies into short CoT LLMs, enabling them to think longer and introducing controllability over the thought budget to better manage the overthinking problem. Our extensive analyses validate that our dataset achieves quality comparable to, or slightly below, that of R1. Furthermore, our experiments demonstrate that training on our dataset not only strengthens general reasoning skills, but also provides a strong foundation for reinforcement learning: models initialized on our data achieve 2-3x larger gains with RLVR.
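
The abstract only outlines the annotation pipeline (inducing o1-style reasoning strategies into short CoT LLMs under a controllable thought budget), so the sketch below is purely illustrative. The prompt wording, the list of strategies, the `generate` callable, and the `max(thought_budget)` heuristic are assumptions, not the authors' implementation.

# Minimal sketch of a long-CoT annotation step with a thought budget.
# NOTE: the prompt template, REASONING_STRATEGIES, and `generate` are
# assumptions for illustration; the paper's actual pipeline is not
# specified in this abstract.

REASONING_STRATEGIES = [
    "decompose the problem into sub-goals",
    "verify each intermediate result before moving on",
    "backtrack and try an alternative approach when a step fails",
]

def build_prompt(question: str, thought_budget: int) -> str:
    """Ask a short-CoT LLM to reason step by step under an explicit token budget."""
    strategies = "\n".join(f"- {s}" for s in REASONING_STRATEGIES)
    return (
        "Solve the problem below. Think step by step, using these strategies:\n"
        f"{strategies}\n"
        f"Keep your reasoning under roughly {thought_budget} tokens, "
        "then state the final answer.\n\n"
        f"Problem: {question}\n"
    )

def annotate(question: str, generate, thought_budget: int = 2048) -> str:
    """Produce one long-CoT rationale; `generate` is any text-completion callable
    backed by an existing short CoT LLM."""
    return generate(build_prompt(question, thought_budget))

Varying `thought_budget` across calls is one simple way to obtain rationales of different lengths, which is the kind of controllability over the thought budget the abstract refers to.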

@article{chae2025_2506.02338,
  title={One Missing Piece for Open-Source Reasoning Models: A Dataset to Mitigate Cold-Starting Short CoT LLMs in RL},
  author={Hyungjoo Chae and Dongjin Kang and Jihyuk Kim and Beong-woo Kwak and Sunghyun Park and Haeju Park and Jinyoung Yeo and Moontae Lee and Kyungjae Lee},
  journal={arXiv preprint arXiv:2506.02338},
  year={2025}
}