NaturalThoughts: Selecting and Distilling Reasoning Traces for General Reasoning Tasks

Recent work has shown that distilling reasoning traces from a larger teacher model via supervised finetuning outperforms reinforcement learning with the smaller student model alone (Guo et al., 2025). However, there has been no systematic study of which kinds of reasoning demonstrations from the teacher are most effective at improving the student model's reasoning capabilities. In this work we curate high-quality "NaturalThoughts" by selecting reasoning traces from a strong teacher model for a large pool of questions from NaturalReasoning (Yuan et al., 2025). We first conduct a systematic analysis of the factors that affect distillation of reasoning capabilities, in terms of sample efficiency and scalability on general reasoning tasks. We observe that simply scaling up data size with random sampling is a strong baseline with steady performance gains. Further, we find that selecting difficult examples that require more diverse reasoning strategies is more sample-efficient for transferring the teacher model's reasoning skills. Evaluated on both Llama and Qwen models, training with NaturalThoughts outperforms existing reasoning datasets such as OpenThoughts and LIMO on general STEM reasoning benchmarks including GPQA-Diamond, MMLU-Pro and SuperGPQA.
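The abstract describes the selection idea only at a high level: score candidate (question, teacher trace) pairs by difficulty and by the diversity of reasoning strategies in the trace, keep the highest-scoring pairs, and use them as supervised finetuning targets. The sketch below illustrates that idea under stated assumptions; the field names, strategy tags, and scoring heuristics are illustrative guesses, not the paper's exact method.

```python
from dataclasses import dataclass

# Illustrative set of reasoning-strategy tags a teacher trace might be annotated with.
STRATEGY_TAGS = {"decompose", "verify", "backtrack", "analogy", "case_split"}

@dataclass
class Example:
    question: str
    teacher_trace: str          # long chain-of-thought produced by the teacher model
    teacher_answer: str
    difficulty: float           # e.g., 1 - teacher pass rate over several sampled attempts
    strategies: set[str]        # strategy tags detected in the trace

def selection_score(ex: Example, w_difficulty: float = 0.5, w_diversity: float = 0.5) -> float:
    """Combine question difficulty with how many distinct strategies the trace uses."""
    diversity = len(ex.strategies & STRATEGY_TAGS) / len(STRATEGY_TAGS)
    return w_difficulty * ex.difficulty + w_diversity * diversity

def select_for_distillation(pool: list[Example], k: int) -> list[dict]:
    """Pick the top-k examples and format them as (prompt, target) pairs for SFT."""
    chosen = sorted(pool, key=selection_score, reverse=True)[:k]
    return [
        {
            "prompt": ex.question,
            # The student is trained to reproduce the teacher's reasoning and final answer.
            "target": f"<think>{ex.teacher_trace}</think>\n{ex.teacher_answer}",
        }
        for ex in chosen
    ]
```

In this sketch, raising `w_diversity` biases selection toward traces that mix several reasoning strategies, which is the property the paper reports as more sample-efficient than random sampling.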
@article{li2025_2507.01921,
  title={NaturalThoughts: Selecting and Distilling Reasoning Traces for General Reasoning Tasks},
  author={Yang Li and Youssef Emad and Karthik Padthe and Jack Lanchantin and Weizhe Yuan and Thao Nguyen and Jason Weston and Shang-Wen Li and Dong Wang and Ilia Kulikov and Xian Li},
  journal={arXiv preprint arXiv:2507.01921},
  year={2025}
}