Boosting LLM Reasoning via Spontaneous Self-Correction

While large language models (LLMs) have demonstrated remarkable success on a broad range of tasks, mathematical reasoning remains challenging. One approach to improving math reasoning is self-correction, which builds self-improving loops that let the model correct its own mistakes. However, existing self-correction approaches treat corrections as standalone post-generation refinements, relying on extra prompting and system designs to elicit self-correction, rather than performing real-time, spontaneous self-correction in a single pass. To address this, we propose SPOC, a spontaneous self-correction approach that enables LLMs to generate interleaved solutions and verifications in a single inference pass, with generation dynamically terminated based on verification outcomes, thereby effectively scaling inference-time compute. SPOC adopts a multi-agent perspective by assigning dual roles -- solution proposer and verifier -- to the same model. We use a simple yet effective approach to generate synthetic data for fine-tuning, enabling the model to develop capabilities for self-verification and multi-agent collaboration. We further improve its solution proposal and verification accuracy through online reinforcement learning. Experiments on mathematical reasoning benchmarks show that SPOC significantly improves performance. Notably, SPOC boosts the accuracy of Llama-3.1-8B and 70B Instruct models, achieving gains of 8.8% and 11.6% on MATH500, 10.0% and 20.0% on AMC23, and 3.3% and 6.7% on AIME24, respectively.
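
To make the interleaved propose-and-verify idea concrete, here is a minimal Python sketch of the control flow. The interface (model.generate), the role prompts, the stop tags, and the verdict parsing are all assumptions for illustration; the paper's actual prompt format and stopping rule may differ, and SPOC performs the alternation within one decoding pass rather than via separate calls per role as written here for readability.

    # Hypothetical sketch of interleaved solution proposal and verification,
    # with generation terminated once a verification passes.
    def spoc_generate(model, problem, max_rounds=4):
        """Alternate proposer and verifier roles of the same model on one problem."""
        transcript = f"Problem: {problem}\n"
        solution = ""
        for _ in range(max_rounds):
            # Same model acts as the solution proposer.
            solution = model.generate(transcript + "Proposed solution:\n", stop="[VERIFY]")
            transcript += f"Proposed solution:\n{solution}\n"

            # Same model switches to the verifier role on its own proposal.
            verdict = model.generate(transcript + "Verification:\n", stop="[END]")
            transcript += f"Verification:\n{verdict}\n"

            # Terminate early when the verification passes; otherwise
            # continue and propose a corrected solution in the next round.
            if "correct" in verdict.lower() and "incorrect" not in verdict.lower():
                return solution
        return solution  # fall back to the last proposal if no verification passed

In this sketch the verification outcome both gates termination and, when negative, stays in the transcript so the next proposal can condition on the identified mistake; that is the sense in which corrections are spontaneous rather than a separate post-generation refinement step.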
@article{zhao2025_2506.06923,
  title={Boosting LLM Reasoning via Spontaneous Self-Correction},
  author={Xutong Zhao and Tengyu Xu and Xuewei Wang and Zhengxing Chen and Di Jin and Liang Tan and Yen-Ting and Zishun Yu and Zhuokai Zhao and Yun He and Sinong Wang and Han Fang and Sarath Chandar and Chen Zhu},
  journal={arXiv preprint arXiv:2506.06923},
  year={2025}
}