EULER: Enhancing the Reasoning Ability of Large Language Models through Error-Induced Learning

Abstract

Large Language Models (LLMs) have demonstrated strong reasoning capabilities and achieved promising results on mathematical problem-solving tasks. Learning from errors offers the potential to further enhance the performance of LLMs during Supervised Fine-Tuning (SFT). However, the errors in synthesized solutions are typically gathered from sampling trials, making it challenging to generate solution errors for every mathematical problem. This paper introduces the Error-IndUced LEaRning (EULER) model, which develops an error exposure model that generates high-quality solution errors to enhance the mathematical reasoning capabilities of LLMs. Specifically, EULER optimizes the error exposure model to increase the generation probability of self-made solution errors while using solutions produced by a superior LLM to regularize generation quality. Our experiments across several mathematical problem datasets demonstrate the effectiveness of EULER, which achieves an improvement of over 4% compared to all baseline models. Further analysis reveals that EULER synthesizes more challenging and educational solution errors, which benefit both the training and inference of LLMs. All code is available at this https URL.
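
The abstract describes an objective that increases the generation probability of the model's own solution errors while regularizing with solutions from a superior LLM. The Python (PyTorch) snippet below is a minimal, hypothetical sketch of one way such a combined objective could look, written as a weighted sum of two language-modeling losses; the function name euler_style_loss, the weight lam, and all tensor shapes are invented for illustration and are not the paper's released implementation.

# Minimal sketch (not the authors' released code) of the training signal
# described in the abstract: raise the likelihood of the exposure model's
# own erroneous solutions while regularizing with a stronger LLM's solutions.
# The function name, the weight `lam`, and the tensor shapes are assumptions.
import torch
import torch.nn.functional as F

def euler_style_loss(error_logits, error_tokens, ref_logits, ref_tokens, lam=0.5):
    # Negative log-likelihood of the self-made erroneous solutions
    # (minimizing it increases their generation probability).
    error_nll = F.cross_entropy(
        error_logits.view(-1, error_logits.size(-1)), error_tokens.view(-1)
    )
    # Negative log-likelihood of the superior LLM's solutions,
    # acting as a quality regularizer on the exposure model.
    ref_nll = F.cross_entropy(
        ref_logits.view(-1, ref_logits.size(-1)), ref_tokens.view(-1)
    )
    return error_nll + lam * ref_nll

# Toy usage with random tensors (batch=2, seq_len=8, vocab=100).
vocab = 100
error_logits = torch.randn(2, 8, vocab, requires_grad=True)
error_tokens = torch.randint(0, vocab, (2, 8))
ref_logits = torch.randn(2, 8, vocab, requires_grad=True)
ref_tokens = torch.randint(0, vocab, (2, 8))
loss = euler_style_loss(error_logits, error_tokens, ref_logits, ref_tokens)
loss.backward()
print(float(loss))

In this reading, the first term pushes the exposure model toward its own error distribution, while the second keeps its outputs anchored to high-quality reference solutions; the paper's actual formulation may differ.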

@article{wu2025_2505.22131,
  title={EULER: Enhancing the Reasoning Ability of Large Language Models through Error-Induced Learning},
  author={Zhuoyang Wu and Xinze Li and Zhenghao Liu and Yukun Yan and Zhiyuan Liu and Minghe Yu and Cheng Yang and Yu Gu and Ge Yu and Maosong Sun},
  journal={arXiv preprint arXiv:2505.22131},
  year={2025}
}