Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation

17 March 2025
Songjun Tu
Jiahao Lin
Xiangyu Tian
Qichao Zhang
Linjing Li
Yuqian Fu
Nan Xu
Wei He
Xiangyuan Lan
Dongmei Jiang
Dongbin Zhao
Abstract

Recent advancements in post-training methodologies for large language models (LLMs) have highlighted reinforcement learning (RL) as a critical component for enhancing reasoning. However, the substantial computational costs associated with RL-based approaches have led to growing interest in alternative paradigms, such as Direct Preference Optimization (DPO). In this study, we investigate the effectiveness of DPO in facilitating self-improvement for LLMs through iterative preference-based learning. We demonstrate that a single round of DPO with coarse filtering significantly enhances mathematical reasoning performance, particularly for strong base models. Furthermore, we design an iterative enhancement framework for both the generator and the reward model (RM), enabling their mutual improvement through online interaction across multiple rounds of DPO. Finally, with simple verifiable rewards, our model DPO-VP achieves RL-level performance with significantly lower computational overhead. These findings highlight DPO as a scalable and cost-effective alternative to RL, offering a practical solution for enhancing LLM reasoning in resource-constrained situations.
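To make the kind of pipeline the abstract describes more concrete, the following is a minimal, illustrative sketch of DPO preference-pair construction with a simple verifiable reward, assuming a PyTorch-style setup. The function names (dpo_loss, build_pairs_with_verifiable_reward) and the final-answer string check are assumptions for illustration only, not the authors' DPO-VP implementation; sampling from the generator and the actual fine-tuning loop are omitted.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO objective on sequence-level log-probabilities:
    # push the policy's implicit reward for the chosen response above
    # that of the rejected one, measured relative to a frozen reference model.
    chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen - rejected).mean()

def build_pairs_with_verifiable_reward(question, candidates, gold_answer):
    # Coarse filtering with a verifiable reward (illustrative): a sampled
    # solution counts as positive if its final answer matches the reference
    # answer, and as negative otherwise. Pair every positive with every negative.
    correct = [c for c in candidates if c.strip().endswith(gold_answer)]
    wrong = [c for c in candidates if not c.strip().endswith(gold_answer)]
    return [(question, pos, neg) for pos in correct for neg in wrong]

if __name__ == "__main__":
    # Toy usage: two sampled solutions to one math question.
    pairs = build_pairs_with_verifiable_reward(
        "What is 7 * 8?", ["7 * 8 = 54", "7 * 8 = 56"], "56")
    print(pairs)
    # Toy per-sequence log-probabilities standing in for policy/reference scores.
    loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                    torch.tensor([-11.0]), torch.tensor([-11.5]))
    print(loss.item())

In an iterative variant of this setup, each round would regenerate candidate solutions with the updated policy, rebuild preference pairs, and rerun DPO, which is the kind of multi-round online interaction between generator and reward signal that the abstract refers to.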

@article{tu2025_2503.12854,
  title={Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation},
  author={Songjun Tu and Jiahao Lin and Xiangyu Tian and Qichao Zhang and Linjing Li and Yuqian Fu and Nan Xu and Wei He and Xiangyuan Lan and Dongmei Jiang and Dongbin Zhao},
  journal={arXiv preprint arXiv:2503.12854},
  year={2025}
}