Earlier Tokens Contribute More: Learning Direct Preference Optimization From Temporal Decay Perspective

21 February 2025
Ruichen Shao
Bei Li
Gangao Liu
Yang Chen
Xiang Zhou
Jingang Wang
Xunliang Cai
Peng Li
Abstract

Direct Preference Optimization (DPO) has gained attention as an efficient alternative to reinforcement learning from human feedback (RLHF) for aligning large language models (LLMs) with human preferences. Despite its advantages, DPO suffers from a length bias, generating responses longer than those from the reference model. Existing solutions like SimPO and SamPO address this issue but treat reward contributions uniformly across the sequence, overlooking temporal dynamics. To address this, we propose an enhanced preference optimization method that incorporates a temporal decay factor controlled by a gamma parameter. This dynamic weighting mechanism adjusts the influence of each reward based on its position in the sequence, prioritizing earlier tokens that are more critical for alignment. By adaptively focusing on more relevant feedback, our approach mitigates overfitting to less pertinent data and remains responsive to evolving human preferences. Experimental results on several benchmarks show that our approach consistently outperforms vanilla DPO by 5.9-8.8 points on AlpacaEval 2 and 3.3-9.7 points on Arena-Hard across different model architectures and sizes. Furthermore, additional experiments on mathematical and reasoning benchmarks (MMLU, GSM8K, and MATH) confirm that our method enhances performance without compromising general capabilities. Our codebase is available at \url{this https URL}.
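The abstract describes the mechanism only at a high level: each token's contribution to the DPO reward is reweighted by a decay factor controlled by a gamma parameter, so earlier tokens carry more weight. The PyTorch sketch below illustrates that idea on top of the standard DPO objective. The function name, the gamma**t schedule, and the per-sequence weight normalization are illustrative assumptions for exposition, not the paper's exact formulation.

# Hedged sketch: the abstract does not give the exact loss, so the decay
# schedule (gamma**t) and the normalization below are illustrative assumptions.
import torch
import torch.nn.functional as F

def temporal_decay_dpo_loss(
    policy_chosen_logps,    # (B, T) per-token log-probs of chosen responses under the policy
    policy_rejected_logps,  # (B, T) per-token log-probs of rejected responses under the policy
    ref_chosen_logps,       # (B, T) per-token log-probs of chosen responses under the reference model
    ref_rejected_logps,     # (B, T) per-token log-probs of rejected responses under the reference model
    chosen_mask,            # (B, T) 1 for real response tokens, 0 for padding
    rejected_mask,          # (B, T)
    beta=0.1,
    gamma=0.95,             # temporal decay: position t gets weight gamma**t, so earlier tokens count more
):
    def weighted_logratio(policy_lp, ref_lp, mask):
        T = policy_lp.size(1)
        # gamma**t for t = 0..T-1; earlier positions receive larger weights when gamma < 1
        decay = gamma ** torch.arange(T, device=policy_lp.device, dtype=policy_lp.dtype)
        per_token = (policy_lp - ref_lp) * mask
        # normalize the weights over real tokens (one possible design choice)
        weights = decay * mask
        weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)
        return (per_token * weights).sum(dim=1)

    chosen_rewards = weighted_logratio(policy_chosen_logps, ref_chosen_logps, chosen_mask)
    rejected_rewards = weighted_logratio(policy_rejected_logps, ref_rejected_logps, rejected_mask)
    # standard Bradley-Terry style DPO objective on the decay-weighted rewards
    return -F.logsigmoid(beta * (chosen_rewards - rejected_rewards)).mean()

In this sketch, setting gamma = 1 makes the weights a uniform average over response tokens (a length-normalized reward), so gamma < 1 is what shifts emphasis toward earlier positions.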

View on arXiv
@article{shao2025_2502.14340,
  title={Earlier Tokens Contribute More: Learning Direct Preference Optimization From Temporal Decay Perspective},
  author={Ruichen Shao and Bei Li and Gangao Liu and Yang Chen and Xiang Zhou and Jingang Wang and Xunliang Cai and Peng Li},
  journal={arXiv preprint arXiv:2502.14340},
  year={2025}
}