
Reward Models in Deep Reinforcement Learning: A Survey

18 June 2025
Rui Yu
Shenghua Wan
Yucen Wang
Chen-Xiao Gao
Le Gan
Zongzhang Zhang
De-Chuan Zhan
Abstract

In reinforcement learning (RL), agents continually interact with the environment and use the feedback to refine their behavior. To guide policy optimization, reward models are introduced as proxies for the desired objectives, such that when the agent maximizes the accumulated reward, it also fulfills the task designer's intentions. Recently, significant attention from both academic and industrial researchers has focused on developing reward models that not only align closely with the true objectives but also facilitate policy optimization. In this survey, we provide a comprehensive review of reward modeling techniques within the deep RL literature. We begin by outlining the background and preliminaries in reward modeling. Next, we present an overview of recent reward modeling approaches, categorizing them based on the source, the mechanism, and the learning paradigm. Building on this understanding, we discuss various applications of these reward modeling techniques and review methods for evaluating reward models. Finally, we conclude by highlighting promising research directions in reward modeling. Altogether, this survey includes both established and emerging methods, filling the gap left by the absence of a systematic review of reward models in the current literature.
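
To make the proxy role of a reward model concrete, a standard formulation (a generic sketch using common RL notation, not a formulation taken from this survey) replaces the true task reward with a learned model r̂_θ inside the usual discounted-return objective; when r̂_θ is trained from pairwise trajectory comparisons, a Bradley-Terry style likelihood is one common choice. The symbols below (policy π, discount γ, trajectories τ, states s_t, actions a_t) are assumed standard notation for illustration only.

% Policy optimization against a learned reward model \hat{r}_\theta,
% which acts as a proxy for the task designer's true objective:
J(\pi) \;=\; \mathbb{E}_{\tau \sim \pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, \hat{r}_\theta(s_t, a_t) \right]

% One common way to fit \hat{r}_\theta from preferences over trajectory
% pairs (Bradley-Terry model), maximized over labeled comparisons
% \tau^{1} \succ \tau^{2}:
P(\tau^{1} \succ \tau^{2}) \;=\;
  \frac{\exp\!\left( \sum_{t} \hat{r}_\theta(s^{1}_t, a^{1}_t) \right)}
       {\exp\!\left( \sum_{t} \hat{r}_\theta(s^{1}_t, a^{1}_t) \right)
        + \exp\!\left( \sum_{t} \hat{r}_\theta(s^{2}_t, a^{2}_t) \right)}

Under this view, the quality of the proxy matters twice: a reward model misaligned with the true objective leads the agent to fulfill the wrong intentions even when optimization succeeds, while a poorly shaped one can make policy optimization itself harder.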

@article{yu2025_2506.15421,
  title={Reward Models in Deep Reinforcement Learning: A Survey},
  author={Rui Yu and Shenghua Wan and Yucen Wang and Chen-Xiao Gao and Le Gan and Zongzhang Zhang and De-Chuan Zhan},
  journal={arXiv preprint arXiv:2506.15421},
  year={2025}
}