arXiv:2305.16877 (v4, latest)

Distributional Reinforcement Learning with Dual Expectile-Quantile Regression

26 May 2023
Sami Jullien
Romain Deffayet
J. Renders
Paul T. Groth
Maarten de Rijke
OOD
Main: 8 pages · Bibliography: 2 pages · Appendix: 4 pages · 4 figures · 3 tables
Abstract

Successful applications of distributional reinforcement learning with quantile regression prompt a natural question: can we use other statistics to represent the distribution of returns? In particular, expectile regression is known to be more efficient than quantile regression for approximating distributions, especially on extreme values, and, by providing a straightforward estimator of the mean, it is a natural candidate for reinforcement learning. Prior work has answered this question positively for expectiles, with the major caveat that expensive computations are required to ensure convergence. In this work, we propose a dual expectile-quantile approach that resolves the shortcomings of prior work while leveraging the complementary properties of expectiles and quantiles. Our method outperforms both quantile-based and expectile-based baselines on the MuJoCo continuous-control benchmark.
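To make the contrast concrete, the sketch below fits a single quantile and a single expectile of a sample of returns by gradient descent on their respective asymmetric losses. This is only a minimal illustration of the two statistics the abstract compares, not the paper's algorithm; the learning rate, step count, and the synthetic Gaussian "returns" are arbitrary choices for the demo.

```python
import numpy as np

def fit_quantile(x, tau, lr=0.05, steps=4000):
    """Fit the tau-quantile of a sample by subgradient descent on the
    pinball loss L_tau(u) = u * (tau - 1[u < 0]), with u = x - theta."""
    theta = x.mean()
    for _ in range(steps):
        u = x - theta
        grad = -(tau - (u < 0)).mean()  # subgradient w.r.t. theta
        theta -= lr * grad
    return theta

def fit_expectile(x, tau, lr=0.05, steps=4000):
    """Fit the tau-expectile by gradient descent on the asymmetric
    squared loss L_tau(u) = |tau - 1[u < 0]| * u**2. At tau = 0.5 this
    is ordinary least squares, so the minimiser is the sample mean:
    the 'straightforward estimator of the mean' the abstract mentions."""
    theta = x.mean()
    for _ in range(steps):
        u = x - theta
        grad = (-2.0 * np.abs(tau - (u < 0)) * u).mean()
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=50_000)  # synthetic 'returns'

print(fit_quantile(x, 0.9), np.quantile(x, 0.9))  # should roughly agree
print(fit_expectile(x, 0.5), x.mean())            # 0.5-expectile == mean
```

Quantile regression recovers arbitrary fractiles of the return distribution but gives no direct handle on the mean, while expectile regression has a smooth loss and yields the mean exactly at tau = 0.5; this is the complementarity the abstract alludes to.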
