CAWR: Corruption-Averse Advantage-Weighted Regression for Robust Policy Optimization

18 June 2025
Ranting Hu
Author Contacts:
rthu22@m.fudan.edu.cn
OffRL
Main: 11 pages · Bibliography: 3 pages · Appendix: 9 pages · 15 figures · 5 tables
Abstract

Offline reinforcement learning (offline RL) algorithms often require additional constraints or penalty terms to address distribution shift, such as adding implicit or explicit policy constraints during policy optimization to reduce the estimation bias of value functions. This paper focuses on a limitation of the Advantage-Weighted Regression family (AWRs): the potential to learn over-conservative policies due to data corruption, specifically poor explorations in suboptimal offline data. We study this from two perspectives: (1) how poor explorations impact the theoretically optimal policy based on KL divergence, and (2) how such poor explorations affect the approximation of the theoretically optimal policy. We prove that this over-conservatism is mainly caused by the sensitivity of the policy-optimization loss function to poor explorations and by the proportion of poor explorations in the offline dataset. To address this concern, we propose Corruption-Averse Advantage-Weighted Regression (CAWR), which incorporates a set of robust loss functions during policy optimization and an advantage-based prioritized experience replay method to filter out poor explorations. Numerical experiments on the D4RL benchmark show that our method can learn superior policies from suboptimal offline data, significantly enhancing the performance of policy optimization.
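The abstract describes two ingredients: replacing the policy-regression loss in AWR with a robust loss, and sampling transitions in proportion to their advantage so that poor explorations are drawn less often. The sketch below illustrates that general idea only; it is not the paper's implementation, and all names (GaussianPolicy, huber, advantage_prioritized_batch, the temperature and beta parameters, the weight clip of 20) are illustrative assumptions. The Huber loss stands in for whatever robust losses CAWR actually uses.

```python
# Minimal sketch of an AWR-style policy update with (i) a robust regression
# loss in place of the usual squared error and (ii) advantage-based
# prioritized sampling. Hypothetical names and hyperparameters throughout;
# this is NOT the CAWR reference implementation.

import torch
import torch.nn as nn


def huber(residual: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    """Huber loss: quadratic near zero, linear in the tails (one possible robust loss)."""
    abs_r = residual.abs()
    return torch.where(abs_r <= delta,
                       0.5 * residual ** 2,
                       delta * (abs_r - 0.5 * delta))


class GaussianPolicy(nn.Module):
    """Deterministic mean head of a Gaussian policy; variance omitted for brevity."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # predicted mean action


def advantage_prioritized_batch(obs, act, adv, batch_size: int, temperature: float = 1.0):
    """Draw transitions with probability increasing in advantage, so that
    low-advantage samples ('poor explorations') are replayed less often."""
    probs = torch.softmax(temperature * (adv - adv.max()), dim=0)
    idx = torch.multinomial(probs, batch_size, replacement=True)
    return obs[idx], act[idx], adv[idx]


def robust_awr_loss(policy, obs, act, adv, beta: float = 1.0, delta: float = 1.0):
    """Advantage-weighted regression with a Huber term instead of squared error."""
    weights = torch.clamp(torch.exp(adv / beta), max=20.0)  # exponential advantage weights
    pred = policy(obs)
    per_sample = huber(pred - act, delta=delta).sum(dim=-1)  # robust action-regression error
    return (weights * per_sample).mean()
```

In this sketch, a training step would sample a batch with advantage_prioritized_batch, compute robust_awr_loss on it, and take a gradient step on the policy; the linear tails of the Huber term are what keep large residuals from poor explorations from dominating the update.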

@article{hu2025_2506.15654,
  title   = {CAWR: Corruption-Averse Advantage-Weighted Regression for Robust Policy Optimization},
  author  = {Ranting Hu},
  journal = {arXiv preprint arXiv:2506.15654},
  year    = {2025}
}