
Offline Reinforcement Learning of High-Quality Behaviors Under Robust Style Alignment

Mathieu Petitbois
Rémy Portelas
Sylvain Lamprier
Main: 10 pages · Bibliography: 5 pages · Appendix: 18 pages · 22 figures · 5 tables
Abstract

We study offline reinforcement learning of style-conditioned policies using explicit style supervision via subtrajectory labeling functions. In this setting, aligning style with high task performance is particularly challenging due to distribution shift and inherent conflicts between style and reward. Existing methods, despite introducing numerous definitions of style, often fail to reconcile these objectives effectively. To address these challenges, we propose a unified definition of behavior style and instantiate it into a practical framework. Building on this, we introduce Style-Conditioned Implicit Q-Learning (SCIQL), which leverages offline goal-conditioned RL techniques, such as hindsight relabeling and value learning, and combines them with a new Gated Advantage Weighted Regression mechanism to efficiently optimize task performance while preserving style alignment. Experiments demonstrate that SCIQL achieves superior performance on both objectives compared to prior offline methods. Code, datasets, and visuals are available at: this https URL.
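As an illustration only (the abstract does not spell out the update rule), one plausible form of a gated advantage-weighted regression objective is sketched below in PyTorch. The gate `style_match`, the temperature `beta`, and the weight cap `w_max` are assumptions for the sketch, not the paper's definitions; the critic and value function are taken to follow the IQL setup the abstract references.

```python
# Hypothetical sketch of a gated advantage-weighted regression (AWR) update,
# assuming an IQL-style critic Q(s, a, z) and value function V(s, z) conditioned
# on a style variable z. This is NOT the authors' exact formulation.
import torch

def gated_awr_loss(log_probs, q_values, v_values, style_match,
                   beta=3.0, w_max=100.0):
    """Advantage-weighted behavior cloning, gated by style agreement.

    log_probs:   log pi(a | s, z) for dataset actions under the
                 style-conditioned policy
    q_values:    Q(s, a, z) from the learned critic
    v_values:    V(s, z) from the learned value function (as in IQL)
    style_match: 1.0 where the subtrajectory's style label matches the
                 conditioning style z (e.g., after hindsight relabeling),
                 else 0.0 -- an assumed gating criterion
    """
    advantage = q_values - v_values
    # Standard AWR exponential weighting, clipped for numerical stability.
    weights = torch.clamp(torch.exp(advantage / beta), max=w_max)
    # Gate: only style-consistent transitions receive advantage-weighted
    # updates, so reward maximization cannot override style alignment.
    gated = weights * style_match
    return -(gated.detach() * log_probs).mean()
```

Under this reading, the gate decouples the two objectives: style alignment is enforced by which transitions are allowed to contribute, while the advantage weight ranks the surviving transitions by task performance.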
