CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries

Preference-based reinforcement learning (PbRL) bypasses explicit reward engineering by inferring reward functions from human preference comparisons, enabling better alignment with human intentions. However, humans often struggle to express a clear preference between similar segments, which reduces labeling efficiency and limits PbRL's real-world applicability. To address this, we propose an offline PbRL method, Contrastive LeArning for ResolvIng Ambiguous Feedback (CLARIFY), which learns a trajectory embedding space that incorporates preference information, keeping clearly distinguished segments well separated and thereby facilitating the selection of unambiguous queries. Extensive experiments demonstrate that CLARIFY outperforms baselines under both non-ideal teacher and real human feedback settings. Our approach not only selects more clearly distinguished queries but also learns meaningful trajectory embeddings.
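To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a preference-aware contrastive objective and a distance-based query selector: embeddings of segment pairs that received a clear preference label are pushed apart, ambiguously labeled pairs are pulled together, and new queries are drawn from candidate pairs whose embeddings are farthest apart. The names SegmentEncoder, preference_contrastive_loss, select_queries, and the margin value are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SegmentEncoder(nn.Module):
    """Illustrative encoder: maps a (state, action) segment to a fixed-size embedding."""

    def __init__(self, obs_act_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_act_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (batch, horizon, obs_act_dim); mean-pool over the time axis.
        return self.net(segment).mean(dim=1)


def preference_contrastive_loss(z0, z1, label, margin: float = 1.0):
    """Pairs with a clear preference (label 0 or 1) are pushed at least `margin`
    apart; pairs labeled ambiguous (0.5) are pulled together (assumed labeling scheme)."""
    dist = F.pairwise_distance(z0, z1)
    ambiguous = (label == 0.5).float()
    pull = ambiguous * dist.pow(2)
    push = (1.0 - ambiguous) * F.relu(margin - dist).pow(2)
    return (pull + push).mean()


@torch.no_grad()
def select_queries(encoder, seg_a, seg_b, num_queries: int):
    """Propose the candidate pairs whose embeddings are farthest apart,
    i.e. those a teacher is most likely to label without ambiguity."""
    dist = F.pairwise_distance(encoder(seg_a), encoder(seg_b))
    return torch.topk(dist, k=num_queries).indices

In use, the selected indices would determine which segment pairs are shown to the (human or simulated) teacher in the next feedback round; the actual architecture, loss weighting, and selection rule in CLARIFY may differ.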
@article{mu2025_2506.00388,
  title   = {CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries},
  author  = {Ni Mu and Hao Hu and Xiao Hu and Yiqin Yang and Bo Xu and Qing-Shan Jia},
  journal = {arXiv preprint arXiv:2506.00388},
  year    = {2025}
}