Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity

Abstract

To integrate into human-centered environments, autonomous agents must learn from and adapt to humans in their native settings. Preference-based reinforcement learning (PbRL) can enable this by learning reward functions from human preferences. However, humans live in a world full of diverse information, most of which is irrelevant to completing any particular task. It then becomes essential that agents learn to focus on the subset of task-relevant state features. To that end, this work proposes R2N (Robust-to-Noise), the first PbRL algorithm that leverages principles of dynamic sparse training to learn robust reward models that can focus on task-relevant features. In experiments with a simulated teacher, we demonstrate that R2N can adapt the sparse connectivity of its neural networks to focus on task-relevant features, enabling R2N to significantly outperform several sparse training and PbRL algorithms across simulated robotic environments.
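
The abstract names two ingredients without giving implementation detail: a reward model learned from human preferences, and dynamic sparse training of that model. As a rough illustration (not the authors' R2N code), the Python sketch below combines the standard Bradley-Terry preference loss common in PbRL with a SET-style drop-and-grow step on a masked reward network; the class names, layer sizes, sparsity level, and update schedule are all illustrative assumptions.

import torch
import torch.nn as nn

class SparseRewardModel(nn.Module):
    """MLP reward model whose weights are gated by binary sparsity masks."""
    def __init__(self, obs_dim, hidden=64, sparsity=0.8):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Linear(obs_dim, hidden),
            nn.Linear(hidden, hidden),
            nn.Linear(hidden, 1),
        ])
        # Random initial masks: roughly (1 - sparsity) of connections active.
        self.masks = [(torch.rand_like(l.weight) > sparsity).float()
                      for l in self.layers]

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = nn.functional.linear(x, layer.weight * self.masks[i], layer.bias)
            if i < len(self.layers) - 1:
                x = torch.relu(x)
        return x

    @torch.no_grad()
    def drop_and_grow(self, frac=0.1):
        # SET-style update: drop the weakest active connections, then
        # regrow the same number at random inactive positions.
        for layer, mask in zip(self.layers, self.masks):
            active = mask.nonzero(as_tuple=False)
            k = max(1, int(frac * len(active)))
            mags = layer.weight.abs()[active[:, 0], active[:, 1]]
            drop = active[mags.argsort()[:k]]           # k smallest magnitudes
            mask[drop[:, 0], drop[:, 1]] = 0.0
            inactive = (mask == 0).nonzero(as_tuple=False)
            grow = inactive[torch.randperm(len(inactive))[:k]]
            mask[grow[:, 0], grow[:, 1]] = 1.0
            layer.weight[grow[:, 0], grow[:, 1]] = 0.0  # regrown weights start at zero

def preference_loss(model, seg_a, seg_b, pref):
    # Bradley-Terry model: P(a preferred over b) = sigmoid(R(a) - R(b)),
    # where R sums predicted per-step rewards over a trajectory segment.
    r_a = model(seg_a).sum(dim=1).squeeze(-1)
    r_b = model(seg_b).sum(dim=1).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(r_a - r_b, pref)

# Synthetic demo: 10-step segments of 6-dim observations, where only
# feature 0 is task-relevant and determines the simulated preferences.
model = SparseRewardModel(obs_dim=6)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
for step in range(200):
    seg_a, seg_b = torch.randn(32, 10, 6), torch.randn(32, 10, 6)
    pref = (seg_a[..., 0].sum(dim=1) > seg_b[..., 0].sum(dim=1)).float()
    loss = preference_loss(model, seg_a, seg_b, pref)
    opt.zero_grad(); loss.backward(); opt.step()
    if (step + 1) % 50 == 0:
        model.drop_and_grow()

In this toy setup only feature 0 of the observation determines the preference labels, mirroring the abstract's setting of a few task-relevant features among many irrelevant ones; a dynamically sparse reward model can reallocate its limited connections toward such features as training proceeds.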

@article{muslimani2025_2406.06495,
  title={Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity},
  author={Calarina Muslimani and Bram Grooten and Deepak Ranganatha Sastry Mamillapalli and Mykola Pechenizkiy and Decebal Constantin Mocanu and Matthew E. Taylor},
  journal={arXiv preprint arXiv:2406.06495},
  year={2025}
}