
Reachability Weighted Offline Goal-conditioned Resampling

Main: 10 pages, 7 figures, 2 tables. Bibliography: 4 pages.
Abstract

Offline goal-conditioned reinforcement learning (RL) relies on fixed datasets in which many potential goals share the same state and action spaces, yet these goals are not explicitly represented in the collected trajectories. To learn a generalizable goal-conditioned policy with dynamic programming methods such as Q-learning, it is common to sample goals and state-action pairs uniformly. Uniform sampling, however, requires an intractably large dataset to cover all possible combinations and creates many unreachable state-goal-action pairs that degrade policy performance. Our key insight is that sampling should favor transitions that enable goal achievement. To this end, we propose Reachability Weighted Sampling (RWS). RWS uses a reachability classifier trained via positive-unlabeled (PU) learning on goal-conditioned state-action values. The classifier maps these values to a reachability score, which is then used as a sampling priority. RWS is a plug-and-play module that integrates seamlessly with standard offline RL algorithms. Experiments on six complex simulated robotic manipulation tasks, including those with a robot arm and a dexterous hand, show that RWS significantly improves performance. In one notable case, performance on the HandBlock-Z task improved by nearly 50 percent relative to the baseline. These results demonstrate the effectiveness of reachability-weighted sampling.
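
The abstract describes RWS as a reachability classifier, trained with PU learning on goal-conditioned Q-values, whose output score becomes a sampling priority. The sketch below is a minimal, illustrative Python rendering of that idea under stated assumptions: the one-dimensional logistic classifier, the simplified PU-style weighting, and helper names such as pu_reachability_scores and rws_sample are assumptions for illustration, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def pu_reachability_scores(q_values, positive_mask, n_iters=200, lr=0.1):
    """Fit a 1-D logistic classifier on goal-conditioned Q-values with
    positive-unlabeled labels: transitions whose relabeled goal was actually
    reached count as positives, all other sampled goals stay unlabeled.
    Returns a reachability score in (0, 1) per transition."""
    w, b = 1.0, 0.0
    y = positive_mask.astype(np.float64)      # 1 = known reachable, 0 = unlabeled
    pi_p = max(y.mean(), 1e-3)                # assumed class prior of positives
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-(w * q_values + b)))
        # Simplified PU-style weighting: unlabeled samples are down-weighted by
        # the assumed positive class prior (illustrative, not the exact PU risk).
        grad = (p - y) * np.where(y == 1.0, 1.0, pi_p)
        w -= lr * np.mean(grad * q_values)
        b -= lr * np.mean(grad)
    return 1.0 / (1.0 + np.exp(-(w * q_values + b)))

def rws_sample(batch_size, q_values, positive_mask, temperature=1.0):
    """Draw transition indices with probability proportional to the
    (temperature-scaled) reachability score -- a plug-in replacement for
    uniform minibatch sampling in an offline goal-conditioned RL loop."""
    scores = pu_reachability_scores(q_values, positive_mask) ** temperature
    probs = scores / scores.sum()
    return rng.choice(len(q_values), size=batch_size, p=probs)

# Toy usage: 1000 relabeled (state, action, goal) tuples with synthetic Q-values.
q = rng.normal(size=1000)
reached = q > 0.5                             # pretend high-Q goals were reached
idx = rws_sample(batch_size=256, q_values=q, positive_mask=reached)
print(idx[:10])

In practice the Q-values would come from the goal-conditioned critic being trained, and the weighted sampler would replace the uniform replay sampler of the underlying offline RL algorithm.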

@article{yang2025_2506.02577,
  title={Reachability Weighted Offline Goal-conditioned Resampling},
  author={Wenyan Yang and Joni Pajarinen},
  journal={arXiv preprint arXiv:2506.02577},
  year={2025}
}