v2 (latest)
Sparse-Reg: Improving Sample Complexity in Offline Reinforcement Learning using Sparsity
- OffRL

Main: 11 pages
Figures: 11
Bibliography: 6 pages
Tables: 7
Appendix: 1 page
Abstract
In this paper, we investigate the use of small datasets in the context of offline reinforcement learning (RL). While common offline RL benchmarks employ datasets with over a million data points, many practical offline RL applications must rely on considerably smaller datasets. We show that offline RL algorithms can overfit on small datasets, resulting in poor performance. To address this challenge, we introduce "Sparse-Reg", a sparsity-based regularization technique that mitigates overfitting in offline reinforcement learning, enabling effective learning in limited-data settings and outperforming state-of-the-art baselines on continuous control tasks.
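The abstract describes Sparse-Reg only at a high level (a sparsity-based regularizer for offline RL). As a rough illustration of what such a regularizer can look like in practice, the sketch below adds an L1 penalty, a standard sparsity-inducing term, to a critic loss. The function name, coefficient, network sizes, and placeholder Bellman targets are illustrative assumptions, not the paper's actual formulation.

import torch
import torch.nn as nn

def l1_sparsity_penalty(model: nn.Module, coeff: float = 1e-3) -> torch.Tensor:
    # Sparsity-inducing L1 penalty summed over all trainable parameters.
    # (Illustrative stand-in for a sparsity regularizer; not the paper's exact method.)
    return coeff * sum(p.abs().sum() for p in model.parameters())

# Hypothetical usage in a standard offline critic update on a small dataset:
critic = nn.Sequential(nn.Linear(23, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(critic.parameters(), lr=3e-4)

state_action = torch.randn(256, 23)   # batch of state-action pairs from the offline dataset
td_target = torch.randn(256, 1)       # placeholder Bellman targets
td_loss = nn.functional.mse_loss(critic(state_action), td_target)
loss = td_loss + l1_sparsity_penalty(critic)   # regularized objective
optimizer.zero_grad()
loss.backward()
optimizer.step()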
@article{arnob2025_2506.17155,
  title={Sparse-Reg: Improving Sample Complexity in Offline Reinforcement Learning using Sparsity},
  author={Samin Yeasar Arnob and Scott Fujimoto and Doina Precup},
  journal={arXiv preprint arXiv:2506.17155},
  year={2025}
}