
Provable Sim-to-Real Transfer via Offline Domain Randomization

Main: 9 pages
Bibliography: 3 pages
3 tables
Appendix: 17 pages
Abstract

Reinforcement-learning agents often struggle when deployed from simulation to the real world. A dominant strategy for reducing the sim-to-real gap is domain randomization (DR), which trains the policy across many simulators produced by sampling dynamics parameters; however, standard DR ignores offline data already available from the real system. We study offline domain randomization (ODR), which first fits a distribution over simulator parameters to an offline dataset. While a growing body of empirical work reports substantial gains with algorithms such as DROPO, the theoretical foundations of ODR remain largely unexplored. In this work, we (i) formalize ODR as maximum-likelihood estimation over a parametric simulator family, (ii) prove consistency of this estimator under mild regularity and identifiability conditions, showing that it converges to the true dynamics as the dataset grows, (iii) derive gap bounds showing that ODR's sim-to-real error is up to an O(M) factor tighter than uniform DR in the finite-simulator case (with analogous gains in the continuous setting), and (iv) introduce E-DROPO, a new version of DROPO that adds an entropy bonus to prevent variance collapse, yielding broader randomization and more robust zero-shot transfer in practice.
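
As a concrete reading of point (i): given an offline dataset of real transitions and a parametric simulator family, the ODR estimator plausibly takes the following maximum-likelihood form (notation is ours, a sketch under assumed definitions rather than the paper's exact statement):

\hat{\phi}_N \in \arg\max_{\phi} \; \frac{1}{N} \sum_{i=1}^{N} \log \, \mathbb{E}_{\xi \sim p_{\phi}}\!\left[ P_{\xi}(s'_i \mid s_i, a_i) \right],

where D = {(s_i, a_i, s'_i)}_{i=1}^{N} is the offline dataset, {P_ξ} is the simulator family indexed by dynamics parameters ξ, and p_φ is the fitted distribution over those parameters. Point (ii) then asserts that, under identifiability, \hat{\phi}_N concentrates on the true dynamics as N grows.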
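For point (iv), the role of the entropy bonus can be illustrated with a minimal sketch, assuming a diagonal Gaussian distribution over dynamics parameters; the helper log_likelihood and the weight lambda_ent are hypothetical names for illustration, not from the paper:

import numpy as np

def gaussian_entropy(sigma):
    # Differential entropy of N(mu, diag(sigma^2)): 0.5 * sum(log(2*pi*e*sigma^2)).
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e * np.square(sigma)))

def edropo_objective(mu, sigma, dataset, log_likelihood, lambda_ent=0.1):
    # Fit objective: likelihood of the offline real transitions under the
    # randomized simulator, plus an entropy bonus. Without the bonus, the
    # optimizer can drive sigma -> 0 (variance collapse), which removes the
    # randomization that makes the transferred policy robust.
    return log_likelihood(mu, sigma, dataset) + lambda_ent * gaussian_entropy(sigma)

Maximizing this objective keeps sigma bounded away from zero, which is one way to obtain the "broader randomization" the abstract refers to.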

@article{fickinger2025_2506.10133,
  title={Provable Sim-to-Real Transfer via Offline Domain Randomization},
  author={Arnaud Fickinger and Abderrahim Bendahi and Stuart Russell},
  journal={arXiv preprint arXiv:2506.10133},
  year={2025}
}