Domain Adaptation for Offline Reinforcement Learning with Limited Samples

22 August 2024
Weiqin Chen
Sandipan Mishra
Santiago Paternain
    OffRL
Abstract

Offline reinforcement learning (RL) learns effective policies from a static target dataset. Despite the strong performance of state-of-the-art offline RL algorithms, that performance relies on the quality and size of the target dataset, and it degrades when only limited samples are available in the target dataset, which is often the case in real-world applications. To address this issue, domain adaptation, which leverages auxiliary samples from related source datasets (such as simulators), can be beneficial. However, establishing the optimal way to trade off the source and target datasets while ensuring provable theoretical guarantees remains an open challenge. To the best of our knowledge, this paper proposes the first framework that theoretically explores the impact of the weights assigned to each dataset on the performance of offline RL. In particular, we establish performance bounds and the existence of an optimal weight, which can be computed in closed form under simplifying assumptions. We also provide algorithmic guarantees in terms of convergence to a neighborhood of the optimum. Notably, these results depend on the quality of the source dataset and the number of samples from the target dataset. Our empirical results on the well-known Procgen benchmark substantiate our theoretical contributions.
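
To make the dataset-weighting idea concrete, the following is a minimal, hypothetical Python sketch of a convex combination of target and source losses governed by a single weight. All names (td_loss, weighted_loss, lam, the toy transition batches) are illustrative assumptions, not the paper's implementation, and the grid search merely visualizes the trade-off rather than the closed-form optimal weight derived in the paper.

import numpy as np

def td_loss(q, batch, gamma=0.99):
    # Mean squared TD error of a tabular Q-table on a batch of transitions.
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    targets = r + gamma * q[s_next].max(axis=1)
    return np.mean((q[s, a] - targets) ** 2)

def weighted_loss(q, target_batch, source_batch, lam):
    # Convex combination of target and source losses, weighted by lam in [0, 1].
    return lam * td_loss(q, target_batch) + (1.0 - lam) * td_loss(q, source_batch)

# Toy data: random transitions standing in for the two offline datasets.
rng = np.random.default_rng(0)
def random_batch(n, n_states=10, n_actions=4):
    return {
        "s": rng.integers(0, n_states, n),
        "a": rng.integers(0, n_actions, n),
        "r": rng.normal(size=n),
        "s_next": rng.integers(0, n_states, n),
    }

q = rng.normal(size=(10, 4))
target_batch = random_batch(20)    # limited target samples
source_batch = random_batch(500)   # abundant auxiliary source samples

# Sweep the dataset weight to illustrate the trade-off the paper analyzes.
for lam in np.linspace(0.0, 1.0, 5):
    print(f"lam={lam:.2f}  weighted loss={weighted_loss(q, target_batch, source_batch, lam):.3f}")

In this sketch, lam = 1 ignores the source dataset entirely (pure offline RL on the limited target data), while lam = 0 trains only on the source data; the paper's analysis concerns how the best intermediate weight depends on source quality and target sample size.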

@article{chen2025_2408.12136,
  title={Domain Adaptation for Offline Reinforcement Learning with Limited Samples},
  author={Weiqin Chen and Sandipan Mishra and Santiago Paternain},
  journal={arXiv preprint arXiv:2408.12136},
  year={2025}
}