Model Generation with Provable Coverability for Offline Reinforcement Learning
arXiv:2206.00316 · 1 June 2022
Chengxing Jia, Hao Yin, Chenxiao Gao, Tian Xu, Lei Yuan, Zongzhang Zhang, Yang Yu
OffRL

Papers citing "Model Generation with Provable Coverability for Offline Reinforcement Learning"

5 papers shown:

COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
OffRL · 16 Feb 2021

Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation
Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, J. Krichmar
OOD · 10 Feb 2021

Model-based Policy Optimization with Unsupervised Model Adaptation
Jian Shen, Han Zhao, Weinan Zhang, Yong Yu
19 Oct 2020

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
OffRL, GP · 04 May 2020

Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task
Stephen James, Andrew J. Davison, Edward Johns
07 Jul 2017