Leveraging Offline Data in Online Reinforcement Learning

9 November 2022
Andrew Wagenmaker, Aldo Pacchiano
Topics: OffRL, OnRL
arXiv: 2211.04974

Papers citing "Leveraging Offline Data in Online Reinforcement Learning"

17 papers shown
  • SIMPLEMIX: Frustratingly Simple Mixing of Off- and On-policy Data in Language Model Preference Learning (05 May 2025)
    Tianjian Li, Daniel Khashabi

  • On The Statistical Complexity of Offline Decision-Making (10 Jan 2025)
    Thanh Nguyen-Tang, R. Arora
    Topics: OffRL

  • Leveraging Unlabeled Data Sharing through Kernel Function Approximation in Offline Reinforcement Learning (22 Aug 2024)
    Yen-Ru Lai, Fu-Chieh Chang, Pei-Yuan Wu
    Topics: OffRL

  • H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps (22 Sep 2023)
    Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan
    Topics: OffRL, OnRL, AI4CE

  • Optimal Exploration for Model-Based RL in Nonlinear Systems (15 Jun 2023)
    Andrew Wagenmaker, Guanya Shi, Kevin G. Jamieson

  • Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning (09 Mar 2023)
    Mitsuhiko Nakamoto, Yuexiang Zhai, Anika Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
    Topics: OffRL, OnRL

  • (Re)²H2O: Autonomous Driving Scenario Generation via Reversely Regularized Hybrid Offline-and-Online Reinforcement Learning (27 Feb 2023)
    Haoyi Niu, Kun Ren, Yi Tian Xu, Ziyuan Yang, Yi-Hsin Lin, Yan Zhang, Jianming Hu
    Topics: OffRL

  • Robust Knowledge Transfer in Tiered Reinforcement Learning (10 Feb 2023)
    Jiawei Huang, Niao He
    Topics: OffRL

  • Efficient Online Reinforcement Learning with Offline Data (06 Feb 2023)
    Philip J. Ball, Laura M. Smith, Ilya Kostrikov, Sergey Levine
    Topics: OffRL, OnRL

  • Transfer Learning for Contextual Multi-armed Bandits (22 Nov 2022)
    Changxiao Cai, T. Tony Cai, Hongzhe Li

  • Artificial Replay: A Meta-Algorithm for Harnessing Historical Data in Bandits (30 Sep 2022)
    Siddhartha Banerjee, Sean R. Sinclair, Milind Tambe, Lily Xu, Chao Yu
    Topics: AI4TS

  • First-Order Regret in Reinforcement Learning with Linear Function Approximation: A Robust Estimation Approach (07 Dec 2021)
    Andrew Wagenmaker, Yifang Chen, Max Simchowitz, S. Du, Kevin G. Jamieson

  • Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs (16 Oct 2021)
    Naman Agarwal, Syomantak Chaudhuri, Prateek Jain, Dheeraj M. Nagaraj, Praneeth Netrapalli
    Topics: OffRL

  • Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage (13 Jul 2021)
    Masatoshi Uehara, Wen Sun
    Topics: OffRL

  • Zero-Shot Text-to-Image Generation (24 Feb 2021)
    Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
    Topics: VLM

  • Safety Verification of Model Based Reinforcement Learning Controllers (21 Oct 2020)
    Akshita Gupta, Inseok Hwang

  • Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems (04 May 2020)
    Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
    Topics: OffRL, GP