
Improving TD3-BC: Relaxed Policy Constraint for Offline Learning and Stable Online Fine-Tuning
Alex Beeson, Giovanni Montana
21 November 2022
Topics: OffRL, OnRL

Papers citing "Improving TD3-BC: Relaxed Policy Constraint for Offline Learning and Stable Online Fine-Tuning"

16 / 16 papers shown
1. Dynamic Action Interpolation: A Universal Approach for Accelerating Reinforcement Learning with Expert Guidance
   Wenjun Cao. 26 Apr 2025.
2. Attention-Enhanced Short-Time Wiener Solution for Acoustic Echo Cancellation
   Fei Zhao, Xueliang Zhang. 25 Dec 2024.
3. SelfBC: Self Behavior Cloning for Offline Reinforcement Learning
   Shirong Liu, Chenjia Bai, Zixian Guo, Hao Zhang, Gaurav Sharma, Yang Liu. 04 Aug 2024. Topics: OffRL.
4. Effective Reinforcement Learning Based on Structural Information Principles
   Xianghua Zeng, Hao Peng, Dingli Su, Angsheng Li. 15 Apr 2024.
5. Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning
   Jingyun Yang, Max Sobol Mark, Brandon Vu, Archit Sharma, Jeannette Bohg, Chelsea Finn. 23 Oct 2023. Topics: OffRL, OnRL.
6. Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
   Trevor A. McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey. 09 Oct 2023. Topics: OffRL, OnRL.
7. Improving Offline-to-Online Reinforcement Learning with Q Conditioned State Entropy Exploration
   Ziqi Zhang, Xiao Xiong, Zifeng Zhuang, Jinxin Liu, Donglin Wang. 07 Oct 2023. Topics: OffRL, OnRL.
8. Iteratively Refined Behavior Regularization for Offline Reinforcement Learning
   Xiao Hu, Yi Ma, Chenjun Xiao, Yan Zheng, Zhaopeng Meng. 09 Jun 2023. Topics: OffRL.
9. Revisiting the Minimalist Approach to Offline Reinforcement Learning
   Denis Tarasov, Vladislav Kurenkov, Alexander Nikulin, Sergey Kolesnikov. 16 May 2023. Topics: OffRL.
10. Reduce, Reuse, Recycle: Selective Reincarnation in Multi-Agent Reinforcement Learning
    Claude Formanek, C. Tilbury, Jonathan P. Shock, Kale-ab Tessera, Arnu Pretorius. 31 Mar 2023.
11. Balancing policy constraint and ensemble size in uncertainty-based offline reinforcement learning
    Alex Beeson, Giovanni Montana. 26 Mar 2023. Topics: OffRL.
12. Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning
    Mitsuhiko Nakamoto, Yuexiang Zhai, Anika Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine. 09 Mar 2023. Topics: OffRL, OnRL.
13. Planning with Diffusion for Flexible Behavior Synthesis
    Michael Janner, Yilun Du, J. Tenenbaum, Sergey Levine. 20 May 2022. Topics: DiffM.
14. Offline Reinforcement Learning with Implicit Q-Learning
    Ilya Kostrikov, Ashvin Nair, Sergey Levine. 12 Oct 2021. Topics: OffRL.
15. COMBO: Conservative Offline Model-Based Policy Optimization
    Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn. 16 Feb 2021. Topics: OffRL.
16. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
    Sergey Levine, Aviral Kumar, George Tucker, Justin Fu. 04 May 2020. Topics: OffRL, GP.