Constrained Policy Optimization with Explicit Behavior Density for Offline Reinforcement Learning
arXiv:2301.12130 · 28 January 2023
Jing Zhang, Chi Zhang, Wenjia Wang, Bing-Yi Jing
OffRL

Papers citing "Constrained Policy Optimization with Explicit Behavior Density for Offline Reinforcement Learning" (11 papers shown)

Policy Constraint by Only Support Constraint for Offline Reinforcement Learning
Yunkai Gao, Jiaming Guo, Fan Wu, Rui Zhang · OffRL · 07 Mar 2025

Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model
Jing Zhang, Linjiajie Fang, Kexin Shi, Wenjia Wang, Bing-Yi Jing · OffRL · 27 Oct 2024

DiffPoGAN: Diffusion Policies with Generative Adversarial Networks for Offline Reinforcement Learning
Xuemin Hu, Shen Li, Yingfen Xu, Bo Tang, Long Chen · 13 Jun 2024

FlowPG: Action-constrained Policy Gradient with Normalizing Flows
J. Brahmanage, Jiajing Ling, Akshat Kumar · 07 Feb 2024

Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling
Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, Jun Zhu · DiffM, OffRL · 29 Sep 2022

Planning with Diffusion for Flexible Behavior Synthesis
Michael Janner, Yilun Du, J. Tenenbaum, Sergey Levine · DiffM · 20 May 2022

Offline Reinforcement Learning with Implicit Q-Learning
Ilya Kostrikov, Ashvin Nair, Sergey Levine · OffRL · 12 Oct 2021

Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song · OffRL · 04 Oct 2021

EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL
Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, S. Gu · OffRL · 21 Jul 2020

Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics
Arsenii Kuznetsov, Pavel Shvechikov, Alexander Grishin, Dmitry Vetrov · 08 May 2020

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu · OffRL, GP · 04 May 2020