Energy-Guided Diffusion Sampling for Offline-to-Online Reinforcement Learning (arXiv:2407.12448)

17 July 2024
Authors: Xu-Hui Liu, Tian-Shuo Liu, Shengyi Jiang, Ruifeng Chen, Zhilong Zhang, Xinwei Chen, Yang Yu
Communities: OffRL, OnRL

Papers citing "Energy-Guided Diffusion Sampling for Offline-to-Online Reinforcement Learning"

16 papers

Learning to Cut via Hierarchical Sequence/Set Model for Efficient Mixed-Integer Programming
Authors: Jie Wang, Zhihai Wang, Xijun Li, Yufei Kuang, Zhihao Shi, Fangzhou Zhu, Mingxuan Yuan, Jianguo Zeng, Yongdong Zhang, Feng Wu
19 Apr 2024

Disentangling Policy from Offline Task Representation Learning via Adversarial Data Augmentation
Authors: Chengxing Jia, Fuxiang Zhang, Yi-Chen Li, Chenxiao Gao, Xu-Hui Liu, Lei Yuan, Zongzhang Zhang, Yang Yu
Communities: AAML
12 Mar 2024

World Models via Policy-Guided Trajectory Diffusion
Authors: Marc Rigter, Jun Yamada, Ingmar Posner
13 Dec 2023

Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning
Authors: Shenzhi Wang, Qisen Yang, Jiawei Gao, Matthieu Lin, Hao Chen, Liwei Wu, Ning Jia, Shiji Song, Gao Huang
Communities: OffRL
27 Oct 2023

Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning
Authors: Yi Zhao, Rinu Boney, Alexander Ilin, Arno Solin, Joni Pajarinen
Communities: OffRL, OnRL
25 Oct 2022

Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling
Authors: Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, Jun Zhu
Communities: DiffM, OffRL
29 Sep 2022

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Authors: Alex Nichol, Prafulla Dhariwal, Aditya A. Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen
20 Dec 2021

Offline Reinforcement Learning with Implicit Q-Learning
Authors: Ilya Kostrikov, Ashvin Nair, Sergey Levine
Communities: OffRL
12 Oct 2021

A Minimalist Approach to Offline Reinforcement Learning
Authors: Scott Fujimoto, S. Gu
Communities: OffRL
12 Jun 2021

Improved Denoising Diffusion Probabilistic Models
Authors: Alex Nichol, Prafulla Dhariwal
Communities: DiffM
18 Feb 2021

OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning
Authors: Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum
Communities: OffRL, OnRL
26 Oct 2020

Experience Replay with Likelihood-free Importance Weights
Authors: Samarth Sinha, Jiaming Song, Animesh Garg, Stefano Ermon
Communities: OffRL
23 Jun 2020

AWAC: Accelerating Online Reinforcement Learning with Offline Datasets
Authors: Ashvin Nair, Abhishek Gupta, Murtaza Dalal, Sergey Levine
Communities: OffRL, OnRL
16 Jun 2020

D4RL: Datasets for Deep Data-Driven Reinforcement Learning
Authors: Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, Sergey Levine
Communities: GP, OffRL
15 Apr 2020

When to Trust Your Model: Model-Based Policy Optimization
Authors: Michael Janner, Justin Fu, Marvin Zhang, Sergey Levine
Communities: OffRL
19 Jun 2019

DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections
Authors: Ofir Nachum, Yinlam Chow, Bo Dai, Lihong Li
Communities: OffRL
10 Jun 2019