Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning

23 February 2022
Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang
OffRL
ArXiv · PDF · HTML

Papers citing "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning"

50 / 100 papers shown
Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization
Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan · OffRL · 33 / 23 / 0 · 21 Jul 2023
Bayesian Safe Policy Learning with Chance Constrained Optimization: Application to Military Security Assessment during the Vietnam War
Zeyang Jia, Eli Ben-Michael, Kosuke Imai · 32 / 4 / 0 · 17 Jul 2023
Offline Reinforcement Learning with Imbalanced Datasets
Li Jiang, Sijie Cheng, Jielin Qiu, Haoran Xu, Wai Kin Victor Chan, Zhao Ding · OffRL · 37 / 3 / 0 · 06 Jul 2023
Prioritized Trajectory Replay: A Replay Memory for Data-driven Reinforcement Learning
Jinyi Liu, Y. Ma, Jianye Hao, Yujing Hu, Yan Zheng, Tangjie Lv, Changjie Fan · OffRL · 44 / 2 / 0 · 27 Jun 2023
Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization
Jinxin Liu, Hongyin Zhang, Zifeng Zhuang, Yachen Kang, Donglin Wang, Bin Wang · OffRL · 44 / 8 / 0 · 26 Jun 2023
Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning
Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, Donglin Wang · OffRL · 25 / 16 / 0 · 22 Jun 2023
Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization
Xiangsen Wang, Xianyuan Zhan · OffRL · 23 / 5 / 0 · 15 Jun 2023
A Simple Unified Uncertainty-Guided Framework for Offline-to-Online Reinforcement Learning
Siyuan Guo, Yanchao Sun, Jifeng Hu, Sili Huang, Hechang Chen, Haiyin Piao, Lichao Sun, Yi-Ju Chang · OffRL, OnRL · 31 / 7 / 0 · 13 Jun 2023
Improving Offline-to-Online Reinforcement Learning with Q-Ensembles
Kai-Wen Zhao, Yi Ma, Jianye Hao, Jinyi Liu, Yan Zheng, Zhaopeng Meng · OffRL, OnRL · 20 / 12 / 0 · 12 Jun 2023
Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline RL
Peng Cheng, Xianyuan Zhan, Zhihao Wu, Wenjia Zhang, Shoucheng Song, Han Wang, Youfang Lin, Li Jiang · OffRL · 40 / 9 / 0 · 07 Jun 2023
What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL?
Rui Yang, Yong Lin, Xiaoteng Ma, Haotian Hu, Chongjie Zhang, Tong Zhang · OffRL · 29 / 22 / 0 · 30 May 2023
Reinforcement Learning with Human Feedback: Learning Dynamic Choices via Pessimism
Zihao Li, Zhuoran Yang, Mengdi Wang · OffRL · 34 / 55 / 0 · 29 May 2023
PROTO: Iterative Policy Regularized Offline-to-Online Reinforcement Learning
Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, Ya Zhang · OffRL, OnRL · 36 / 19 / 0 · 25 May 2023
Uncertainty-driven Trajectory Truncation for Data Augmentation in Offline Reinforcement Learning
Junjie Zhang, Jiafei Lyu, Xiaoteng Ma, Jiangpeng Yan, Jun Yang, Le Wan, Xiu Li · OffRL · 24 / 5 / 0 · 10 Apr 2023
Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization
Haoran Xu, Li Jiang, Jianxiong Li, Zhuoran Yang, Zhaoran Wang, Victor Chan, Xianyuan Zhan · OffRL · 36 / 73 / 0 · 28 Mar 2023
Uncertainty-Aware Instance Reweighting for Off-Policy Learning
Xiaoying Zhang, Junpu Chen, Hongning Wang, Hong Xie, Yang Liu, John C. S. Lui, Hang Li · OffRL · 80 / 4 / 0 · 11 Mar 2023
The In-Sample Softmax for Offline Reinforcement Learning
Chenjun Xiao, Han Wang, Yangchen Pan, Adam White, Martha White · OffRL · 29 / 26 / 0 · 28 Feb 2023
VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation
Thanh Nguyen-Tang, R. Arora · OffRL · 46 / 5 / 0 · 24 Feb 2023
Behavior Proximal Policy Optimization
Zifeng Zhuang, Kun Lei, Jinxin Liu, Donglin Wang, Yilang Guo · OffRL · 30 / 34 / 0 · 22 Feb 2023
Conservative State Value Estimation for Offline Reinforcement Learning
Liting Chen, Jie Yan, Zhengdao Shao, Lu Wang, Qingwei Lin, Saravan Rajmohan, Thomas Moscibroda, Dongmei Zhang · OffRL · 26 / 6 / 0 · 14 Feb 2023
Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage
Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun · OffRL · 47 / 5 / 0 · 05 Feb 2023
Mind the Gap: Offline Policy Optimization for Imperfect Rewards
Jianxiong Li, Xiao Hu, Haoran Xu, Jingjing Liu, Xianyuan Zhan, Qing-Shan Jia, Ya Zhang · OffRL · 38 / 19 / 0 · 03 Feb 2023
STEEL: Singularity-aware Reinforcement Learning
Xiaohong Chen, Zhengling Qi, Runzhe Wan · OffRL · 27 / 2 / 0 · 30 Jan 2023
Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning
Taylor W. Killian, S. Parbhoo, Marzyeh Ghassemi · OffRL · 20 / 6 / 0 · 13 Jan 2023
Offline Reinforcement Learning with Closed-Form Policy Improvement Operators
Jiachen Li, Edwin Zhang, Ming Yin, Qinxun Bai, Yu-Xiang Wang, William Yang Wang · OffRL · 39 / 15 / 0 · 29 Nov 2022
Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size
Alexander Nikulin, Vladislav Kurenkov, Denis Tarasov, Dmitry Akimov, Sergey Kolesnikov · OffRL · 33 / 14 / 0 · 20 Nov 2022
Offline Reinforcement Learning with Adaptive Behavior Regularization
Yunfan Zhou, Xijun Li, Qingyu Qu · OffRL · 24 / 1 / 0 · 15 Nov 2022
Dual Generator Offline Reinforcement Learning
Q. Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar · OffRL · 31 / 1 / 0 · 02 Nov 2022
Optimizing Pessimism in Dynamic Treatment Regimes: A Bayesian Learning Approach
Yunzhe Zhou, Zhengling Qi, C. Shi, Lexin Li · OffRL · 23 / 8 / 0 · 26 Oct 2022
Robust Offline Reinforcement Learning with Gradient Penalty and Constraint Relaxation
Chengqian Gao, Kelvin Xu, Liu Liu, Deheng Ye, P. Zhao, Zhiqiang Xu · OffRL · 42 / 2 / 0 · 19 Oct 2022
A Policy-Guided Imitation Approach for Offline Reinforcement Learning
Haoran Xu, Li Jiang, Jianxiong Li, Xianyuan Zhan · OffRL · 26 / 62 / 0 · 15 Oct 2022
Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief
Kaiyang Guo, Yunfeng Shao, Yanhui Geng · OffRL · 21 / 23 / 0 · 13 Oct 2022
State Advantage Weighting for Offline RL
Jiafei Lyu, Aicheng Gong, Le Wan, Zongqing Lu, Xiu Li · OffRL · 33 / 9 / 0 · 09 Oct 2022
DCE: Offline Reinforcement Learning With Double Conservative Estimates
Chen Zhao, K. Huang, Chun yuan · OffRL · 35 / 1 / 0 · 27 Sep 2022
A Review of Uncertainty for Deep Reinforcement Learning
Owen Lockwood, Mei Si · 22 / 38 / 0 · 18 Aug 2022
Robust Reinforcement Learning with Distributional Risk-averse formulation
Pierre Clavier, S. Allassonnière, E. L. Pennec · OOD · 39 / 7 / 0 · 14 Jun 2022
Mildly Conservative Q-Learning for Offline Reinforcement Learning
Jiafei Lyu, Xiaoteng Ma, Xiu Li, Zongqing Lu · OffRL · 37 / 103 / 0 · 09 Jun 2022
RORL: Robust Offline Reinforcement Learning via Conservative Smoothing
Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han · OffRL · 32 / 74 / 0 · 06 Jun 2022
When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning
Jianxiong Li, Xianyuan Zhan, Haoran Xu, Xiangyu Zhu, Jingjing Liu, Ya Zhang · OffRL · 35 / 25 / 0 · 23 May 2022
VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning
Che Wang, Xufang Luo, Keith Ross, Dongsheng Li · OffRL · 26 / 49 / 0 · 17 Feb 2022
DNS: Determinantal Point Process Based Neural Network Sampler for Ensemble Reinforcement Learning
Hassam Sheikh, Kizza M Nandyose Frisbee, Mariano Phielipp · 25 / 8 / 0 · 31 Jan 2022
False Correlation Reduction for Offline Reinforcement Learning
Arvindkumar Krishnakumar, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Tianyi Zhou, Judy Hoffman, Jing Jiang · OffRL · 39 / 9 / 0 · 24 Oct 2021
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble
Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song · OffRL · 105 / 262 / 0 · 04 Oct 2021
What Matters in Learning from Offline Human Demonstrations for Robot Manipulation
Ajay Mandlekar, Danfei Xu, J. Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, Roberto Martín-Martín · OffRL · 161 / 475 / 0 · 06 Aug 2021
COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn · OffRL · 219 / 415 / 0 · 16 Feb 2021
Model-free Representation Learning and Exploration in Low-rank MDPs
Aditya Modi, Jinglin Chen, A. Krishnamurthy, Nan Jiang, Alekh Agarwal · OffRL · 102 / 78 / 0 · 14 Feb 2021
Provably Efficient Reinforcement Learning with Linear Function Approximation Under Adaptivity Constraints
Chi Jin, Zhuoran Yang, Zhaoran Wang · OffRL · 122 / 166 / 0 · 06 Jan 2021
EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL
Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, S. Gu · OffRL · 209 / 119 / 0 · 21 Jul 2020
Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu · OffRL, GP · 340 / 1,960 / 0 · 04 May 2020
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani · UQCV, BDL · 285 / 9,145 / 0 · 06 Jun 2015