Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning
arXiv 2107.09645 · 20 July 2021
Denis Yarats
Rob Fergus
A. Lazaric
Lerrel Pinto
OffRL
Papers citing
"Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning"
48 / 248 papers shown
EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model
Yifu Yuan
Jianye Hao
Fei Ni
Yao Mu
Yan Zheng
Yujing Hu
Jinyi Liu
Yingfeng Chen
Changjie Fan
77
12
0
02 Oct 2022
S2P: State-conditioned Image Synthesis for Data Augmentation in Offline Reinforcement Learning
Daesol Cho
D. Shim
H. J. Kim
OffRL
42
11
0
30 Sep 2022
Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels
Sai Rajeswar
Pietro Mazzaglia
Tim Verbelen
Alexandre Piché
Bart Dhoedt
Aaron C. Courville
Alexandre Lacoste
SSL
26
21
0
24 Sep 2022
Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective
Raj Ghugare
Homanga Bharadhwaj
Benjamin Eysenbach
Sergey Levine
Ruslan Salakhutdinov
OffRL
45
25
0
18 Sep 2022
Continuous MDP Homomorphisms and Homomorphic Policy Gradient
S. Rezaei-Shoshtari
Rosie Zhao
Prakash Panangaden
D. Meger
Doina Precup
33
18
0
15 Sep 2022
Concept-modulated model-based offline reinforcement learning for rapid generalization
Nicholas A. Ketz
Praveen K. Pilly
OffRL
24
1
0
07 Sep 2022
Learning Bellman Complete Representations for Offline Policy Evaluation
Jonathan D. Chang
Kaiwen Wang
Nathan Kallus
Wen Sun
OffRL
27
14
0
12 Jul 2022
Stabilizing Off-Policy Deep Reinforcement Learning from Pixels
Edoardo Cetin
Philip J. Ball
Steve Roberts
Oya Celiktutan
30
36
0
03 Jul 2022
Watch and Match: Supercharging Imitation with Regularized Optimal Transport
Siddhant Haldar
Vaibhav Mathur
Denis Yarats
Lerrel Pinto
46
62
0
30 Jun 2022
Masked World Models for Visual Control
Younggyo Seo
Danijar Hafner
Hao Liu
Fangchen Liu
Stephen James
Kimin Lee
Pieter Abbeel
OffRL
87
146
0
28 Jun 2022
DayDreamer: World Models for Physical Robot Learning
Philipp Wu
Alejandro Escontrela
Danijar Hafner
Ken Goldberg
Pieter Abbeel
49
277
0
28 Jun 2022
Behavior Transformers: Cloning k modes with one stone
Nur Muhammad (Mahi) Shafiullah
Zichen Jeff Cui
Ariuntuya Altanzaya
Lerrel Pinto
OffRL
28
221
0
22 Jun 2022
Bootstrapped Transformer for Offline Reinforcement Learning
Kerong Wang
Hanye Zhao
Xufang Luo
Kan Ren
Weinan Zhang
Dongsheng Li
OffRL
16
37
0
17 Jun 2022
Contrastive Learning as Goal-Conditioned Reinforcement Learning
Benjamin Eysenbach
Tianjun Zhang
Ruslan Salakhutdinov
Sergey Levine
SSL
OffRL
25
139
0
15 Jun 2022
Does Self-supervised Learning Really Improve Reinforcement Learning from Pixels?
Xiang Li
Jinghuan Shang
Srijan Das
Michael S. Ryoo
SSL
27
31
0
10 Jun 2022
Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations
Cong Lu
Philip J. Ball
Tim G. J. Rudner
Jack Parker-Holder
Michael A. Osborne
Yee Whye Teh
OffRL
29
52
0
09 Jun 2022
Overcoming the Spectral Bias of Neural Value Approximation
Ge Yang
Anurag Ajay
Pulkit Agrawal
32
25
0
09 Jun 2022
Deep Hierarchical Planning from Pixels
Danijar Hafner
Kuang-Huei Lee
Ian S. Fischer
Pieter Abbeel
34
92
0
08 Jun 2022
On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning
Mandi Zhao
Pieter Abbeel
Stephen James
OffRL
28
33
0
07 Jun 2022
Image Augmentation Based Momentum Memory Intrinsic Reward for Sparse Reward Visual Scenes
Zheng Fang
Biao Zhao
Guizhong Liu
16
2
0
19 May 2022
CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning
Chenyu Sun
Hangwei Qian
C. Miao
OffRL
24
12
0
02 May 2022
Offline Visual Representation Learning for Embodied Navigation
Karmesh Yadav
Ram Ramrakhya
Arjun Majumdar
Vincent-Pierre Berges
Sachit Kuhar
Dhruv Batra
Alexei Baevski
Oleksandr Maksymets
OffRL
SSL
33
72
0
27 Apr 2022
What Matters in Language Conditioned Robotic Imitation Learning over Unstructured Data
Oier Mees
Lukás Hermann
Wolfram Burgard
LM&Ro
30
149
0
13 Apr 2022
Reinforcement Learning with Action-Free Pre-Training from Videos
Younggyo Seo
Kimin Lee
Stephen James
Pieter Abbeel
SSL
OnRL
18
117
0
25 Mar 2022
SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning
Jongjin Park
Younggyo Seo
Jinwoo Shin
Honglak Lee
Pieter Abbeel
Kimin Lee
11
82
0
18 Mar 2022
Vision-Based Manipulators Need to Also See from Their Hands
Kyle Hsu
Moo Jin Kim
Rafael Rafailov
Jiajun Wu
Chelsea Finn
29
44
0
15 Mar 2022
Temporal Difference Learning for Model Predictive Control
Nicklas Hansen
Xiaolong Wang
H. Su
PINN
MU
36
222
0
09 Mar 2022
The Unsurprising Effectiveness of Pre-Trained Vision Models for Control
Simone Parisi
Aravind Rajeswaran
Senthil Purushwalkam
Abhinav Gupta
LM&Ro
34
187
0
07 Mar 2022
DreamingV2: Reinforcement Learning with Discrete World Models without Reconstruction
Masashi Okada
T. Taniguchi
3DV
OffRL
28
23
0
01 Mar 2022
VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning
Che Wang
Xufang Luo
Keith Ross
Dongsheng Li
OffRL
26
49
0
17 Feb 2022
CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery
Michael Laskin
Hao Liu
Xue Bin Peng
Denis Yarats
Aravind Rajeswaran
Pieter Abbeel
SSL
74
65
0
01 Feb 2022
Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control
Aaron J. Havens
Darioush Keivan
Peter M. Seiler
Geir Dullerud
Bin Hu
AAML
17
3
0
03 Jan 2022
Invariance Through Latent Alignment
Takuma Yoneda
Ge Yang
Matthew R. Walter
Bradly C. Stadie
OOD
21
9
0
15 Dec 2021
The Surprising Effectiveness of Representation Learning for Visual Imitation
Jyothish Pari
Nur Muhammad (Mahi) Shafiullah
Sridhar Pandian Arunachalam
Lerrel Pinto
SSL
25
156
0
02 Dec 2021
Maximum Entropy Model-based Reinforcement Learning
Oleg Svidchenko
A. Shpilman
11
5
0
02 Dec 2021
Learning State Representations via Retracing in Reinforcement Learning
Changmin Yu
Dong Li
Jianye Hao
Jun Wang
Neil Burgess
27
7
0
24 Nov 2021
Off-policy Imitation Learning from Visual Inputs
Zhihao Cheng
Li Shen
Dacheng Tao
17
2
0
08 Nov 2021
URLB: Unsupervised Reinforcement Learning Benchmark
Michael Laskin
Denis Yarats
Hao Liu
Kimin Lee
Albert Zhan
Kevin Lu
Catherine Cang
Lerrel Pinto
Pieter Abbeel
SSL
OffRL
30
132
0
28 Oct 2021
DreamerPro: Reconstruction-Free Model-Based Reinforcement Learning with Prototypical Representations
Fei Deng
Ingook Jang
Sungjin Ahn
VLM
29
62
0
27 Oct 2021
Is High Variance Unavoidable in RL? A Case Study in Continuous Control
Johan Bjorck
Carla P. Gomes
Kilian Q. Weinberger
65
23
0
21 Oct 2021
Discovering and Achieving Goals via World Models
Russell Mendonca
Oleh Rybkin
Kostas Daniilidis
Danijar Hafner
Deepak Pathak
27
117
0
18 Oct 2021
StARformer: Transformer with State-Action-Reward Representations for Visual Reinforcement Learning
Jinghuan Shang
Kumara Kahatapitiya
Xiang Li
Michael S. Ryoo
OffRL
35
33
0
12 Oct 2021
Learning Pessimism for Robust and Efficient Off-Policy Reinforcement Learning
Edoardo Cetin
Oya Celiktutan
OffRL
42
16
0
07 Oct 2021
Pixyz: a Python library for developing deep generative models
Masahiro Suzuki
T. Kaneko
Y. Matsuo
AI4CE
23
2
0
28 Jul 2021
Towards Deeper Deep Reinforcement Learning with Spectral Normalization
Johan Bjorck
Carla P. Gomes
Kilian Q. Weinberger
19
23
0
02 Jun 2021
Decoupling Representation Learning from Reinforcement Learning
Adam Stooke
Kimin Lee
Pieter Abbeel
Michael Laskin
SSL
DRL
284
341
0
14 Sep 2020
On the Convergence of the Monte Carlo Exploring Starts Algorithm for Reinforcement Learning
Che Wang
Shuhan Yuan
Kai Shao
Keith Ross
8
12
0
10 Feb 2020
CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity
Aditya Bhatt
Daniel Palenicek
Boris Belousov
Max Argus
Artemij Amiranashvili
Thomas Brox
Jan Peters
29
43
0
14 Feb 2019