Visual Reinforcement Learning with Imagined Goals (arXiv:1807.04742, v2 latest)
12 July 2018
Ashvin Nair, Vitchyr H. Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine
SSL

Papers citing "Visual Reinforcement Learning with Imagined Goals"

Showing 35 of 235 citing papers.
Language as an Abstraction for Hierarchical Deep Reinforcement Learning
Yiding Jiang, S. Gu, Kevin Patrick Murphy, Chelsea Finn
OffRL · 69 · 225 · 0 · 18 Jun 2019
Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards
Gerrit Schoettler, Ashvin Nair, Jianlan Luo, Shikhar Bahl, J. A. Ojea, Eugen Solowjow, Sergey Levine
OffRL · 59 · 192 · 0 · 13 Jun 2019
Goal-conditioned Imitation Learning
Yiming Ding, Carlos Florensa, Mariano Phielipp, Pieter Abbeel
78 · 228 · 0 · 13 Jun 2019
Sub-policy Adaptation for Hierarchical Reinforcement Learning
Alexander C. Li, Carlos Florensa, I. Clavera, Pieter Abbeel
96 · 74 · 0 · 13 Jun 2019
Efficient Exploration via State Marginal Matching
Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, Ruslan Salakhutdinov
147 · 248 · 0 · 12 Jun 2019
Exploration via Hindsight Goal Generation
Zhizhou Ren, Kefan Dong, Yuanshuo Zhou, Qiang Liu, Jian-wei Peng
85 · 90 · 0 · 10 Jun 2019
On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset
Muhammad Waleed Gondal, Manuel Wüthrich, Đorđe Miladinović, Francesco Locatello, M. Breidt, V. Volchkov, J. Akpo, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
OOD · DRL · 125 · 139 · 0 · 07 Jun 2019
On the Fairness of Disentangled Representations
Francesco Locatello, G. Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem
FaML · DRL · 81 · 227 · 0 · 31 May 2019
Unsupervised Model Selection for Variational Disentangled Representation Learning
Sunny Duan, Loic Matthey, Andre Saraiva, Nicholas Watters, Christopher P. Burgess, Alexander Lerchner, I. Higgins
OOD · DRL · 96 · 80 · 0 · 29 May 2019
Maximum Entropy-Regularized Multi-Goal Reinforcement Learning
Rui Zhao, Xudong Sun, Volker Tresp
67 · 83 · 0 · 21 May 2019
Reinforcement Learning without Ground-Truth State
Xingyu Lin, H. Baweja, David Held
OffRL · SSL · 80 · 24 · 0 · 20 May 2019
REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning
Brian Yang, Jesse Zhang, Vitchyr H. Pong, Sergey Levine, Dinesh Jayaraman
77 · 37 · 0 · 17 May 2019
Learning Robotic Manipulation through Visual Planning and Acting
Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar
59 · 116 · 0 · 11 May 2019
Hierarchical Policy Learning is Sensitive to Goal Space Design
Zach Dwiel, Madhavun Candadai, Mariano Phielipp, Arjun K. Bansal
79 · 15 · 0 · 04 May 2019
Disentangling Factors of Variation Using Few Labels
Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem
DRL · CML · CoGe · 103 · 124 · 0 · 03 May 2019
Goal-Directed Behavior under Variational Predictive Coding: Dynamic Organization of Visual Attention and Working Memory
Minju Jung, Takazumi Matsumoto, Jun Tani
69 · 20 · 0 · 12 Mar 2019
Skew-Fit: State-Covering Self-Supervised Reinforcement Learning
Vitchyr H. Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, Sergey Levine
OffRL · SSL · 132 · 277 · 0 · 08 Mar 2019
Learning Latent Plans from Play
Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, P. Sermanet
SSL · LM&Ro · 115 · 408 · 0 · 05 Mar 2019
Discovering Options for Exploration by Minimizing Cover Time
Yuu Jinnai, Jee Won Park, David Abel, George Konidaris
78 · 52 · 0 · 02 Mar 2019
Deep Variational Koopman Models: Inferring Koopman Observations for Uncertainty-Aware Dynamics Modeling and Control
Jeremy Morton, F. Witherden, Mykel J. Kochenderfer
85 · 47 · 0 · 26 Feb 2019
Unsupervised Visuomotor Control through Distributional Planning Networks
Tianhe Yu, Gleb Shevchuk, Dorsa Sadigh, Chelsea Finn
SSL · OffRL · 77 · 42 · 0 · 14 Feb 2019
Preferences Implicit in the State of the World
Rohin Shah, Dmitrii Krasheninnikov, Jordan Alexander, Pieter Abbeel, Anca Dragan
82 · 55 · 0 · 12 Feb 2019
Self-supervised Learning of Image Embedding for Continuous Control
Carlos Florensa, Jonas Degrave, N. Heess, Jost Tobias Springenberg, Martin Riedmiller
SSL · 58 · 53 · 0 · 03 Jan 2019
VMAV-C: A Deep Attention-based Reinforcement Learning Algorithm for Model-based Control
Xingxing Liang, Qi Wang, Yanghe Feng, Zhong Liu, Jincai Huang
65 · 5 · 0 · 24 Dec 2018
Variational Autoencoders Pursue PCA Directions (by Accident)
Michal Rolínek, Dominik Zietlow, Georg Martius
OOD · DRL · 78 · 153 · 0 · 17 Dec 2018
Provably Efficient Maximum Entropy Exploration
Elad Hazan, Sham Kakade, Karan Singh, A. V. Soest
98 · 305 · 0 · 06 Dec 2018
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
OOD · 166 · 1,475 · 0 · 29 Nov 2018
Unsupervised Control Through Non-Parametric Discriminative Rewards
David Warde-Farley, T. Wiele, Tejas D. Kulkarni, Catalin Ionescu, Steven Hansen, Volodymyr Mnih
DRL · OffRL · SSL · 101 · 178 · 0 · 28 Nov 2018
Learning Actionable Representations with Goal-Conditioned Policies
Dibya Ghosh, Abhishek Gupta, Sergey Levine
102 · 110 · 0 · 19 Nov 2018
One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL
T. Paine, Sergio Gomez Colmenarejo, Ziyun Wang, Scott E. Reed, Y. Aytar, ..., Matthew W. Hoffman, Gabriel Barth-Maron, Serkan Cabi, David Budden, Nando de Freitas
OffRL · 81 · 25 · 0 · 11 Oct 2018
Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks
Fabio Pardo, Vitaly Levdik, Petar Kormushev
57 · 4 · 0 · 06 Oct 2018
Time Reversal as Self-Supervision
Suraj Nair, Mohammad Babaeizadeh, Chelsea Finn, Sergey Levine, Vikash Kumar
SSL · 89 · 12 · 0 · 02 Oct 2018
Catastrophic Importance of Catastrophic Forgetting
Albert Ierusalem
CLL · AI4CE · 18 · 2 · 0 · 20 Aug 2018
Automatically Composing Representation Transformations as a Means for Generalization
Michael Chang, Abhishek Gupta, Sergey Levine, Thomas Griffiths
85 · 70 · 0 · 12 Jul 2018
Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning
Sébastien Forestier, Rémy Portelas, Yoan Mollard, Pierre-Yves Oudeyer
99 · 190 · 0 · 07 Aug 2017