Learning Latent Dynamics for Planning from Pixels

12 November 2018
Danijar Hafner
Timothy Lillicrap
Ian S. Fischer
Ruben Villegas
David R Ha
Honglak Lee
James Davidson
    BDL

Papers citing "Learning Latent Dynamics for Planning from Pixels"

50 / 994 papers shown
Safe Deep RL in 3D Environments using Human Feedback
Matthew Rahtz
Vikrant Varma
Ramana Kumar
Zachary Kenton
Shane Legg
Jan Leike
75
4
0
20 Jan 2022
Accelerating Representation Learning with View-Consistent Dynamics in Data-Efficient Reinforcement Learning
Tao Huang
Jiacheng Wang
Xiao Chen
75
4
0
18 Jan 2022
Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
Jack Parker-Holder
Raghunandan Rajan
Xingyou Song
André Biedenkapp
Yingjie Miao
...
Vu-Linh Nguyen
Roberto Calandra
Aleksandra Faust
Frank Hutter
Marius Lindauer
AI4CE
105
107
0
11 Jan 2022
Linear Variational State-Space Filtering
Daniel Pfrommer
Nikolai Matni
53
1
0
04 Jan 2022
SimSR: Simple Distance-based State Representation for Deep Reinforcement Learning
Hongyu Zang
Xin Li
Mingzhong Wang
70
15
0
31 Dec 2021
Do Androids Dream of Electric Fences? Safety-Aware Reinforcement Learning with Latent Shielding
Peter He
Borja G. Leon
Francesco Belardinelli
44
9
0
21 Dec 2021
Compositional Learning-based Planning for Vision POMDPs
Sampada Deglurkar
M. H. Lim
Johnathan Tucker
Zachary Sunberg
Aleksandra Faust
Claire Tomlin
72
5
0
17 Dec 2021
CEM-GD: Cross-Entropy Method with Gradient Descent Planner for Model-Based Reinforcement Learning
Kevin Huang
Sahin Lale
Ugo Rosolia
Yuanyuan Shi
Anima Anandkumar
53
9
0
14 Dec 2021
Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning
Yecheng Jason Ma
Andrew Shen
Osbert Bastani
Dinesh Jayaraman
61
25
0
14 Dec 2021
Next Steps: Learning a Disentangled Gait Representation for Versatile Quadruped Locomotion
Alexander L. Mitchell
W. Merkt
Mathieu Geisert
Siddhant Gangapurwala
Martin Engelcke
Oiwi Parker Jones
Ioannis Havoutis
Ingmar Posner
39
4
0
09 Dec 2021
Trajectory-Constrained Deep Latent Visual Attention for Improved Local Planning in Presence of Heterogeneous Terrain
Stefan Wapnick
Travis Manderson
David Meger
Gregory Dudek
94
5
0
09 Dec 2021
Model-Value Inconsistency as a Signal for Epistemic Uncertainty
Angelos Filos
Eszter Vértes
Zita Marinho
Gregory Farquhar
Diana Borsa
A. Friesen
Feryal M. P. Behbahani
Tom Schaul
André Barreto
Simon Osindero
86
7
0
08 Dec 2021
Information is Power: Intrinsic Control via Information Capture
Nick Rhinehart
Jenny Wang
Glen Berseth
John D. Co-Reyes
Danijar Hafner
Chelsea Finn
Sergey Levine
60
9
0
07 Dec 2021
ED2: Environment Dynamics Decomposition World Models for Continuous Control
Jianye Hao
Yifu Yuan
Cong Wang
Zhen Wang
OffRL
78
1
0
06 Dec 2021
Maximum Entropy Model-based Reinforcement Learning
Oleg Svidchenko
A. Shpilman
52
6
0
02 Dec 2021
Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models
Nitish Srivastava
Walter A. Talbott
Martin Bertran Lopez
Shuangfei Zhai
J. Susskind
71
4
0
02 Dec 2021
Differentiable Spatial Planning using Transformers
Devendra Singh Chaplot
Deepak Pathak
Jitendra Malik
136
40
0
02 Dec 2021
Diffusion Autoencoders: Toward a Meaningful and Decodable Representation
Konpat Preechakul
Nattanat Chatthee
Suttisak Wizadwongsa
Supasorn Suwajanakorn
SyDa DiffM
125
434
0
30 Nov 2021
Learning State Representations via Retracing in Reinforcement Learning
Changmin Yu
Dong Li
Jianye Hao
Jun Wang
Neil Burgess
77
8
0
24 Nov 2021
A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning
Zhaolin Ren
Tianjun Zhang
Csaba Szepesvári
Bo Dai
106
20
0
22 Nov 2021
Learning Representations for Pixel-based Control: What Matters and Why?
Manan Tomar
Utkarsh Aashu Mishra
Amy Zhang
Matthew E. Taylor
SSL OffRL
101
26
0
15 Nov 2021
Modular Networks Prevent Catastrophic Interference in Model-Based Multi-Task Reinforcement Learning
Robin Schiewer
Laurenz Wiskott
19
3
0
15 Nov 2021
Model-Based Reinforcement Learning via Stochastic Hybrid Models
Hany Abdulsamad
Jan Peters
13
2
0
11 Nov 2021
Gradients are Not All You Need
Luke Metz
C. Freeman
S. Schoenholz
Tal Kachman
98
93
0
10 Nov 2021
Dealing with the Unknown: Pessimistic Offline Reinforcement Learning
Jinning Li
Chen Tang
Masayoshi Tomizuka
Wei Zhan
OffRL
86
22
0
09 Nov 2021
Solving PDE-constrained Control Problems Using Operator Learning
Rakhoon Hwang
Jae Yong Lee
J. Shin
H. Hwang
AI4CE
167
48
0
09 Nov 2021
Survey of Deep Learning Methods for Inverse Problems
S. Kamyab
Zohreh Azimifar
Rasool Sabzi
Paul Fieguth
50
3
0
07 Nov 2021
Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning
Dhruv Shah
Peng Xu
Yao Lu
Ted Xiao
Alexander Toshev
Sergey Levine
Brian Ichter
OffRL
81
43
0
04 Nov 2021
Model-Based Episodic Memory Induces Dynamic Hybrid Controls
Hung Le
Thommen George Karimpanal
Majid Abdolshah
T. Tran
Svetha Venkatesh
72
19
0
03 Nov 2021
Constructing Neural Network-Based Models for Simulating Dynamical Systems
Christian Møldrup Legaard
Thomas Schranz
G. Schweiger
Ján Drgoňa
Basak Falay
C. Gomes
Alexandros Iosifidis
M. Abkar
P. Larsen
PINN AI4CE
63
98
0
02 Nov 2021
Mastering Atari Games with Limited Data
Weirui Ye
Shao-Wei Liu
Thanard Kurutach
Pieter Abbeel
Yang Gao
VLM
135
242
0
30 Oct 2021
URLB: Unsupervised Reinforcement Learning Benchmark
Michael Laskin
Denis Yarats
Hao Liu
Kimin Lee
Albert Zhan
Kevin Lu
Catherine Cang
Lerrel Pinto
Pieter Abbeel
SSL OffRL
82
139
0
28 Oct 2021
DreamerPro: Reconstruction-Free Model-Based Reinforcement Learning with Prototypical Representations
Fei Deng
Ingook Jang
Sungjin Ahn
VLM
79
62
0
27 Oct 2021
Dream to Explore: Adaptive Simulations for Autonomous Systems
Z. Sheikhbahaee
Dongshu Luo
Blake Vanberlo
S. Yun
A. Safron
Jesse Hoey
DRL
41
0
0
27 Oct 2021
Towards Robust Bisimulation Metric Learning
Mete Kemertas
Tristan Aumentado-Armstrong
OffRL
75
48
0
27 Oct 2021
Multitask Adaptation by Retrospective Exploration with Learned World Models
Artem Zholus
Aleksandr I. Panov
CLL
29
0
0
25 Oct 2021
Recurrent Off-policy Baselines for Memory-based Continuous Control
Zhihan Yang
Hai V. Nguyen
CLL OffRL
80
24
0
25 Oct 2021
Policy Search using Dynamic Mirror Descent MPC for Model Free Off Policy RL
Aarush Gupta
44
0
0
23 Oct 2021
Contrastive Active Inference
Pietro Mazzaglia
Tim Verbelen
Bart Dhoedt
80
26
0
19 Oct 2021
Discovering and Achieving Goals via World Models
Russell Mendonca
Oleh Rybkin
Kostas Daniilidis
Danijar Hafner
Deepak Pathak
94
127
0
18 Oct 2021
Provable RL with Exogenous Distractors via Multistep Inverse Dynamics
Yonathan Efroni
Dipendra Kumar Misra
A. Krishnamurthy
Alekh Agarwal
John Langford
OffRL
73
23
0
17 Oct 2021
Learn Proportional Derivative Controllable Latent Space from Pixels
Weiyao Wang
Marin Kobilarov
Gregory Hager
62
1
0
15 Oct 2021
Block Contextual MDPs for Continual Learning
Shagun Sodhani
Franziska Meier
Joelle Pineau
Amy Zhang
CLL
109
27
0
13 Oct 2021
Planning from Pixels in Environments with Combinatorially Hard Search Spaces
Marco Bagatella
Miroslav Olšák
Michal Rolínek
Georg Martius
OffRL
54
7
0
12 Oct 2021
Action-Sufficient State Representation Learning for Control with Structural Constraints
Erdun Gao
Chaochao Lu
Liu Leqi
José Miguel Hernández-Lobato
Clark Glymour
Bernhard Schölkopf
Kun Zhang
92
35
0
12 Oct 2021
Neural Algorithmic Reasoners are Implicit Planners
Andreea Deac
Petar Veličković
Ognjen Milinković
Pierre-Luc Bacon
Jian Tang
Mladen Nikolić
OffRL
72
24
0
11 Oct 2021
Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs
Tianwei Ni
Benjamin Eysenbach
Ruslan Salakhutdinov
81
110
0
11 Oct 2021
Learning Temporally-Consistent Representations for Data-Efficient Reinforcement Learning
Trevor A. McInroe
Lukas Schafer
Stefano V. Albrecht
OffRL
59
8
0
11 Oct 2021
Evaluating model-based planning and planner amortization for continuous control
Arunkumar Byravan
Leonard Hasenclever
Piotr Trochim
M. Berk Mirza
Alessandro Davide Ialongo
...
Jost Tobias Springenberg
A. Abdolmaleki
N. Heess
J. Merel
Martin Riedmiller
97
17
0
07 Oct 2021
On The Transferability of Deep-Q Networks
M. Sabatelli
Pierre Geurts
83
2
0
06 Oct 2021