Model-Based Active Exploration
Pranav Shyam, Wojciech Jaśkowski, Faustino J. Gomez
arXiv:1810.12162 · 29 October 2018
Papers citing "Model-Based Active Exploration" (showing 50 of 133)
Uncertainty-aware Latent Safety Filters for Avoiding Out-of-Distribution Failures
Junwon Seo, Kensuke Nakamura, Andrea V. Bajcsy
01 May 2025

Disentangling Uncertainties by Learning Compressed Data Representation
Zhiyu An, Zhibo Hou, Wan Du
20 Mar 2025 · UQCV, UD

Grounding Video Models to Actions through Goal Conditioned Exploration
Yunhao Luo, Yilun Du
11 Nov 2024 · LM&Ro, VGen

Learning World Models for Unconstrained Goal Navigation
Yuanlin Duan, Wensen Mao, He Zhu
03 Nov 2024

Exploring the Edges of Latent State Clusters for Goal-Conditioned Reinforcement Learning
Yuanlin Duan, Guofeng Cui, He Zhu
03 Nov 2024 · OffRL

Online Intrinsic Rewards for Decision Making Agents from Large Language Model Feedback
Qinqing Zheng, Mikael Henaff, Amy Zhang, Aditya Grover, Brandon Amos
30 Oct 2024 · LLMAG, OffRL

Prioritized Generative Replay
Renhao Wang, Kevin Frans, Pieter Abbeel, Sergey Levine, Alexei A. Efros
23 Oct 2024 · OnRL, DiffM

R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models
Viet Dung Nguyen, Zhizhuo Yang, Christopher L. Buckley, Alexander Ororbia
21 Sep 2024

Physics-Driven AI Correction in Laser Absorption Sensing Quantification
Ruiyuan Kang, P. Liatsis, Meixia Geng, Qingjie Yang
20 Aug 2024

Boosting Efficiency in Task-Agnostic Exploration through Causal Knowledge
Yupei Yang, Erdun Gao, Shikui Tu, Lei Xu
30 Jul 2024 · CML

Mixture of Experts in a Mixture of RL settings
Timon Willi, J. Obando-Ceron, Jakob Foerster, Karolina Dziugaite, Pablo Samuel Castro
26 Jun 2024 · MoE

Open-Endedness is Essential for Artificial Superhuman Intelligence
Edward Hughes, Michael Dennis, Jack Parker-Holder, Feryal M. P. Behbahani, Aditi Mavalankar, Yuge Shi, Tom Schaul, Tim Rocktaschel
06 Jun 2024 · LRM

Goal Exploration via Adaptive Skill Distribution for Goal-Conditioned Reinforcement Learning
Lisheng Wu, Ke Chen
19 Apr 2024

ASID: Active Exploration for System Identification in Robotic Manipulation
Marius Memmel, Andrew Wagenmaker, Chuning Zhu, Patrick Yin, Dieter Fox, Abhishek Gupta
18 Apr 2024

Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation
Carlos Plou, Ana C. Murillo, Ruben Martinez-Cantin
02 Apr 2024 · OffRL

Learning Off-policy with Model-based Intrinsic Motivation For Active Online Exploration
Yibo Wang, Jiang Zhao
31 Mar 2024 · OffRL, OnRL

UOEP: User-Oriented Exploration Policy for Enhancing Long-Term User Experiences in Recommender Systems
Changshuo Zhang, Sirui Chen, Xiao Zhang, Sunhao Dai, Weijie Yu, Jun Xu
17 Jan 2024 · OffRL

Grow Your Limits: Continuous Improvement with Real-World RL for Robotic Locomotion
Laura M. Smith, Yunhao Cao, Sergey Levine
26 Oct 2023 · OffRL

METRA: Scalable Unsupervised RL with Metric-Aware Abstraction
Seohong Park, Oleh Rybkin, Sergey Levine
13 Oct 2023 · OffRL

Generative Intrinsic Optimization: Intrinsic Control with Model Learning
Jianfei Ma
12 Oct 2023

COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL
Xiyao Wang, Ruijie Zheng, Yanchao Sun, Ruonan Jia, Wichayaporn Wongkamjan, Huazhe Xu, Furong Huang
11 Oct 2023 · OffRL

Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
Trevor A. McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey
09 Oct 2023 · OffRL, OnRL

Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk, David M. Krueger
04 Oct 2023 · NoLa, ALM

Physics-Driven ML-Based Modelling for Correcting Inverse Estimation
Ruiyuan Kang, Tingting Mu, P. Liatsis, D. Kyritsis
25 Sep 2023

IxDRL: A Novel Explainable Deep Reinforcement Learning Toolkit based on Analyses of Interestingness
Pedro Sequeira, Melinda Gervasio
18 Jul 2023

Active Sensing with Predictive Coding and Uncertainty Minimization
A. Sharafeldin, N. Imam, Hannah Choi
02 Jul 2023

Safe Navigation in Unstructured Environments by Minimizing Uncertainty in Control and Perception
Junwon Seo, Jungwi Mun, Taekyung Kim
26 Jun 2023

Optimistic Active Exploration of Dynamical Systems
Bhavya Sukhija, Lenart Treven, Cansu Sancaktar, Sebastian Blaes, Stelian Coros, Andreas Krause
21 Jun 2023

Reward-Free Curricula for Training Robust World Models
Marc Rigter, Minqi Jiang, Ingmar Posner
15 Jun 2023 · VLM, OffRL

A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs
Mikael Henaff, Minqi Jiang, Roberta Raileanu
05 Jun 2023

What model does MuZero learn?
Jinke He, Thomas M. Moerland, F. Oliehoek
01 Jun 2023

Statistically Efficient Bayesian Sequential Experiment Design via Reinforcement Learning with Cross-Entropy Estimators
Tom Blau, Iadine Chadès, Amir Dezfouli, Daniel M. Steinberg, Edwin V. Bonilla
29 May 2023

Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics
Taekyung Kim, Jungwi Mun, Junwon Seo, Beomsu Kim, S. Hong
20 May 2023

FLEX: an Adaptive Exploration Algorithm for Nonlinear Systems
Matthieu Blanke, Marc Lelarge
26 Apr 2023

EEE, Remediating the failure of machine learning models via a network-based optimization patch
Ruiyuan Kang, D. Kyritsis, P. Liatsis
22 Apr 2023

Planning Goals for Exploration
E. Hu, Richard Chang, Oleh Rybkin, Dinesh Jayaraman
23 Mar 2023

A Survey of Historical Learning: Learning Models with Learning History
Xiang Li, Ge Wu, Lingfeng Yang, Wenzhe Wang, Renjie Song, Jian Yang
23 Mar 2023 · MU, AI4TS

Fast exploration and learning of latent graphs with aliased observations
Miguel Lazaro-Gredilla, Ishani Deshpande, Siva K. Swaminathan, Meet Dave, Dileep George
13 Mar 2023

Self-supervised network distillation: an effective approach to exploration in sparse reward environments
Matej Pecháč, M. Chovanec, Igor Farkaš
22 Feb 2023

Predictable MDP Abstraction for Unsupervised Model-Based RL
Seohong Park, Sergey Levine
08 Feb 2023

STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning
Souradip Chakraborty, Amrit Singh Bedi, Alec Koppel, Mengdi Wang, Furong Huang, Dinesh Manocha
28 Jan 2023

Intrinsic Motivation in Model-based Reinforcement Learning: A Brief Review
Artem Latyshev, Aleksandr I. Panov
24 Jan 2023

Near-optimal Policy Identification in Active Reinforcement Learning
Xiang Li, Viraj Mehta, Johannes Kirschner, I. Char, W. Neiswanger, J. Schneider, Andreas Krause, Ilija Bogunovic
19 Dec 2022 · OffRL

Efficient Exploration in Resource-Restricted Reinforcement Learning
Zhihai Wang, Taoxing Pan, Qi Zhou, Jie Wang
14 Dec 2022 · OffRL

A Bayesian Framework for Digital Twin-Based Control, Monitoring, and Data Collection in Wireless Systems
Clement Ruah, Osvaldo Simeone, Bashir M. Al-Hashimi
02 Dec 2022

Five Properties of Specific Curiosity You Didn't Know Curious Machines Should Have
Nadia M. Ady, R. Shariff, J. Günther, P. Pilarski
01 Dec 2022

Curiosity in Hindsight: Intrinsic Exploration in Stochastic Environments
Daniel Jarrett, Corentin Tallec, Florent Altché, Thomas Mesnard, Rémi Munos, Michal Valko
18 Nov 2022

Exploring through Random Curiosity with General Value Functions
Aditya A. Ramesh, Louis Kirsch, Sjoerd van Steenkiste, Jürgen Schmidhuber
18 Nov 2022

Active Exploration for Robotic Manipulation
Tim Schneider, Boris Belousov, Georgia Chalvatzaki, Diego Romeres, Devesh K. Jha, Jan Peters
23 Oct 2022

Learning General World Models in a Handful of Reward-Free Deployments
Yingchen Xu, Jack Parker-Holder, Aldo Pacchiano, Philip J. Ball, Oleh Rybkin, Stephen J. Roberts, Tim Rocktaschel, Edward Grefenstette
23 Oct 2022 · OffRL