Adapting Behaviour via Intrinsic Reward: A Survey and Empirical Study (arXiv:1906.07865)
19 June 2019
Cam Linke, Nadia M. Ady, Martha White, T. Degris, Adam White

Papers citing "Adapting Behaviour via Intrinsic Reward: A Survey and Empirical Study"

35 / 35 papers shown
Exploiting Submodular Value Functions For Scaling Up Active Perception
Yash Satsangi, Shimon Whiteson, F. Oliehoek, M. Spaan
105 · 25 · 0 · 21 Sep 2020

Maximizing Information Gain in Partially Observable Environments via Prediction Reward
Yash Satsangi, Sungsu Lim, Shimon Whiteson, F. Oliehoek, Martha White
44 · 15 · 0 · 11 May 2020

SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments
Glen Berseth, Daniel Geng, Coline Devin, Nicholas Rhinehart, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
45 · 22 · 0 · 11 Dec 2019

Meta-descent for Online, Continual Prediction
Andrew Jacobsen, M. Schlegel, Cam Linke, T. Degris, Adam White, Martha White
44 · 23 · 0 · 17 Jul 2019

Self-Supervised Exploration via Disagreement
Deepak Pathak, Dhiraj Gandhi, Abhinav Gupta
Tags: SSL
54 · 377 · 0 · 10 Jun 2019

Learning Feature Relevance Through Step Size Adaptation in Temporal-Difference Learning
Alex Kearney, Vivek Veeriah, Jaden B. Travnik, P. Pilarski, R. Sutton
Tags: OOD
56 · 13 · 0 · 08 Mar 2019

Large-Scale Study of Curiosity-Driven Learning
Yuri Burda, Harrison Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros
Tags: LRM
57 · 700 · 0 · 13 Aug 2018

Cleaning up the neighborhood: A full classification for adversarial partial monitoring
Tor Lattimore, Csaba Szepesvári
33 · 27 · 0 · 23 May 2018

TIDBD: Adapting Temporal-difference Step-sizes Through Stochastic Meta-descent
Alex Kearney, Vivek Veeriah, Jaden B. Travnik, R. Sutton, P. Pilarski
33 · 16 · 0 · 10 Apr 2018

Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration
Alexandre Péré, Sébastien Forestier, Olivier Sigaud, Pierre-Yves Oudeyer
Tags: SSL, DRL
27 · 95 · 0 · 02 Mar 2018

Learning by Playing - Solving Sparse Reward Tasks from Scratch
Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, T. Wiele, Volodymyr Mnih, N. Heess, Jost Tobias Springenberg
68 · 446 · 0 · 28 Feb 2018

Learning to Play with Intrinsically-Motivated Self-Aware Agents
Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins
Tags: LRM
58 · 117 · 0 · 21 Feb 2018

GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms
Cédric Colas, Olivier Sigaud, Pierre-Yves Oudeyer
48 · 158 · 0 · 14 Feb 2018

Curiosity-driven reinforcement learning with homeostatic regulation
Ildefons Magrans de Abril, Ryota Kanai
40 · 28 · 0 · 23 Jan 2018

Unsupervised Real-Time Control through Variational Empowerment
Maximilian Karl, Maximilian Soelch, Philip Becker-Ehmck, Djalel Benbouzid, Patrick van der Smagt, Justin Bayer
53 · 55 · 0 · 13 Oct 2017

Hindsight Experience Replay
Marcin Andrychowicz, Dwight Crow, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Joshua Tobin, Pieter Abbeel, Wojciech Zaremba
Tags: OffRL
230 · 2,307 · 0 · 05 Jul 2017

Teacher-Student Curriculum Learning
Tambet Matiisen, Avital Oliver, Taco S. Cohen, John Schulman
Tags: ODL
76 · 376 · 0 · 01 Jul 2017

Count-Based Exploration in Feature Space for Reinforcement Learning
Jarryd Martin, S. N. Sasikumar, Tom Everitt, Marcus Hutter
49 · 123 · 0 · 25 Jun 2017

Curiosity-driven Exploration by Self-supervised Prediction
Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
Tags: LRM, SSL
99 · 2,423 · 0 · 15 May 2017

Automated Curriculum Learning for Neural Networks
Alex Graves, Marc G. Bellemare, Jacob Menick, Rémi Munos, Koray Kavukcuoglu
62 · 523 · 0 · 10 Apr 2017

Learning Active Learning from Data
Ksenia Konyushkova, Raphael Sznitman, Pascal Fua
39 · 301 · 0 · 09 Mar 2017

Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning
Joshua Achiam, S. Shankar Sastry
66 · 236 · 0 · 06 Mar 2017

The Predictron: End-To-End Learning and Planning
David Silver, H. V. Hasselt, Matteo Hessel, Tom Schaul, A. Guez, ..., Gabriel Dulac-Arnold, David P. Reichert, Neil C. Rabinowitz, André Barreto, T. Degris
50 · 289 · 0 · 28 Dec 2016

Reinforcement Learning with Unsupervised Auxiliary Tasks
Max Jaderberg, Volodymyr Mnih, Wojciech M. Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
Tags: SSL
56 · 1,225 · 0 · 16 Nov 2016

#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, Xi Chen, Yan Duan, John Schulman, F. Turck, Pieter Abbeel
Tags: OffRL
84 · 764 · 0 · 15 Nov 2016

Unifying Count-Based Exploration and Intrinsic Motivation
Marc G. Bellemare, S. Srinivasan, Georg Ostrovski, Tom Schaul, D. Saxton, Rémi Munos
162 · 1,465 · 0 · 06 Jun 2016

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
Tejas D. Kulkarni, Karthik Narasimhan, A. Saeedi, J. Tenenbaum
55 · 1,133 · 0 · 20 Apr 2016

Revisiting Active Perception
R. Bajcsy, Yiannis Aloimonos, John K. Tsotsos
45 · 304 · 0 · 08 Mar 2016

Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
Bradly C. Stadie, Sergey Levine, Pieter Abbeel
76 · 502 · 0 · 03 Jul 2015

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
Tags: ODL
1.0K · 149,474 · 0 · 22 Dec 2014

Multi-Armed Bandits for Intelligent Tutoring Systems
Benjamin Clément, Didier Roy, Pierre-Yves Oudeyer, M. Lopes
51 · 139 · 0 · 11 Oct 2013

Scaling Life-long Off-policy Learning
Adam White, Joseph Modayil, R. Sutton
Tags: CLL, OffRL
68 · 26 · 0 · 27 Jun 2012

Multi-timescale Nexting in a Reinforcement Learning Robot
Joseph Modayil, Adam White, R. Sutton
165 · 130 · 0 · 06 Dec 2011

Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization
Daniel Golovin, Andreas Krause
128 · 600 · 0 · 21 Mar 2010

Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes
Jürgen Schmidhuber
71 · 187 · 0 · 23 Dec 2008