ResearchTrend.AI
Model-Free Active Exploration in Reinforcement Learning
Alessio Russo, Alexandre Proutiere · 30 June 2024 · arXiv:2407.00801 · OffRL

Papers citing "Model-Free Active Exploration in Reinforcement Learning"

16 / 16 papers shown
Exploration in Deep Reinforcement Learning: A Survey
Pawel Ladosz, Lilian Weng, Minwoo Kim, H. Oh · 02 May 2022 · OffRL

SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning
Kimin Lee, Michael Laskin, A. Srinivas, Pieter Abbeel · 09 Jul 2020 · OffRL

Array Programming with NumPy
Charles R. Harris, K. Millman, S. Walt, R. Gommers, Pauli Virtanen, ..., Tyler Reddy, Warren Weckesser, Hameer Abbasi, C. Gohlke, T. Oliphant · 18 Jun 2020

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala · 03 Dec 2019 · ODL

Estimating Risk and Uncertainty in Deep Reinforcement Learning
W. Clements, B. V. Delft, Benoît-Marie Robaglia, Reda Bahi Slaoui, Sébastien Toth · 23 May 2019

Distributional Reinforcement Learning for Efficient Exploration
B. Mavrin, Shangtong Zhang, Hengshuai Yao, Linglong Kong, Kaiwen Wu, Yaoliang Yu · 13 May 2019 · OOD, OffRL

Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP
Kefan Dong, Yuanhao Wang, Xiaoyu Chen, Liwei Wang · 27 Jan 2019 · OffRL

Information-Directed Exploration for Deep Reinforcement Learning
Nikolay Nikolov, Johannes Kirschner, Felix Berkenkamp, Andreas Krause · 18 Dec 2018

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine · 04 Jan 2018

Efficient exploration with Double Uncertain Value Networks
Thomas M. Moerland, Joost Broekens, Catholijn M. Jonker · 29 Nov 2017

Distributional Reinforcement Learning with Quantile Regression
Will Dabney, Mark Rowland, Marc G. Bellemare, Rémi Munos · 27 Oct 2017

Deep Exploration via Randomized Value Functions
Ian Osband, Benjamin Van Roy, Daniel Russo, Zheng Wen · 22 Mar 2017

Asynchronous Methods for Deep Reinforcement Learning
Volodymyr Mnih, Adria Puigdomenech Badia, M. Berk Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu · 04 Feb 2016

Bootstrapped Thompson Sampling and Deep Exploration
Ian Osband, Benjamin Van Roy · 01 Jul 2015

On the Complexity of Best Arm Identification in Multi-Armed Bandit Models
E. Kaufmann, Olivier Cappé, Aurélien Garivier · 16 Jul 2014

Generalization and Exploration via Randomized Value Functions
Ian Osband, Benjamin Van Roy, Zheng Wen · 04 Feb 2014