ResearchTrend.AI

On Bayesian index policies for sequential resource allocation
E. Kaufmann
6 January 2016
arXiv:1601.01190
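The paper above concerns Bayesian index policies (e.g. Bayes-UCB and Thompson sampling) for multi-armed bandits. As a minimal sketch of the general idea, not code from the paper itself, Thompson sampling for Bernoulli arms with Beta(1,1) priors can be written as:

```python
import random

def thompson_sampling(reward_fns, horizon, seed=0):
    """Thompson sampling for Bernoulli bandits with Beta(1,1) priors.

    reward_fns: list of zero-argument callables returning 0 or 1.
    Returns the per-arm pull counts after `horizon` rounds.
    """
    rng = random.Random(seed)
    k = len(reward_fns)
    alpha = [1] * k  # posterior Beta alpha (successes + 1)
    beta = [1] * k   # posterior Beta beta (failures + 1)
    pulls = [0] * k
    for _ in range(horizon):
        # Draw one posterior sample per arm and play the argmax.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = reward_fns[arm]()
        pulls[arm] += 1
        if reward:
            alpha[arm] += 1
        else:
            beta[arm] += 1
    return pulls

# Example: two arms with success probabilities 0.2 and 0.8.
arm_rng = random.Random(42)
arms = [lambda: int(arm_rng.random() < 0.2),
        lambda: int(arm_rng.random() < 0.8)]
counts = thompson_sampling(arms, horizon=500)
```

With this gap between the arms, the better arm ends up receiving the large majority of the 500 pulls; the paper analyzes when such Bayesian policies match the frequentist lower bounds on regret.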

Papers citing "On Bayesian index policies for sequential resource allocation" (35 of 35 papers shown)
• Communication Bounds for the Distributed Experts Problem
  Zhihao Jia, Qi Pang, Trung Tran, David Woodruff, Zhihao Zhang, Wenting Zheng (06 Jan 2025)
• UCB algorithms for multi-armed bandits: Precise regret and adaptive inference
  Q. Han, K. Khamaru, Cun-Hui Zhang (09 Dec 2024)
• On Lai's Upper Confidence Bound in Multi-Armed Bandits
  Huachen Ren, Cun-Hui Zhang (03 Oct 2024)
• Active Inference in Contextual Multi-Armed Bandits for Autonomous Robotic Exploration
  Shohei Wakayama, Alberto Candela, Paul Hayne, Nisar R. Ahmed (07 Aug 2024)
• Bayesian Bandit Algorithms with Approximate Inference in Stochastic Linear Bandits
  Ziyi Huang, Henry Lam, Haofeng Zhang (20 Jun 2024)
• An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits
  Biyonka Liang, Iavor Bojinov (09 Nov 2023)
• Simple Modification of the Upper Confidence Bound Algorithm by Generalized Weighted Averages
  Nobuhito Manome, Shuji Shinohara, Ung-il Chung (28 Aug 2023)
• A General Recipe for the Analysis of Randomized Multi-Armed Bandit Algorithms
  Dorian Baudry, Kazuya Suzuki, Junya Honda (10 Mar 2023)
• Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits
  Jongyeong Lee, Junya Honda, Chao-Kai Chiang, Masashi Sugiyama (03 Feb 2023)
• A Combinatorial Semi-Bandit Approach to Charging Station Selection for Electric Vehicles
  Niklas Åkerblom, M. Chehreghani (17 Jan 2023)
• Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits
  Tianyuan Jin, Pan Xu, X. Xiao, Anima Anandkumar (07 Jun 2022)
• Information-Directed Selection for Top-Two Algorithms
  Wei You, Chao Qin, Zihao Wang, Shuoguang Yang (24 May 2022)
• Some performance considerations when using multi-armed bandit algorithms in the presence of missing data
  Xijin Chen, K. M. Lee, S. Villar, D. Robertson (08 May 2022)
• Optimal Regret Is Achievable with Bounded Approximate Inference Error: An Enhanced Bayesian Upper Confidence Bound Framework
  Ziyi Huang, Henry Lam, A. Meisami, Haofeng Zhang (31 Jan 2022)
• Online Learning of Energy Consumption for Navigation of Electric Vehicles
  Niklas Åkerblom, Yuxin Chen, M. Chehreghani (03 Nov 2021)
• An empirical evaluation of active inference in multi-armed bandits
  D. Marković, Hrvoje Stojić, Sarah Schwöbel, S. Kiebel (21 Jan 2021)
• Lifelong Learning in Multi-Armed Bandits
  Matthieu Jedor, Jonathan Louëdec, Vianney Perchet (28 Dec 2020)
• MOTS: Minimax Optimal Thompson Sampling
  Tianyuan Jin, Pan Xu, Jieming Shi, Xiaokui Xiao, Quanquan Gu (03 Mar 2020)
• An Online Learning Framework for Energy-Efficient Navigation of Electric Vehicles
  Niklas Åkerblom, Yuxin Chen, M. Chehreghani (03 Mar 2020)
• The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
  Mohsen Bayati, N. Hamidi, Ramesh Johari, Khashayar Khosravi (24 Feb 2020)
• Double Explore-then-Commit: Asymptotic Optimality and Beyond
  Tianyuan Jin, Pan Xu, Xiaokui Xiao, Quanquan Gu (21 Feb 2020)
• Exponential two-armed bandit problem
  A. Kolnogorov, Denis Grunev (15 Aug 2019)
• Parameterized Exploration
  Jesse Clifton, Lili Wu, E. Laber (13 Jul 2019)
• The Finite-Horizon Two-Armed Bandit Problem with Binary Responses: A Multidisciplinary Survey of the History, State of the Art, and Myths
  P. Jacko (20 Jun 2019)
• A Note on KL-UCB+ Policy for the Stochastic Bandit
  Junya Honda (19 Mar 2019)
• Adaptive Policies for Perimeter Surveillance Problems
  James A. Grant, David S. Leslie, K. Glazebrook, R. Szechtman, Adam N. Letchford (04 Oct 2018)
• Profitable Bandits
  Mastane Achab, Stéphan Clémençon, Aurélien Garivier (08 May 2018)
• BelMan: Bayesian Bandits on the Belief-Reward Manifold
  D. Basu, Pierre Senellart, S. Bressan (04 May 2018)
• Combinatorial Multi-Armed Bandits with Filtered Feedback
  James A. Grant, David S. Leslie, K. Glazebrook, R. Szechtman (26 May 2017)
• A Scale Free Algorithm for Stochastic Bandits with Bounded Kurtosis
  Tor Lattimore (27 Mar 2017)
• A minimax and asymptotically optimal algorithm for stochastic bandits
  Pierre Ménard, Aurélien Garivier (23 Feb 2017)
• Learning the distribution with largest mean: two bandit frameworks
  E. Kaufmann, Aurélien Garivier (31 Jan 2017)
• Regret Analysis of the Anytime Optimally Confident UCB Algorithm
  Tor Lattimore (29 Mar 2016)
• Simple Bayesian Algorithms for Best Arm Identification
  Daniel Russo (26 Feb 2016)
• Regret Analysis of the Finite-Horizon Gittins Index Strategy for Multi-Armed Bandits
  Tor Lattimore (18 Nov 2015)