A Survey on Contextual Multi-armed Bandits
Li Zhou
13 August 2015
arXiv:1508.03326

Papers citing "A Survey on Contextual Multi-armed Bandits" (15 papers shown)

  1. Active Inference in Contextual Multi-Armed Bandits for Autonomous Robotic Exploration — Shohei Wakayama, Alberto Candela, Paul Hayne, Nisar R. Ahmed (07 Aug 2024)
  2. Approximate information for efficient exploration-exploitation strategies — A. Barbier–Chebbah, Christian L. Vestergaard, Jean-Baptiste Masson (04 Jul 2023)
  3. Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits — Yuwei Luo, Mohsen Bayati (26 Jun 2023)
  4. AdaChain: A Learned Adaptive Blockchain — Chenyuan Wu, Bhavana Mehta, Mohammad Javad Amiri, Ryan Marcus, B. T. Loo (03 Nov 2022)
  5. Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook — Baihan Lin (24 Oct 2022)
  6. A Scalable Recommendation Engine for New Users and Items — Boya Xu, Yiting Deng, C. Mela (06 Sep 2022)
  7. Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models — Runzhe Wan, Linjuan Ge, Rui Song (13 Aug 2021)
  8. Knowledge Infused Policy Gradients with Upper Confidence Bound for Relational Bandits — Kaushik Roy, Qi Zhang, Manas Gaur, A. Sheth (25 Jun 2021)
  9. Contextual Constrained Learning for Dose-Finding Clinical Trials — Hyun-Suk Lee, Cong Shen, James Jordon, M. Schaar (08 Jan 2020)
  10. Multi-Armed Bandits with Correlated Arms — Samarth Gupta, Shreyas Chaudhari, Gauri Joshi, Osman Yağan (06 Nov 2019)
  11. Rarely-switching linear bandits: optimization of causal effects for the real world — B. Lansdell, Sofia Triantafillou, Konrad Paul Kording (30 May 2019)
  12. Multi-Statistic Approximate Bayesian Computation with Multi-Armed Bandits — Prashant Singh, Andreas Hellander (22 May 2018)
  13. Online Learning: A Comprehensive Survey — Guosheng Lin, Doyen Sahoo, Jing Lu, P. Zhao (08 Feb 2018)
  14. Latent Contextual Bandits and their Application to Personalized Recommendations for New Users — Li Zhou, Emma Brunskill (22 Apr 2016)
  15. A Survey of Online Experiment Design with the Stochastic Multi-Armed Bandit — Giuseppe Burtini, Jason L. Loeppky, Ramon Lawrence (02 Oct 2015)