ResearchTrend.AI

A Survey on Practical Applications of Multi-Armed and Contextual Bandits
Djallel Bouneffouf, Irina Rish
arXiv:1904.10040, 2 April 2019
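For context on the survey's subject: a multi-armed bandit algorithm repeatedly chooses among arms with unknown reward distributions, trading off exploration against exploitation. A minimal epsilon-greedy sketch on a Bernoulli bandit (the function name, parameters, and arm probabilities below are illustrative, not taken from the survey):

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy agent on a Bernoulli bandit.

    true_means: per-arm reward probabilities (unknown to the agent).
    Returns the agent's running mean-reward estimates and pull counts.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:              # explore: random arm
            arm = rng.randrange(n_arms)
        else:                                   # exploit: best estimate so far
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # Incremental mean update avoids storing the reward history.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

est, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
# Over many steps, the best arm (index 2) receives most of the pulls.
```

Many of the citing papers below study refinements of exactly this exploration-exploitation loop (Thompson sampling, linear contextual models, risk-averse and non-stationary variants).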

Papers citing "A Survey on Practical Applications of Multi-Armed and Contextual Bandits"

13 papers shown.

  1. Causal Inference out of Control: Estimating the Steerability of Consumption. Gary Cheng, Moritz Hardt, Celestine Mendler-Dünner. 10 Feb 2023.
  2. Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees. Andrea Tirinzoni, Matteo Papini, Ahmed Touati, A. Lazaric, Matteo Pirotta. 24 Oct 2022.
  3. Differentially Private Stochastic Linear Bandits: (Almost) for Free. Osama A. Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi. 07 Jul 2022.
  4. Machine Learning Prescriptive Canvas for Optimizing Business Outcomes. H. Shteingart, Gerben Oostra, Ohad Levinkron, Naama Parush, G. Shabat, Daniel Aronovich. 21 Jun 2022.
  5. Multi-Armed Bandits in Brain-Computer Interfaces. Frida Heskebeck, Carolina Bergeling, Bo Bernhardsson. 19 May 2022.
  6. Reinforcement Learning in Practice: Opportunities and Challenges. Yuxi Li. 23 Feb 2022.
  7. Solving Multi-Arm Bandit Using a Few Bits of Communication. Osama A. Hanna, Lin F. Yang, Christina Fragouli. 11 Nov 2021.
  8. Risk averse non-stationary multi-armed bandits. Leo Benac, Frédéric Godin. 28 Sep 2021.
  9. Leveraging Good Representations in Linear Contextual Bandits. Matteo Papini, Andrea Tirinzoni, Marcello Restelli, A. Lazaric, Matteo Pirotta. 08 Apr 2021.
  10. A bandit-learning approach to multifidelity approximation. Yiming Xu, Vahid Keshavarzzadeh, Robert M. Kirby, A. Narayan. 29 Mar 2021.
  11. Regret Analysis of a Markov Policy Gradient Algorithm for Multi-arm Bandits. D. Denisov, N. Walton. 20 Jul 2020.
  12. Thompson Sampling via Local Uncertainty. Zhendong Wang, Mingyuan Zhou. 30 Oct 2019.
  13. Optimal Exploitation of Clustering and History Information in Multi-Armed Bandit. Djallel Bouneffouf, Srinivasan Parthasarathy, Horst Samulowitz, Martin Wistuba. 31 May 2019.