ResearchTrend.AI

The End of Optimism? An Asymptotic Analysis of Finite-Armed Linear Bandits
Tor Lattimore, Csaba Szepesvári
arXiv:1610.04491 · 14 October 2016

Papers citing "The End of Optimism? An Asymptotic Analysis of Finite-Armed Linear Bandits" (22 of 22 papers shown)
1. Causally Abstracted Multi-armed Bandits — Fabio Massimo Zennaro, Nicholas Bishop, Joel Dyer, Yorgos Felekis, Anisoara Calinescu, Michael Wooldridge, Theodoros Damoulas (26 Apr 2024)
2. Multi-Armed Bandits with Abstention — Junwen Yang, Tianyuan Jin, Vincent Y. F. Tan (23 Feb 2024)
3. Best-of-Both-Worlds Linear Contextual Bandits — Masahiro Kato, Shinji Ito (27 Dec 2023)
4. Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits — Yuwei Luo, Mohsen Bayati (26 Jun 2023)
5. Exploration in Linear Bandits with Rich Action Sets and its Implications for Inference — Debangshu Banerjee, Avishek Ghosh, Sayak Ray Chowdhury, Aditya Gopalan (23 Jul 2022)
6. Truncated LinUCB for Stochastic Linear Bandits — Yanglei Song, Meng Zhou (23 Feb 2022)
7. Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection — Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, A. Lazaric, Matteo Pirotta (27 Oct 2021)
8. Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs — Han Zhong, Jiayi Huang, Lin F. Yang, Liwei Wang (26 Oct 2021)
9. Fair Exploration via Axiomatic Bargaining — Jackie Baek, Vivek F. Farias (04 Jun 2021)
10. Information Directed Sampling for Sparse Linear Bandits — Botao Hao, Tor Lattimore, Wei Deng (29 May 2021)
11. Incentivizing Exploration in Linear Bandits under Information Gap — Huazheng Wang, Haifeng Xu, Chuanhao Li, Zhiyuan Liu, Hongning Wang (08 Apr 2021)
12. Leveraging Good Representations in Linear Contextual Bandits — Matteo Papini, Andrea Tirinzoni, Marcello Restelli, A. Lazaric, Matteo Pirotta (08 Apr 2021)
13. Multi-Armed Bandits with Dependent Arms — Rahul Singh, Fang Liu, Yin Sun, Ness B. Shroff (13 Oct 2020)
14. Optimal Best-arm Identification in Linear Bandits — Yassir Jedra, Alexandre Proutiere (29 Jun 2020)
15. Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality — Kwang-Sung Jun, Chicheng Zhang (15 Jun 2020)
16. Categorized Bandits — Matthieu Jedor, Jonathan Louëdec, Vianney Perchet (04 May 2020)
17. Adaptive Exploration in Linear Contextual Bandit — Botao Hao, Tor Lattimore, Csaba Szepesvári (15 Oct 2019)
18. Polynomial-time Algorithms for Multiple-arm Identification with Full-bandit Feedback — Yuko Kuroki, Liyuan Xu, Atsushi Miyauchi, Junya Honda, Masashi Sugiyama (27 Feb 2019)
19. Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously — Julian Zimmert, Haipeng Luo, Chen-Yu Wei (25 Jan 2019)
20. Differentially Private Contextual Linear Bandits — R. Shariff, Or Sheffet (28 Sep 2018)
21. Information Directed Sampling and Bandits with Heteroscedastic Noise — Johannes Kirschner, Andreas Krause (29 Jan 2018)
22. Minimal Exploration in Structured Stochastic Bandits — Richard Combes, Stefan Magureanu, Alexandre Proutiere (01 Nov 2017)