Fighting Bandits with a New Kind of Smoothness

arXiv:1512.04152 · 14 December 2015
Jacob D. Abernethy, Chansoo Lee, Ambuj Tewari
[AAML]

Papers citing "Fighting Bandits with a New Kind of Smoothness"

16 papers:
• Beyond Minimax Rates in Group Distributionally Robust Optimization via a Novel Notion of Sparsity. Quan Nguyen, Nishant A. Mehta, Cristóbal Guzmán. 01 Oct 2024.
• Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits. Mengmeng Li, Daniel Kuhn, Bahar Taşkesen. 30 Sep 2024.
• Improved Regret Bounds for Bandits with Expert Advice. Nicolò Cesa-Bianchi, Khaled Eldowa, Emmanuel Esposito, Julia Olkhovskaya. 24 Jun 2024.
• A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of $\Theta(T^{2/3})$ and its Application to Best-of-Both-Worlds. Taira Tsuchiya, Shinji Ito. 30 May 2024.
• Distributed No-Regret Learning for Multi-Stage Systems with End-to-End Bandit Feedback. I-Hong Hou. 06 Apr 2024. [OffRL]
• A Best-of-both-worlds Algorithm for Bandits with Delayed Feedback with Robustness to Excessive Delays. Saeed Masoudian, Julian Zimmert, Yevgeny Seldin. 21 Aug 2023.
• Meta-Learning Adversarial Bandit Algorithms. M. Khodak, Ilya Osadchiy, Keegan Harris, Maria-Florina Balcan, Kfir Y. Levy, Ron Meir, Zhiwei Steven Wu. 05 Jul 2023. [FedML]
• On the Minimax Regret for Online Learning with Feedback Graphs. Khaled Eldowa, Emmanuel Esposito, Tommaso Cesari, Nicolò Cesa-Bianchi. 24 May 2023.
• No-Regret Online Prediction with Strategic Experts. Omid Sadeghi, Maryam Fazel. 24 May 2023.
• Banker Online Mirror Descent: A Universal Approach for Delayed Online Bandit Learning. Jiatai Huang, Yan Dai, Longbo Huang. 25 Jan 2023.
• Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits. Jiatai Huang, Yan Dai, Longbo Huang. 28 Jan 2022.
• Improved Analysis of the Tsallis-INF Algorithm in Stochastically Constrained Adversarial Bandits and Stochastic Bandits with Adversarial Corruptions. Saeed Masoudian, Yevgeny Seldin. 23 Mar 2021.
• Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously. Julian Zimmert, Haipeng Luo, Chen-Yu Wei. 25 Jan 2019.
• Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits. Julian Zimmert, Yevgeny Seldin. 19 Jul 2018. [AAML]
• Corralling a Band of Bandit Algorithms. Alekh Agarwal, Haipeng Luo, Behnam Neyshabur, Robert Schapire. 19 Dec 2016.
• Prediction by Random-Walk Perturbation. Luc Devroye, Gábor Lugosi, Gergely Neu. 23 Feb 2013.