Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits

1 February 2020
Gergely Neu, Julia Olkhovskaya

Papers citing "Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits"

15 / 15 papers shown
  • Second Order Bounds for Contextual Bandits with Function Approximation (Aldo Pacchiano; 24 Sep 2024)
  • On Bits and Bandits: Quantifying the Regret-Information Trade-off (Itai Shufaro, Nadav Merlis, Nir Weinberger, Shie Mannor; 26 May 2024)
  • LC-Tsallis-INF: Generalized Best-of-Both-Worlds Linear Contextual Bandits (Masahiro Kato, Shinji Ito; 05 Mar 2024)
  • Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles (Dylan J. Foster, Alexander Rakhlin; 12 Feb 2020)
  • Comments on the Du-Kakade-Wang-Yang Lower Bounds (Benjamin Van Roy, Shi Dong; 18 Nov 2019)
  • Weighted Linear Bandits for Non-Stationary Environments (Yoan Russac, Claire Vernade, Olivier Cappé; 19 Sep 2019)
  • Model selection for contextual bandits (Dylan J. Foster, A. Krishnamurthy, Haipeng Luo; 03 Jun 2019)
  • Iterate averaging as regularization for stochastic gradient descent (Gergely Neu, Lorenzo Rosasco; 22 Feb 2018)
  • Explore no more: Improved high-probability regret bounds for non-stochastic bandits (Gergely Neu; 10 Jun 2015)
  • First-order regret bounds for combinatorial semi-bandits (Gergely Neu; 23 Feb 2015)
  • Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n) (Francis R. Bach, Eric Moulines; 10 Jun 2013)
  • An efficient algorithm for learning with semi-bandit feedback (Gergely Neu, Gábor Bartók; 13 May 2013)
  • Thompson Sampling for Contextual Bandits with Linear Payoffs (Shipra Agrawal, Navin Goyal; 15 Sep 2012)
  • Regret in Online Combinatorial Optimization (Jean-Yves Audibert, Sébastien Bubeck, Gábor Lugosi; 20 Apr 2012)
  • Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design (Niranjan Srinivas, Andreas Krause, Sham Kakade, Matthias Seeger; 21 Dec 2009)