An Empirical Process Approach to the Union Bound: Practical Algorithms for Combinatorial and Linear Bandits
Julian Katz-Samuels, Lalit P. Jain, Zohar Karnin, Kevin Jamieson
arXiv:2006.11685, 21 June 2020

Papers citing "An Empirical Process Approach to the Union Bound: Practical Algorithms for Combinatorial and Linear Bandits" (18 papers):

Prior-Dependent Allocations for Bayesian Fixed-Budget Best-Arm Identification in Structured Bandits
Nicolas Nguyen, Imad Aouali, András György, Claire Vernade
08 Feb 2024

Optimal Batched Best Arm Identification
Tianyuan Jin, Yu Yang, Jing Tang, Xiaokui Xiao, Pan Xu
21 Oct 2023

A New Perspective on Pool-Based Active Classification and False-Discovery Control
Lalit P. Jain, Kevin Jamieson
14 Aug 2020

Sequential Experimental Design for Transductive Linear Bandits
Tanner Fiez, Lalit P. Jain, Kevin Jamieson, Lillian J. Ratliff
20 Jun 2019

An Efficient Bandit Algorithm for Realtime Multivariate Optimization
Daniel N. Hill, Houssam Nassif, Yi Liu, Anand Iyer, S.V.N. Vishwanathan
22 Oct 2018

Time-uniform, nonparametric, nonasymptotic confidence sequences
Steven R. Howard, Aaditya Ramdas, Jon D. McAuliffe, Jasjeet Sekhon
18 Oct 2018

Near-Optimal Discrete Optimization for Experimental Design: A Regret Minimization Approach
Zeyuan Allen-Zhu, Yuanzhi Li, Aarti Singh, Yining Wang
14 Nov 2017

Fully adaptive algorithm for pure exploration in linear bandits
Liyuan Xu, Junya Honda, Masashi Sugiyama
16 Oct 2017

Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration
Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao, Ruosong Wang
04 Jun 2017

The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime
Max Simchowitz, Kevin Jamieson, Benjamin Recht
16 Feb 2017

Nearly Instance Optimal Sample Complexity Bounds for Top-k Arm Selection
Lijie Chen, Jian Li, Mingda Qiao
13 Feb 2017

An optimal algorithm for the Thresholding Bandit Problem
Andrea Locatelli, Maurilio Gutzeit, Alexandra Carpentier
27 May 2016

Pure Exploration of Multi-armed Bandit Under Matroid Constraints
Lijie Chen, Anupam Gupta, Jian Li
23 May 2016

Optimal Best Arm Identification with Fixed Confidence
Aurélien Garivier, Emilie Kaufmann
15 Feb 2016

On the Optimal Sample Complexity for Best Arm Identification
Lijie Chen, Jian Li
12 Nov 2015

Best-Arm Identification in Linear Bandits
Marta Soare, Alessandro Lazaric, Rémi Munos
22 Sep 2014

On the Complexity of Best Arm Identification in Multi-Armed Bandit Models
Emilie Kaufmann, Olivier Cappé, Aurélien Garivier
16 Jul 2014

lil' UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits
Kevin Jamieson, Matthew Malloy, Robert D. Nowak, Sébastien Bubeck
27 Dec 2013