An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives

10 June 2015
Shipra Agrawal, Nikhil R. Devanur, Lihong Li
arXiv:1506.03374

Papers citing "An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives"

17 papers:
  • $α$-Fair Contextual Bandits. Siddhant Chaudhary, Abhishek Sinha. 22 Oct 2023.
  • Bandits with Replenishable Knapsacks: the Best of both Worlds. Martino Bernasconi, Matteo Castiglioni, A. Celli, Federico Fusco. 14 Jun 2023.
  • No-regret Algorithms for Fair Resource Allocation. Abhishek Sinha, Ativ Joshi, Rajarshi Bhattacharjee, Cameron Musco, Mohammad Hajiesmaili. 11 Mar 2023.
  • Optimal Contextual Bandits with Knapsacks under Realizability via Regression Oracles. Yuxuan Han, Jialin Zeng, Yang Wang, Yangzhen Xiang, Jiheng Zhang. 21 Oct 2022.
  • Safe Linear Bandits over Unknown Polytopes. Aditya Gangrade, Tianrui Chen, Venkatesh Saligrama. 27 Sep 2022.
  • Contextual Bandits with Knapsacks for a Conversion Model. Zerui Li, Gilles Stoltz. 01 Jun 2022.
  • No-regret Learning in Repeated First-Price Auctions with Budget Constraints. Rui Ai, Chang Wang, Chenchen Li, Jinshan Zhang, Wenhan Huang, Xiaotie Deng. 29 May 2022.
  • The Symmetry between Arms and Knapsacks: A Primal-Dual Approach for Bandits with Knapsacks. Xiaocheng Li, Chunlin Sun, Yinyu Ye. 12 Feb 2021.
  • The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems. S. Balseiro, Haihao Lu, Vahab Mirrokni. 18 Nov 2020.
  • Online Learning with Vector Costs and Bandits with Knapsacks. Thomas Kesselheim, Sahil Singla. 14 Oct 2020.
  • Contextual Blocking Bandits. Soumya Basu, Orestis Papadigenopoulos, Constantine Caramanis, Sanjay Shakkottai. 06 Mar 2020.
  • Inventory Balancing with Online Learning. Wang Chi Cheung, Will Ma, D. Simchi-Levi, Xinshang Wang. 11 Oct 2018.
  • Multi-level Feedback Web Links Selection Problem: Learning and Optimization. Kechao Cai, Kun Chen, Longbo Huang, John C. S. Lui. 08 Sep 2017.
  • Combinatorial Semi-Bandits with Knapsacks. Karthik Abinav Sankararaman, Aleksandrs Slivkins. 23 May 2017.
  • Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe. Quentin Berthet, Vianney Perchet. 22 Feb 2017.
  • Linear Contextual Bandits with Knapsacks. Shipra Agrawal, Nikhil R. Devanur. 24 Jul 2015.
  • Resourceful Contextual Bandits. Ashwinkumar Badanidiyuru, John Langford, Aleksandrs Slivkins. 27 Feb 2014.