ResearchTrend.AI
Budget-Constrained Bandits over General Cost and Reward Distributions (arXiv:2003.00365)
29 February 2020
Semih Cayci, A. Eryilmaz, R. Srikant

Papers citing "Budget-Constrained Bandits over General Cost and Reward Distributions"

17 papers:

 1. Bayesian Optimization for Unknown Cost-Varying Variable Subsets with No-Regret Costs. Vu Viet Hoang, Quoc Anh Hoang Nguyen, Hung Tran The. 20 Dec 2024.
 2. Directional Optimism for Safe Linear Bandits. Spencer Hutchinson, Berkay Turan, M. Alizadeh. 29 Aug 2023.
 3. Clustered Linear Contextual Bandits with Knapsacks. Yichuan Deng, M. Mamakos, Zhao Song. 21 Aug 2023.
 4. On Collaboration in Distributed Parameter Estimation with Resource Constraints. Y. Chen, Daniel S. Menasché, Don Towsley. 12 Jul 2023.
 5. Provably Robust Temporal Difference Learning for Heavy-Tailed Rewards. Semih Cayci, A. Eryilmaz. 20 Jun 2023.
 6. Budgeted Multi-Armed Bandits with Asymmetric Confidence Intervals. Marco Heyden, Vadim Arzamasov, Edouard Fouché, Klemens Böhm. 12 Jun 2023.
 7. Balancing Risk and Reward: An Automated Phased Release Strategy. Yufan Li, Jialiang Mao, Iavor Bojinov. 16 May 2023.
 8. MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the Utility. Zhaori Guo, Timothy J. Norman, E. Gerding. 15 May 2023.
 9. A Lyapunov-Based Methodology for Constrained Optimization with Bandit Feedback. Semih Cayci, Yilin Zheng, A. Eryilmaz. 09 Jun 2021.
10. Making the most of your day: online learning for optimal allocation of time. Etienne Boursier, Tristan Garrec, Vianney Perchet, M. Scarsini. 16 Feb 2021.
11. An Efficient Pessimistic-Optimistic Algorithm for Stochastic Linear Bandits with General Constraints. Xin Liu, Bin Li, P. Shi, Lei Ying. 10 Feb 2021.
12. Multi-Armed Bandits with Censored Consumption of Resources. Viktor Bengs, Eyke Hüllermeier. 02 Nov 2020.
13. POND: Pessimistic-Optimistic oNline Dispatching. Xin Liu, Bin Li, P. Shi, Lei Ying. 20 Oct 2020.
14. Continuous-Time Multi-Armed Bandits with Controlled Restarts. Semih Cayci, A. Eryilmaz, R. Srikant. 30 Jun 2020.
15. Group-Fair Online Allocation in Continuous Time. Semih Cayci, Swati Gupta, A. Eryilmaz. 11 Jun 2020.
16. ROI Maximization in Stochastic Online Decision-Making. Nicolò Cesa-Bianchi, Tommaso Cesari, Yishay Mansour, Vianney Perchet. 28 May 2019.
17. Resourceful Contextual Bandits. Ashwinkumar Badanidiyuru, John Langford, Aleksandrs Slivkins. 27 Feb 2014.