

Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback

28 October 2019
Mingrui Zhang, Lin Chen, Hamed Hassani, Amin Karbasi

Papers citing "Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback"
(6 / 6 papers shown)
Stochastic Submodular Bandits with Delayed Composite Anonymous Bandit Feedback
M. Pedramfar, Vaneet Aggarwal (23 Mar 2023)
Introduction to Online Convex Optimization
Elad Hazan (07 Sep 2019)
Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap
Aryan Mokhtari, S. Hassani, Amin Karbasi (05 Nov 2017)
Bandit Convex Optimization: √T Regret in One Dimension
Sébastien Bubeck, O. Dekel, Tomer Koren, Yuval Peres (23 Feb 2015)
On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization
Ohad Shamir (11 Sep 2012)
Determinantal point processes for machine learning
Alex Kulesza, B. Taskar (25 Jul 2012)