Online Continuous Submodular Maximization

16 February 2018
Lin Chen
Hamed Hassani
Amin Karbasi
Abstract

In this paper, we consider an online optimization process where the objective functions are neither convex nor concave but instead belong to a broad class of continuous submodular functions. We first propose a variant of the Frank-Wolfe algorithm that has access to the full gradient of the objective functions. We show that it achieves a regret bound of $O(\sqrt{T})$ (where $T$ is the horizon of the online optimization problem) against a $(1-1/e)$-approximation to the best feasible solution in hindsight. In many scenarios, however, only an unbiased estimate of the gradients is available. For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves a regret bound of $O(\sqrt{T})$, albeit against a weaker $1/2$-approximation to the best feasible solution in hindsight. We also generalize our results to $\gamma$-weakly submodular functions and prove the same sublinear regret bounds. Finally, we demonstrate the efficiency of our algorithms on a few problem instances, including non-convex/non-concave quadratic programs, multilinear extensions of submodular set functions, and D-optimal design.
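To make the Frank-Wolfe-style approach concrete, the sketch below illustrates the classic continuous-greedy template that the abstract's algorithm builds on: starting from the origin, repeatedly query the gradient, ask a linear maximization oracle for the best feasible direction, and take a small step toward it. This is a minimal, single-round illustration, not the paper's online Meta-Frank-Wolfe algorithm; the quadratic objective, the box constraint, and the helper names `grad_f` and `lmo` are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_dr_submodular(grad_f, lmo, dim, num_steps=50):
    """Continuous-greedy / Frank-Wolfe sketch for maximizing a monotone
    DR-submodular function F over a convex set K, given a gradient oracle
    grad_f(x) and a linear maximization oracle lmo(g) = argmax_{v in K} <v, g>.
    Starting from 0 and averaging num_steps directions gives the classic
    (1 - 1/e) approximation guarantee in the offline setting."""
    x = np.zeros(dim)
    for _ in range(num_steps):
        g = grad_f(x)             # gradient (or an unbiased estimate) at the current point
        v = lmo(g)                # best feasible direction with respect to the gradient
        x = x + v / num_steps     # small step toward v; x stays in K for down-closed K
    return x

if __name__ == "__main__":
    # Toy instance: a non-convex/non-concave quadratic F(x) = 0.5 x^T H x + h^T x.
    # A Hessian H with non-positive entries makes F DR-submodular on the box [0, 1]^d,
    # and a large enough offset h keeps the gradient non-negative (monotone F).
    rng = np.random.default_rng(0)
    d = 5
    H = -rng.random((d, d))
    H = (H + H.T) / 2
    h = np.full(d, float(d))
    grad_f = lambda x: H @ x + h
    lmo = lambda g: (g > 0).astype(float)   # argmax of <v, g> over the unit box [0, 1]^d
    x_hat = frank_wolfe_dr_submodular(grad_f, lmo, d)
    print("solution:", np.round(x_hat, 3))
```

In the online setting described in the abstract, one such gradient query (or an unbiased stochastic estimate of it) is made per round, and the regret is measured against a $(1-1/e)$- or $1/2$-approximation of the best fixed feasible point in hindsight.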
