ResearchTrend.AI
Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

5 November 2017
Aryan Mokhtari
S. Hassani
Amin Karbasi
Abstract

In this paper, we study the problem of constrained and stochastic continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called Stochastic Continuous Greedy (SCG), which achieves a tight approximation guarantee. More precisely, for a monotone and continuous DR-submodular function subject to a general convex body constraint, we prove that SCG achieves a (1−1/e)·OPT − ε guarantee (in expectation) with O(1/ε³) stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. By using stochastic continuous optimization as an interface, we also provide the first (1−1/e) tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint.
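The method described above combines a conditional-gradient (Frank-Wolfe-style) update with an averaged estimate of the stochastic gradient. The following is a minimal sketch of that idea, not the paper's exact algorithm: the toy objective F(x) = 1 − ∏(1 − xᵢ) (monotone DR-submodular), the noise model, the averaging schedule ρ_t = t^(−2/3), and the simplex constraint are all illustrative assumptions chosen so the linear-maximization step is trivial.

```python
import numpy as np

def noisy_grad(x, rng, sigma=0.1):
    # Gradient of the toy monotone DR-submodular objective
    # F(x) = 1 - prod(1 - x), plus Gaussian noise to simulate
    # stochastic gradient access. (Illustrative choice, not the
    # paper's objective.)
    prod = np.prod(1.0 - x)
    grad = prod / np.clip(1.0 - x, 1e-12, None)
    return grad + sigma * rng.standard_normal(x.shape)

def stochastic_continuous_greedy(n=5, T=200, budget=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    d = np.zeros(n)  # running average of stochastic gradients
    for t in range(1, T + 1):
        rho = t ** (-2.0 / 3.0)  # averaging weight (assumed schedule)
        d = (1 - rho) * d + rho * noisy_grad(x, rng)
        # Linear maximization over the simplex {v >= 0, sum(v) <= budget}:
        # the maximizer puts the whole budget on the best coordinate.
        v = np.zeros(n)
        v[np.argmax(d)] = budget
        x = x + v / T  # continuous-greedy step of size 1/T
    return x
```

Because each of the T steps moves x by (budget/T) along a vertex of the constraint set, the final iterate is a convex-combination-style point inside the simplex, mirroring how the conditional gradient method stays feasible without projections.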
