Stochastic Conditional Gradient++

Abstract

In this paper, we consider the general non-oblivious stochastic optimization problem, where the underlying stochasticity may change during the optimization procedure and depends on the point at which the function is evaluated. We develop Stochastic Frank-Wolfe++ (SFW++), an efficient variant of the conditional gradient method for minimizing a smooth non-convex function subject to a convex body constraint. We show that SFW++ converges to an $\epsilon$-first-order stationary point using $O(1/\epsilon^3)$ stochastic gradients. When further structure is present, SFW++'s theoretical guarantees, in terms of convergence rate and solution quality, improve. In particular, for minimizing a convex function, SFW++ achieves an $\epsilon$-approximate optimum while using $O(1/\epsilon^2)$ stochastic gradients; this rate is known to be optimal in terms of stochastic gradient evaluations. Similarly, for maximizing a monotone continuous DR-submodular function, a slightly different form of SFW++, called Stochastic Continuous Greedy++ (SCG++), achieves a tight $[(1-1/e)\text{OPT} - \epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients. Through an information-theoretic argument, we also prove that SCG++'s convergence rate is optimal. Finally, for maximizing a non-monotone continuous DR-submodular function, we can achieve a $[(1/e)\text{OPT} - \epsilon]$ solution using $O(1/\epsilon^2)$ stochastic gradients. We highlight that our results and our novel variance reduction technique trivially extend to the standard and easier oblivious stochastic optimization settings for (non-)convex and continuous submodular functions.
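For orientation, the sketch below shows a generic stochastic Frank-Wolfe (conditional gradient) loop with a mini-batch gradient estimator and a linear minimization oracle. It is a minimal illustration of the projection-free template the paper builds on, not the paper's SFW++ variance-reduction estimator; the step-size schedule, the `grad_oracle`/`lmo` interfaces, and the $\ell_1$-ball example are assumptions made for the example.

```python
import numpy as np

def stochastic_frank_wolfe(grad_oracle, lmo, x0, T, batch_size):
    """Generic stochastic Frank-Wolfe loop (illustrative only; not SFW++'s
    variance-reduced estimator).

    grad_oracle(x, batch_size) -> averaged stochastic gradient at x
    lmo(g) -> argmin over the convex constraint set C of <g, v>
    """
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        g = grad_oracle(x, batch_size)   # mini-batch stochastic gradient
        v = lmo(g)                       # Frank-Wolfe vertex of C
        gamma = 2.0 / (t + 2)            # standard conditional-gradient step size
        x = (1 - gamma) * x + gamma * v  # convex combination keeps x feasible
    return x

# Example (assumed for illustration): minimize E[0.5 * ||x - z||^2] over the
# l1 ball of radius 1, where z is a noisy observation of a fixed target.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = np.array([0.6, -0.3, 0.1])

    def grad_oracle(x, b):
        z = target + 0.1 * rng.standard_normal((b, x.size))
        return (x - z).mean(axis=0)

    def lmo_l1(g, radius=1.0):
        # Linear minimization over the l1 ball: pick the coordinate with
        # largest |g_i| and move to the opposite-signed vertex.
        v = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        v[i] = -radius * np.sign(g[i])
        return v

    x_hat = stochastic_frank_wolfe(grad_oracle, lmo_l1, np.zeros(3), T=500, batch_size=32)
    print(x_hat)
```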
