Improved Sample Complexity for Stochastic Compositional Variance Reduced Gradient

Abstract

Convex composition optimization is an emerging topic that covers a wide range of applications arising from stochastic optimal control, reinforcement learning, and multi-stage stochastic programming. Existing algorithms suffer from unsatisfactory sample complexity and practical issues since they ignore the convexity structure in the algorithmic design. In this paper, we develop a new stochastic compositional variance-reduced gradient algorithm with the sample complexity of $O((m+n)\log(1/\epsilon)+1/\epsilon^3)$, where $m+n$ is the total number of samples. Our algorithm is near-optimal as the dependence on $m+n$ is optimal up to a logarithmic factor. Experimental results on real-world datasets demonstrate the effectiveness and efficiency of the new algorithm.
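
For context, the complexity bound above refers to a nested finite-sum objective. A minimal sketch of the standard compositional formulation is given below; the specific notation ($f_i$, $g_j$, and the assignment of $m$ to the outer sum and $n$ to the inner sum) follows the usual convention in the compositional optimization literature and is an assumption here, not taken from the abstract itself.

% Assumed standard convex composition problem: m outer components f_i and
% n inner component mappings g_j, so m + n is the total sample count in the bound.
\[
\min_{x \in \mathbb{R}^d} \; F(x) = f\bigl(g(x)\bigr),
\qquad
f(y) = \frac{1}{m}\sum_{i=1}^{m} f_i(y),
\qquad
g(x) = \frac{1}{n}\sum_{j=1}^{n} g_j(x).
\]

Under this formulation, each query of a component $f_i$ or $g_j$ counts as one sample, which is how the $m+n$ term in the stated sample complexity is measured.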
