
Optimal rates for zero-order optimization: the power of two function evaluations

IEEE Transactions on Information Theory, 2013
Abstract

We consider derivative-free algorithms for stochastic and non-stochastic optimization problems that use only function values rather than gradients. Focusing on non-asymptotic bounds on convergence rates, we show that if pairs of function values are available, algorithms for d-dimensional optimization that use gradient estimates based on random perturbations suffer a factor of at most \sqrt{d} in convergence rate over traditional stochastic gradient methods. We establish such results for both smooth and non-smooth cases, sharpening previous analyses that suggested a worse dimension dependence. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, establishing the sharpness of our achievable results up to constant factors.
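As a concrete illustration of the two-function-evaluation scheme the abstract describes, here is a minimal sketch (assuming NumPy) of a centered-difference gradient estimator along a random unit direction, plugged into a stochastic-gradient loop on a toy quadratic. The function names, smoothing radius, and step sizes are illustrative choices, not the paper's exact construction or rates.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    """Two-evaluation gradient estimate of f at x.

    Draws a direction u uniformly from the unit sphere and returns
        d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u,
    an estimator whose expectation tends to the gradient of f at x
    as the smoothing radius delta shrinks.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    diff = f(x + delta * u) - f(x - delta * u)
    return (d * diff / (2.0 * delta)) * u

# Toy usage: zero-order "SGD" on a smooth convex quadratic in d = 10.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * float(np.dot(x, x))
x = np.ones(10)
for t in range(1, 2001):
    g = two_point_gradient_estimate(f, x, delta=1e-3, rng=rng)
    x -= (0.05 / np.sqrt(t)) * g  # illustrative step sizes, not the paper's
print(f(x))  # approaches the minimum value 0
```

Because E[u u^T] = I/d for u uniform on the sphere, scaling the finite difference by d makes the estimator's expectation track the true gradient, which is why only two function evaluations per step suffice.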
