Optimal rates for zero-order convex optimization: the power of two function evaluations

7 December 2013
John C. Duchi
Michael I. Jordan
Martin J. Wainwright
Andre Wibisono
Abstract

We consider derivative-free algorithms for stochastic and non-stochastic convex optimization problems that use only function values rather than gradients. Focusing on non-asymptotic bounds on convergence rates, we show that if pairs of function values are available, algorithms for $d$-dimensional optimization that use gradient estimates based on random perturbations suffer a factor of at most $\sqrt{d}$ in convergence rate over traditional stochastic gradient methods. We establish such results for both smooth and non-smooth cases, sharpening previous analyses that suggested a worse dimension dependence, and extend our results to the case of multiple ($m \ge 2$) evaluations. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, establishing the sharpness of our achievable results up to constant (sometimes logarithmic) factors.
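
To make the two-evaluation idea concrete, the sketch below shows a randomized two-point gradient estimator plugged into a simple stochastic gradient loop. It is not the authors' exact algorithm from the paper: the spherical sampling direction, the smoothing radius delta, the 1/sqrt(t) step sizes, and the quadratic test objective are all illustrative assumptions.

import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-4, rng=None):
    """Randomized two-point gradient estimate of f at x.

    Samples a direction u uniformly on the unit sphere and returns
        g = d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u,
    an unbiased estimate of the gradient of a smoothed version of f
    (the average of f over a ball of radius delta around x).
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                       # uniform direction on the sphere
    fd = (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta)
    return d * fd * u

def zero_order_sgd(f, x0, steps=5000, step_size=0.05, delta=1e-4, rng=None):
    """Unconstrained stochastic gradient descent driven by two-point estimates."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(1, steps + 1):
        g = two_point_gradient_estimate(f, x, delta=delta, rng=rng)
        x -= step_size / np.sqrt(t) * g          # 1/sqrt(t) step-size schedule
    return x

# Usage: minimize a simple convex quadratic in d = 10 dimensions.
if __name__ == "__main__":
    d = 10
    f = lambda x: 0.5 * np.dot(x, x)
    x_hat = zero_order_sgd(f, x0=np.ones(d))
    print("final objective:", f(x_hat))

Using two function evaluations per step lets the estimator's variance scale with the dimension d rather than d^2, which is the source of the at-most-$\sqrt{d}$ slowdown relative to first-order stochastic gradient methods discussed in the abstract.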
