ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1910.01277
Escaping Saddle Points for Zeroth-order Nonconvex Optimization using Estimated Gradient Descent

3 October 2019
Qinbo Bai
Mridul Agarwal
Vaneet Aggarwal
Abstract

Gradient descent and its variants are widely used in machine learning. However, oracle access to the gradient may not be available in many applications, limiting the direct use of gradient descent. This paper proposes a method of estimating the gradient to perform gradient descent that converges to a stationary point for general non-convex optimization problems. Beyond first-order stationarity, second-order stationarity is important in machine learning applications for achieving better performance. We show that the proposed model-free non-convex optimization algorithm returns an $\epsilon$-second-order stationary point with $\widetilde{O}\left(\frac{d^{2+\theta/2}}{\epsilon^{8+\theta}}\right)$ queries of the function, for any arbitrary $\theta > 0$.
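To make the idea concrete, here is a minimal sketch of zeroth-order optimization: the gradient is estimated purely from function queries (here, coordinate-wise finite differences costing 2d queries per step; the paper's estimator and its saddle-escaping analysis differ), and a small random perturbation is injected when the estimated gradient is near zero, a simplified stand-in for escaping saddle points. All names and constants below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def estimated_gradient(f, x, mu=1e-4):
    """Estimate the gradient of f at x from function values only,
    via coordinate-wise central finite differences (2*d queries)."""
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = mu
        g[i] = (f(x + e) - f(x - e)) / (2 * mu)
    return g

def zeroth_order_descent(f, x0, eta=0.1, perturb_radius=1e-2,
                         grad_tol=1e-3, max_iters=1000, seed=0):
    """Gradient descent using only function evaluations.
    When the estimated gradient is small (a candidate saddle point
    or minimum), add a small random perturbation -- a simplified
    stand-in for the paper's saddle-escaping mechanism."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = estimated_gradient(f, x)
        if np.linalg.norm(g) < grad_tol:
            # Possible saddle: perturb randomly instead of stepping.
            x = x + rng.normal(scale=perturb_radius, size=x.size)
        else:
            x = x - eta * g
    return x

# f(x, y) = x^4 - x^2 + y^2 has a saddle at the origin and
# local minima at x = +/- 1/sqrt(2), y = 0, where f = -0.25.
f = lambda z: z[0] ** 4 - z[0] ** 2 + z[1] ** 2
x = zeroth_order_descent(f, np.zeros(2))
```

Started exactly at the saddle point (0, 0), plain estimated-gradient descent would stall, since the estimated gradient there is zero; the random perturbation lets the iterate fall off the saddle and reach a neighborhood of one of the two minima.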
