Stochastic gradient-free descents

31 December 2019
Xiaopeng Luo
Xin Xu
Abstract

In this paper we propose stochastic gradient-free methods and accelerated methods with momentum for solving stochastic optimization problems. All of these methods rely on stochastic directions rather than stochastic gradients. We analyze the convergence behavior of these methods under a mean-variance framework, and also provide a theoretical analysis of the inclusion of momentum in stochastic settings, which reveals that the momentum term adds a deviation of order $\mathcal{O}(1/k)$ but controls the variance at the order $\mathcal{O}(1/k)$ for the $k$th iteration. It is then shown that, when employing a decaying stepsize $\alpha_k = \mathcal{O}(1/k)$, the stochastic gradient-free methods still maintain the sublinear convergence rate $\mathcal{O}(1/k)$, and the accelerated methods with momentum achieve a convergence rate $\mathcal{O}(1/k^2)$ in probability for strongly convex objectives with Lipschitz gradients; all of these methods converge to a solution with a zero expected gradient norm when the objective function is nonconvex, twice differentiable, and bounded below.
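
The abstract describes the methods only at a high level: a random search direction in place of a stochastic gradient, a momentum term, and a decaying stepsize $\alpha_k = \mathcal{O}(1/k)$. The sketch below is a hypothetical illustration of that idea, assuming a two-point random-direction slope estimate and a heavy-ball style momentum update; the function name, the sample model, and the exact update rules are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def gradient_free_momentum(f_sample, x0, iters=2000, beta=0.9, mu=1e-4, seed=0):
    """Illustrative gradient-free stochastic descent with momentum.

    f_sample(x, xi) evaluates the stochastic objective on a sample xi.
    The descent direction is a random unit probe u scaled by a two-point
    directional slope, so no gradients are ever computed.  The momentum
    form and stepsize schedule here are assumptions, not the paper's
    exact rules.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for k in range(1, iters + 1):
        xi = rng.standard_normal(x.shape)        # one stochastic sample
        u = rng.standard_normal(x.shape)         # random search direction
        u /= np.linalg.norm(u)
        # Two-point estimate of the directional derivative along u,
        # using the same sample xi at both evaluation points.
        slope = (f_sample(x + mu * u, xi) - f_sample(x, xi)) / mu
        alpha = 1.0 / k                          # decaying stepsize alpha_k = O(1/k)
        v = beta * v - alpha * slope * u         # momentum over past stochastic directions
        x = x + v
    return x

if __name__ == "__main__":
    # Noisy strongly convex quadratic: its expectation over xi is minimized at x = 0.
    f_sample = lambda x, xi: 0.5 * np.dot(x - 0.1 * xi, x - 0.1 * xi)
    x_final = gradient_free_momentum(f_sample, x0=np.full(5, 3.0))
    print("final distance to optimum:", np.linalg.norm(x_final))
```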
