Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond

27 June 2019
Oliver Hinder
Aaron Sidford
N. Sohoni
Abstract

In this paper, we provide near-optimal accelerated first-order methods for minimizing a broad class of smooth nonconvex functions that are strictly unimodal on all lines through a minimizer. This function class, which we call the class of smooth quasar-convex functions, is parameterized by a constant $\gamma \in (0,1]$, where $\gamma = 1$ encompasses the classes of smooth convex and star-convex functions, and smaller values of $\gamma$ indicate that the function can be "more nonconvex." We develop a variant of accelerated gradient descent that computes an $\epsilon$-approximate minimizer of a smooth $\gamma$-quasar-convex function with at most $O(\gamma^{-1} \epsilon^{-1/2} \log(\gamma^{-1} \epsilon^{-1}))$ total function and gradient evaluations. We also derive a lower bound of $\Omega(\gamma^{-1} \epsilon^{-1/2})$ on the worst-case number of gradient evaluations required by any deterministic first-order method, showing that, up to a logarithmic factor, no deterministic first-order method can improve upon ours.
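As background, the following sketch gives the inequality commonly used to define $\gamma$-quasar-convexity in this line of work; it is not quoted from the abstract itself and is included only to make the parameter $\gamma$ concrete.

```latex
% Background sketch (assumed standard definition, not taken from the abstract):
% f is gamma-quasar-convex with respect to a minimizer x^* if, for all x,
\[
  f(x^*) \;\ge\; f(x) \;+\; \frac{1}{\gamma}\, \nabla f(x)^{\top} (x^* - x),
  \qquad \gamma \in (0,1].
\]
% Setting gamma = 1 recovers star-convexity (and is implied by convexity),
% while smaller gamma weakens the inequality, matching the abstract's
% description of such functions as "more nonconvex."
```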
