arXiv:2306.02212

Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization

3 June 2023
Ruichen Jiang
Aryan Mokhtari
Abstract

In this paper, we propose an accelerated quasi-Newton proximal extragradient (A-QPNE) method for solving unconstrained smooth convex optimization problems. With access only to the gradients of the objective, we prove that our method can achieve a convergence rate of $O\bigl(\min\{\frac{1}{k^2}, \frac{\sqrt{d\log k}}{k^{2.5}}\}\bigr)$, where $d$ is the problem dimension and $k$ is the number of iterations. In particular, in the regime where $k = O(d)$, our method matches the optimal rate of $O(\frac{1}{k^2})$ achieved by Nesterov's accelerated gradient (NAG). Moreover, in the regime where $k = \Omega(d \log d)$, it outperforms NAG and converges at a faster rate of $O\bigl(\frac{\sqrt{d\log k}}{k^{2.5}}\bigr)$. To the best of our knowledge, this result is the first to demonstrate a provable gain of a quasi-Newton-type method over NAG in the convex setting. To achieve such results, we build our method on a recent variant of the Monteiro-Svaiter acceleration framework and adopt an online learning perspective to update the Hessian approximation matrices, in which we relate the convergence rate of our method to the dynamic regret of a specific online convex optimization problem in the space of matrices.
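The $\min$ in the rate marks a dimension-dependent crossover between the two regimes stated above. As a rough illustration (a restatement of the abstract's claim, not additional analysis from the paper), the second term can be rewritten as

$$
\frac{\sqrt{d\log k}}{k^{2.5}} \;=\; \frac{1}{k^{2}} \cdot \sqrt{\frac{d\log k}{k}},
$$

so for $k = O(d)$ the square-root factor is not small and the bound reduces to the classical $O(1/k^{2})$ rate, whereas once $k = \Omega(d\log d)$ the factor $\sqrt{d\log k / k}$ decays and the $O(\sqrt{d\log k}/k^{2.5})$ term dominates, which is where the stated improvement over NAG appears.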
