On the convergence of adaptive first order methods: proximal gradient and alternating minimization algorithms

30 November 2023
P. Latafat, Andreas Themelis, Panagiotis Patrinos
arXiv:2311.18431
Abstract

Building upon recent works on linesearch-free adaptive proximal gradient methods, this paper proposes adaPG^{q,r}, a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds. Different choices of the parameters q and r are discussed and the efficacy of the resulting methods is demonstrated through numerical simulations. In an attempt to better understand the underlying theory, its convergence is established in a more general setting that allows for time-varying parameters. Finally, an adaptive alternating minimization algorithm is presented by exploring the dual setting. This algorithm not only incorporates additional adaptivity, but also expands its applicability beyond standard strongly convex settings.
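
To make the setting concrete, here is a minimal Python sketch of a linesearch-free adaptive proximal gradient iteration. The stepsize rule below is a Malitsky-Mishchenko-style update used as a simplified stand-in for the paper's adaPG^{q,r} policies: the stepsize grows at a controlled rate and is safeguarded by a local Lipschitz estimate formed from consecutive gradients, so no backtracking linesearch is needed. The l1-regularized objective, all helper names, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Prox of t*||.||_1 (an illustrative choice of nonsmooth term g)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adaptive_prox_grad(grad_f, x0, gamma0=1e-6, tau=0.1,
                       max_iter=1000, tol=1e-10):
    # Solves min_x f(x) + tau*||x||_1 with f convex and smooth.
    # NOTE: this stepsize update is a simplified stand-in, not the
    # paper's adaPG^{q,r} policy.
    x_prev, g_prev = x0, grad_f(x0)
    gamma_prev = gamma = gamma0
    x = soft_threshold(x_prev - gamma * g_prev, gamma * tau)
    for _ in range(max_iter):
        g = grad_f(x)
        dx, dg = x - x_prev, g - g_prev
        nrm = np.linalg.norm(dx)
        if nrm < tol:
            break
        L_loc = np.linalg.norm(dg) / nrm   # local Lipschitz estimate
        theta = gamma / gamma_prev         # previous growth ratio
        gamma_prev, gamma = gamma, min(
            gamma * np.sqrt(1.0 + theta),  # controlled stepsize growth
            np.inf if L_loc == 0.0 else 1.0 / (np.sqrt(2.0) * L_loc),
        )
        x_prev, g_prev = x, g
        x = soft_threshold(x - gamma * g, gamma * tau)
    return x

# Usage on a lasso-type problem: min 0.5*||Ax - b||^2 + tau*||x||_1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = adaptive_prox_grad(lambda x: A.T @ (A @ x - b), np.zeros(20))
```

Because the stepsize adapts from observed gradients alone, the method needs no global Lipschitz constant and no function evaluations for backtracking, which is the practical appeal of this family of algorithms.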
