arXiv:1407.1723
The Primal-Dual Hybrid Gradient Method for Semiconvex Splittings

7 July 2014
Thomas Möllenhoff
Evgeny Strekalovskiy
Michael Möller
Daniel Cremers
Abstract

This paper analyzes a recent reformulation of the primal-dual hybrid gradient method [Zhu and Chan 2008; Pock, Cremers, Bischof and Chambolle 2009; Esser, Zhang and Chan 2010; Chambolle and Pock 2011] that makes it applicable to nonconvex regularizers, as first proposed for truncated quadratic penalization in [Strekalovskiy and Cremers 2014]. In particular, it investigates variational problems in which the energy to be minimized can be written as G(u) + F(Ku), where G is convex, F is semiconvex, and K is a linear operator. We study the method and prove convergence in the case where the nonconvexity of F is compensated by the strong convexity of G. The convergence proof yields an interesting requirement on the choice of algorithm parameters, which we show to be not only sufficient but also necessary. Additionally, we show boundedness of the iterates under much weaker conditions. Finally, several numerical experiments demonstrate the effectiveness and convergence of the algorithm beyond the theoretical guarantees.
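To make the splitting concrete, the following is a minimal sketch of the standard primal-dual hybrid gradient iteration for min_u G(u) + F(Ku). It is not the paper's semiconvex analysis itself: the choices G(u) = (mu/2)||u - f||^2 (strongly convex), F(v) = (lam/2)||v||^2, and K a 1-D forward-difference operator are hypothetical stand-ins picked so that both proximal maps are closed-form.

```python
import numpy as np

def pdhg(f, mu=1.0, lam=0.5, tau=0.2, sigma=0.2, theta=1.0, iters=3000):
    """PDHG sketch for min_u (mu/2)||u - f||^2 + (lam/2)||Ku||^2,
    with K the forward-difference operator (illustrative choice only)."""
    n = f.size
    # K: forward differences, (Ku)_i = u_{i+1} - u_i
    K = np.zeros((n - 1, n))
    for i in range(n - 1):
        K[i, i], K[i, i + 1] = -1.0, 1.0

    u = f.copy()
    u_bar = u.copy()          # over-relaxed primal variable
    y = np.zeros(n - 1)       # dual variable
    for _ in range(iters):
        # dual ascent step followed by prox of sigma*F^*,
        # where F^*(y) = ||y||^2 / (2*lam) for F(v) = (lam/2)||v||^2
        y = (y + sigma * K @ u_bar) * lam / (lam + sigma)
        # primal descent step followed by prox of tau*G
        u_old = u
        u = (u - tau * K.T @ y + tau * mu * f) / (1.0 + tau * mu)
        # extrapolation (theta = 1 is the usual choice)
        u_bar = u + theta * (u - u_old)
    return u
```

For this convex example the classical step-size condition tau * sigma * ||K||^2 <= 1 holds (||K||^2 <= 4 for finite differences), and the iterates approach the solution of the normal equations (mu*I + lam*K^T K) u = mu*f; the paper's contribution is to identify the analogous parameter requirement when F is only semiconvex.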
