Early stopping for conjugate gradients in statistical inverse problems

21 June 2024
Laura Hucker, Markus Reiß
arXiv:2406.15001
Abstract

We consider estimators obtained by iterates of the conjugate gradient (CG) algorithm applied to the normal equation of prototypical statistical inverse problems. Stopping the CG algorithm early induces regularisation, and optimal convergence rates of prediction and reconstruction error are established in wide generality for an ideal oracle stopping time. Based on this insight, a fully data-driven early stopping rule τ is constructed, which also attains optimal rates, provided the error in estimating the noise level is not dominant. The error analysis of CG under statistical noise is subtle due to its nonlinear dependence on the observations. We provide an explicit error decomposition into two terms, which shares important properties of the classical bias-variance decomposition. Together with a continuous interpolation between CG iterates, this paves the way for a comprehensive error analysis of early stopping. In particular, a general oracle-type inequality is proved for the prediction error at τ. For bounding the reconstruction error, a more refined probabilistic analysis, based on concentration of self-normalised Gaussian processes, is developed. The methodology also provides some new insights into early stopping for CG in deterministic inverse problems. A numerical study for standard examples shows good results in practice for early stopping at τ.
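The basic mechanism is easy to illustrate: CG is run on the normal equation AᵀAf = AᵀY, and the iteration is stopped once the data-space residual has decreased to the order of the noise level. The sketch below is a minimal, generic version of this idea in Python with NumPy; the function name, the toy operator, and the discrepancy-type threshold with constant kappa are illustrative assumptions, not the paper's construction — the data-driven rule τ analysed in the paper is a more refined residual-based criterion.

```python
import numpy as np

def cg_normal_equation(A, y, noise_level, kappa=1.0, max_iter=100):
    """CG applied to the normal equation A.T @ A @ f = A.T @ y (CGNE),
    stopped early once the data-space residual ||y - A f_k|| falls below
    kappa * noise_level * sqrt(m). This is a simple discrepancy-type
    surrogate for a residual-based early stopping rule, not the rule
    tau from the paper."""
    m, n = A.shape
    f = np.zeros(n)
    r = y - A @ f          # residual in the data (prediction) space
    s = A.T @ r            # gradient of the least-squares functional
    p = s.copy()
    gamma = s @ s
    threshold = kappa * noise_level * np.sqrt(m)
    for k in range(max_iter):
        if np.linalg.norm(r) <= threshold:
            return f, k    # early stopping: residual at noise level
        q = A @ p
        alpha = gamma / (q @ q)
        f = f + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return f, max_iter

# Toy ill-posed example (hypothetical): a smoothing operator with
# polynomially decaying singular values plus Gaussian noise.
rng = np.random.default_rng(0)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.arange(1, n + 1) ** -1.0) @ V.T
f_true = V[:, 0] + 0.5 * V[:, 1]
delta = 1e-3
y = A @ f_true + delta * rng.standard_normal(n)
f_hat, tau = cg_normal_equation(A, y, noise_level=delta)
print(f"stopped at iteration {tau}, "
      f"reconstruction error {np.linalg.norm(f_hat - f_true):.3f}")
```

In the statistical setting the noise level itself typically has to be estimated from the data; as the abstract notes, the data-driven rule remains rate-optimal as long as that estimation error is not dominant.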
