arXiv:1606.07702

Optimal adaptation for early stopping in statistical inverse problems

24 June 2016
Gilles Blanchard
M. Hoffmann
M. Reiß
Abstract

For linear inverse problems $Y=\mathsf{A}\mu+\xi$, it is classical to recover the unknown signal $\mu$ by iterative regularisation methods $(\widehat\mu^{(m)},\, m=0,1,\ldots)$ and to halt at a data-dependent iteration $\tau$ using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared error $\|\mathsf{A}(\widehat\mu^{(\tau)}-\mu)\|^2$ is controlled. In the context of statistical estimation with stochastic noise $\xi$, we study oracle adaptation (that is, adaptation compared to the best possible stopping iteration) in the strong squared error $E[\|\widehat\mu^{(\tau)}-\mu\|^2]$. For a residual-based stopping rule, oracle adaptation bounds are established for general spectral regularisation methods. The proofs use bias and variance transfer techniques from weak prediction error to strong $L^2$-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber method is studied in further detail and illustrated numerically.
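The residual-based early stopping discussed in the abstract can be illustrated on a toy discretised problem. The following is a minimal sketch, not the paper's method: the matrix $\mathsf{A}$, the signal, the noise level, the step size, and the fudge factor in the stopping threshold are all illustrative assumptions. It runs the Landweber iteration $\widehat\mu^{(m+1)} = \widehat\mu^{(m)} + \omega\,\mathsf{A}^{\top}(Y - \mathsf{A}\widehat\mu^{(m)})$ and stops at the first iteration whose residual norm falls below a discrepancy-type threshold of the order of the noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem (hypothetical setup): A = U diag(s) V^T with
# polynomially decaying singular values s_i = 1/i.
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / np.arange(1, n + 1)
A = U @ np.diag(s) @ V.T

mu = V[:, :10].sum(axis=1)            # true signal, supported on well-conditioned directions
delta = 1e-3                          # per-coordinate noise level (illustrative)
Y = A @ mu + delta * rng.standard_normal(n)

# Landweber iteration with a residual-based (discrepancy-type) stopping rule:
# stop at the first m with ||Y - A mu_m|| <= kappa. Since E||xi||^2 = n delta^2,
# kappa is taken as sqrt(n)*delta times a fudge factor > 1 (a common choice).
omega = 1.0 / s[0] ** 2               # step size <= 1/||A||^2 ensures convergence
kappa = 1.2 * np.sqrt(n) * delta
mu_hat = np.zeros(n)
for m in range(5000):
    residual = Y - A @ mu_hat
    if np.linalg.norm(residual) <= kappa:
        break                          # data-dependent stopping iteration tau = m
    mu_hat = mu_hat + omega * A.T @ residual

print("stopped at iteration:", m)
print("strong error ||mu_hat - mu||:", np.linalg.norm(mu_hat - mu))
```

Continuing past the stopping index would mainly amplify the noise in the poorly conditioned directions, while stopping far earlier would leave a large bias; the residual-based rule trades these off using only the observable discrepancy $\|Y - \mathsf{A}\widehat\mu^{(m)}\|$.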
