
Optimal adaptation for early stopping in statistical inverse problems

Abstract

For linear inverse problems $Y=\mathsf{A}\mu+\xi$, it is classical to recover the unknown signal $\mu$ by iterative regularisation methods $(\hat\mu^{(m)},\, m=0,1,\ldots)$ and to halt at a data-dependent iteration $\tau$ using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) error $\|\mathsf{A}(\hat\mu^{(\tau)}-\mu)\|^2$ is controlled. In the context of statistical estimation with stochastic noise $\xi$, we study oracle adaptation (that is, performance compared to the best possible stopping iteration) in strong squared error $E[\|\hat\mu^{(\tau)}-\mu\|^2]$. We give sharp lower bounds for such stopping rules in the case of the spectral cutoff method, and thus delineate precisely when adaptation is possible. For a stopping rule based on the residual process, oracle adaptation bounds within a certain domain are established for general linear iterative methods. For Sobolev balls, the domain of adaptivity is shown to match the lower bounds. The proofs use bias and variance transfer techniques from weak prediction error to strong $L^2$-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber and spectral cutoff methods is studied in further detail and illustrated numerically.
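To make the setting concrete, here is a minimal numerical sketch (not the paper's exact procedure or constants): Landweber iteration for $Y=\mathsf{A}\mu+\xi$, halted by a residual-based discrepancy rule that stops once the squared residual $\|Y-\mathsf{A}\hat\mu^{(m)}\|^2$ falls below a threshold of the order of the expected noise norm. The operator, signal, noise level, and threshold are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = d = 200

# Illustrative mildly ill-posed operator: diagonal with decaying singular values
s = np.arange(1, d + 1) ** -0.5
A = np.diag(s)

# Smooth illustrative truth and noisy observation Y = A mu + xi
mu = np.sin(np.linspace(0, 3 * np.pi, d)) / np.arange(1, d + 1)
delta = 0.01  # noise level (assumed known here)
Y = A @ mu + delta * rng.standard_normal(n)


def landweber_early_stop(A, Y, kappa, max_iter=5000, step=1.0):
    """Landweber iterates mu^(m+1) = mu^(m) + step * A^T (Y - A mu^(m)),
    stopped at the first m with ||Y - A mu^(m)||^2 <= kappa."""
    mu_hat = np.zeros(A.shape[1])
    for m in range(max_iter):
        residual = Y - A @ mu_hat
        if residual @ residual <= kappa:
            return mu_hat, m
        mu_hat = mu_hat + step * (A.T @ residual)
    return mu_hat, max_iter


# Discrepancy-type threshold at roughly the expected squared noise norm,
# E||xi||^2 = n * delta^2 (an illustrative calibration)
kappa = n * delta ** 2
mu_tau, tau = landweber_early_stop(A, Y, kappa)
```

Stopping early here controls the weak error $\|\mathsf{A}(\hat\mu^{(\tau)}-\mu)\|^2$ by construction; the paper's question is how well such a rule can also adapt in the strong error $\|\hat\mu^{(\tau)}-\mu\|^2$, where continuing too long would amplify the noise through the small singular values.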
