For linear inverse problems, it is classical to recover the unknown signal by iterative regularisation methods and to halt at a data-dependent iteration using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared-error is controlled. In the context of statistical estimation with stochastic noise, we study oracle adaptation (that is, compared to the best possible stopping iteration) in strong squared-error. For a residual-based stopping rule, oracle adaptation bounds are established for general spectral regularisation methods. The proofs use bias and variance transfer techniques from weak prediction error to strong squared-error, as well as convexity arguments and concentration bounds for the stochastic part. Adaptive early stopping for the Landweber method is studied in further detail and illustrated numerically.
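As a minimal sketch of the setting, the snippet below runs the Landweber iteration and halts via a residual-based discrepancy rule. The function name, the threshold constant `tau`, and the toy data are illustrative assumptions, not taken from the paper; a small well-conditioned random design is used only to exercise the stopping rule.

```python
import numpy as np

def landweber_discrepancy(A, y, delta, tau=1.2, max_iter=10000):
    """Landweber iteration x_{k+1} = x_k + omega * A.T @ (y - A @ x_k),
    halted at the first k with ||y - A @ x_k|| <= tau * delta
    (a residual-based discrepancy stopping rule). tau > 1 is an
    illustrative choice, not the paper's calibration."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2  # step size < 2 / ||A||^2 ensures convergence
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = y - A @ x
        if np.linalg.norm(residual) <= tau * delta:
            return x, k  # data-dependent early-stopping iteration
        x = x + omega * A.T @ residual
    return x, max_iter

# Toy example: random design with additive stochastic noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
noise = 0.05 * rng.standard_normal(20)
y = A @ x_true + noise
delta = np.linalg.norm(noise)  # noise level assumed known for illustration
x_hat, k_stop = landweber_discrepancy(A, y, delta)
```

The oracle comparison in the paper asks how the strong squared-error at `k_stop` compares with the error at the best possible (but unknown) stopping iteration.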