Optimal Rates For Regularization Of Statistical Inverse Learning Problems

Abstract

We consider a statistical inverse learning problem, where we observe the image of a function $f$ through a linear operator $A$ at i.i.d. random design points $X_i$, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct problem (estimation of $Af$) and the inverse problem (estimation of $f$). In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations $n$ grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent of $n$ but also in the explicit dependency of the constant factor on the variance of the noise and the radius of the source condition set.
