Convergence rates for Penalised Least Squares Estimators in PDE-constrained regression problems

We consider PDE-constrained nonparametric regression problems in which the parameter $f$ is the unknown coefficient function of a second order elliptic partial differential operator $L_f$, and the unique solution $u_f$ of the boundary value problem \[L_fu=g_1\text{ on } \mathcal O, \quad u=g_2 \text{ on }\partial \mathcal O,\] is observed corrupted by additive Gaussian white noise. Here $\mathcal O$ is a bounded domain in $\mathbb{R}^d$ with smooth boundary $\partial \mathcal O$, and $g_1, g_2$ are given functions defined on $\mathcal O, \partial \mathcal O$, respectively. Concrete examples include $L_f u = \Delta u/2 - fu$ (Schr\"odinger equation with attenuation potential $f$) and $L_f u = \nabla\cdot(f\nabla u)$ (divergence form equation with conductivity $f$). In both cases, the parameter space \[\mathcal F=\{f\in H^\alpha(\mathcal O)\,|\, f > 0\}, ~\alpha>0, \] where $H^\alpha(\mathcal O)$ is the usual order $\alpha$ Sobolev space, induces a set of non-linearly constrained regression functions $\{u_f : f\in\mathcal F\}$. We study Tikhonov-type penalised least squares estimators $\hat f$ for $f$. The penalty functionals are of squared Sobolev-norm type, and thus $\hat f$ can also be interpreted as a Bayesian `MAP'-estimator corresponding to some Gaussian process prior. We derive rates of convergence of $\hat f$ and of $u_{\hat f}$ to $f$ and $u_f$, respectively. We prove that the rates obtained are minimax-optimal in prediction loss. Our bounds are derived from a general convergence rate result for non-linear inverse problems whose forward map satisfies a modulus of continuity condition, a result of independent interest that is applicable also to linear inverse problems, illustrated in an example with the Radon transform.
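
In schematic form (a sketch only; the precise functional and the rigorous interpretation of the white-noise data, denoted $Y$ here for illustration, follow the paper's definitions), such a Tikhonov-type estimator minimises a penalised least squares criterion of the type \[\hat f \in \operatorname*{arg\,min}_{f\in\mathcal F}\Big[\,\|Y-u_f\|_{L^2(\mathcal O)}^2+\lambda\,\|f\|_{H^\alpha(\mathcal O)}^2\,\Big],\qquad \lambda>0,\] where $\lambda$ is a regularisation parameter; the squared Sobolev-norm penalty matches the negative log-density of a Gaussian process prior, which is what underlies the `MAP' interpretation.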