We consider an additive regression model with regression function $f = f_1 + f_2$, where the first component $f_1$ is in some sense "smoother" than the second component $f_2$. Smoothness is described here in terms of a semi-norm on the class of regression functions. We use a penalized least squares estimator $\hat f = \hat f_1 + \hat f_2$ of $f$ and show that the rate of convergence for $\hat f_1$ is faster than the rate of convergence for $\hat f_2$. In fact, both rates are generally as fast as in the case where one of the two components is known. The theory is illustrated by a simulation study. Our proofs rely on recent results from empirical process theory.
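As a minimal sketch of the kind of criterion such a doubly penalized estimator minimizes (the design points $x_i$, responses $Y_i$, semi-norms $I_1, I_2$, and smoothing parameters $\lambda_1, \lambda_2$ are assumed notation for illustration, not taken from the abstract), one may write

$$
(\hat f_1, \hat f_2) \in \arg\min_{f_1,\, f_2} \left\{ \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i - f_1(x_i) - f_2(x_i) \bigr)^2 + \lambda_1^2\, I_1^2(f_1) + \lambda_2^2\, I_2^2(f_2) \right\},
$$

where each semi-norm $I_j$ measures the roughness of the corresponding component, and each smoothing parameter $\lambda_j$ is tuned to that component's smoothness class.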