Prediction bounds for higher order total variation regularized least squares

Abstract
We establish adaptive results for trend filtering: least squares estimation with a penalty on the total variation of the $(k-1)$-th order differences. Our approach is based on combining a general oracle inequality for the $\ell_1$-penalized least squares estimator with "interpolating vectors" to upper-bound the "effective sparsity". This allows one to show that the $\ell_1$-penalty on the $k$-th order differences leads to an estimator that can adapt to the number of jumps in the $(k-1)$-th order differences of the underlying signal, or of an approximation thereof. We show the result for $k \in \{1,2,3,4\}$ and indicate how it could be derived for general $k$.
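To make the estimator concrete: trend filtering solves $\min_\theta \tfrac12\|y-\theta\|_2^2 + \lambda\|D^{(k)}\theta\|_1$, where $D^{(k)}$ is the $k$-th order difference matrix. The sketch below is not the paper's method of analysis, just a minimal numerical illustration of the estimator itself, solved with a generic ADMM splitting; the signal, $\lambda$, and step size $\rho$ are illustrative choices, not values from the paper.

```python
import numpy as np

def diff_matrix(n, k):
    """k-th order difference matrix D^(k), shape (n - k, n)."""
    D = np.eye(n)
    for _ in range(k):
        D = np.diff(D, axis=0)  # each pass takes first differences of the rows
    return D

def trend_filter(y, k=1, lam=1.0, rho=1.0, n_iter=1000):
    """Solve min_theta 0.5*||y - theta||^2 + lam*||D theta||_1 by ADMM,
    with D the k-th order difference matrix (k = 1 is the fused lasso)."""
    n = len(y)
    D = diff_matrix(n, k)
    z = np.zeros(D.shape[0])   # auxiliary variable for D @ theta
    u = np.zeros(D.shape[0])   # scaled dual variable
    # theta-update solves (I + rho * D'D) theta = y + rho * D'(z - u);
    # dense inverse for clarity (fine for small n).
    A_inv = np.linalg.inv(np.eye(n) + rho * D.T @ D)
    for _ in range(n_iter):
        theta = A_inv @ (y + rho * D.T @ (z - u))
        Dtheta = D @ theta
        # z-update: soft-thresholding, the prox of the l1 penalty
        w = Dtheta + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        u = u + Dtheta - z
    return theta
```

For a piecewise-constant signal with one jump, the $k=1$ fit is itself nearly piecewise constant, which is the sparsity in the differences that the adaptive bounds exploit.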