
Prediction bounds for higher order total variation regularized least squares

Abstract

We establish adaptive results for trend filtering: least squares estimation with a penalty on the total variation of $(k-1)^{\rm th}$ order differences. Our approach is based on combining a general oracle inequality for the $\ell_1$-penalized least squares estimator with "interpolating vectors" to upper-bound the "effective sparsity". This allows one to show that the $\ell_1$-penalty on the $k^{\rm th}$ order differences leads to an estimator that can adapt to the number of jumps in the $(k-1)^{\rm th}$ order differences of the underlying signal or an approximation thereof. We show the result for $k \in \{1,2,3,4\}$ and indicate how it could be derived for general $k \in \mathbb{N}$.
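For readers unfamiliar with trend filtering, the estimator discussed above solves a penalized least squares problem with an $\ell_1$ penalty on $k^{\rm th}$ order differences. The sketch below is illustrative, not the authors' code: the function name `trend_filter`, the tuning parameter `lam`, and the use of cvxpy are assumptions, and it implements the standard formulation $\min_f \|y-f\|_2^2/n + 2\lambda\|D^{(k)} f\|_1$.

```python
# Minimal sketch of kth-order trend filtering (standard formulation, not the paper's code).
import numpy as np
import cvxpy as cp

def trend_filter(y, k=2, lam=1.0):
    """Illustrative trend filtering fit via convex optimization (cvxpy)."""
    n = len(y)
    # kth-order difference operator: rows are kth finite differences, shape (n - k, n).
    D = np.diff(np.eye(n), n=k, axis=0)
    f = cp.Variable(n)
    # Least squares loss plus l1 penalty on the kth-order differences of f.
    objective = cp.Minimize(cp.sum_squares(y - f) / n + 2 * lam * cp.norm1(D @ f))
    cp.Problem(objective).solve()
    return f.value

# Example usage: with k = 2 the penalty is the total variation of first differences,
# so the fit adapts to kinks (jumps in the slope) of the underlying signal.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    signal = np.where(x < 0.5, 2 * x, 2 - 2 * x)   # piecewise linear, one kink
    y = signal + 0.1 * rng.standard_normal(x.size)
    fit = trend_filter(y, k=2, lam=0.01)
```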
