Jump estimation in inverse regression

14 March 2008
L. Boysen
Axel Munk
arXiv:0803.2119 (PDF, HTML)
Abstract

We consider estimation of a step function $f$ from noisy observations of a deconvolution $\phi * f$, where $\phi$ is some bounded $L_1$-function. We use a penalized least squares estimator to reconstruct the signal $f$ from the observations, with penalty equal to the number of jumps of the reconstruction. Asymptotically, it is possible to correctly estimate the number of jumps with probability one. Given that the number of jumps is correctly estimated, we show that the corresponding parameter estimates of the jump locations and jump heights are $n^{-1/2}$ consistent and converge to a joint normal distribution with covariance structure depending on $\phi$, and that this rate is minimax for bounded continuous kernels $\phi$. As a special case we obtain the asymptotic distribution of the least squares estimator in multiphase regression and generalisations thereof. In contrast to the results obtained for bounded $\phi$, we show that for kernels with a singularity of order $O(|x|^{-\alpha})$, $1/2 < \alpha < 1$, a jump location can be estimated at a rate of $n^{-1/(3-2\alpha)}$, which is again the minimax rate. We find that these rates do not depend on the spectral information of the operator but rather on its localization properties in the time domain. Finally, it turns out that adaptive sampling does not improve the rate of convergence, in strict contrast to the case of direct regression.
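
To make the estimator concrete, here is a minimal numerical sketch (not the authors' code) of the penalized least squares idea from the abstract: fit piecewise-constant candidates to noisy samples of $\phi * f$ and select the fit minimizing the residual sum of squares plus a penalty proportional to the number of jumps. The boxcar kernel, grid sizes, and penalty weight `gamma` below are illustrative assumptions, not taken from the paper.

```python
# Sketch of penalized least squares reconstruction of a step function f
# from noisy observations of the convolution phi * f, with a penalty on
# the number of jumps. Kernel, grids and penalty weight are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Sample points on [0, 1] and a bounded L1 kernel phi (here a centred boxcar).
n = 400
x = np.linspace(0.0, 1.0, n)
h = 0.05                                   # kernel half-width

def conv_indicator(a, b):
    """(phi * 1_[a,b))(x) for the boxcar kernel phi = 1_[-h,h] / (2h)."""
    lo = np.clip(x - a, -h, h)
    hi = np.clip(x - b, -h, h)
    return (lo - hi) / (2.0 * h)

# True step function: one jump at 0.4 (heights 0 and 2); noisy samples of phi*f.
true_breaks = [0.0, 0.4, 1.0]
true_heights = [0.0, 2.0]
signal = sum(c * conv_indicator(a, b)
             for c, (a, b) in zip(true_heights, zip(true_breaks, true_breaks[1:])))
y = signal + 0.2 * rng.standard_normal(n)

def fit_given_breaks(breaks):
    """Least squares fit of the piece heights for fixed jump locations."""
    cols = [conv_indicator(a, b) for a, b in zip(breaks, breaks[1:])]
    design = np.column_stack(cols)
    heights, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ heights) ** 2)
    return rss, heights

# Penalized criterion: RSS + gamma * (#jumps); jump locations searched on a grid.
gamma = 5.0
candidates = np.linspace(0.05, 0.95, 19)
best = None
for k in range(0, 3):                      # try 0, 1 or 2 jumps
    for locs in itertools.combinations(candidates, k):
        breaks = [0.0, *locs, 1.0]
        rss, heights = fit_given_breaks(breaks)
        score = rss + gamma * k
        if best is None or score < best[0]:
            best = (score, k, breaks, heights)

_, k_hat, breaks_hat, heights_hat = best
print(f"estimated number of jumps: {k_hat}")
print(f"estimated jump locations:  {breaks_hat[1:-1]}")
print(f"estimated piece heights:   {np.round(heights_hat, 2)}")
```

For fixed jump locations the piece heights enter the model linearly, so they can be profiled out by ordinary least squares; the paper's asymptotic results concern the resulting location and height estimates once the number of jumps has been recovered correctly.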
