arXiv:1905.08515
Total variation multiscale estimators for linear inverse problems

21 May 2019
Miguel del Alamo
Axel Munk
Abstract

Even though the statistical theory of linear inverse problems is a well-studied topic, certain relevant cases remain open. Among these is the estimation of functions of bounded variation ($BV$), meaning $L^1$ functions on a $d$-dimensional domain whose weak first derivatives are finite Radon measures. The estimation of $BV$ functions is relevant in many applications, since it involves minimal smoothness assumptions and gives simplified, interpretable, cartoonized reconstructions. In this paper we propose a novel technique for estimating $BV$ functions in an inverse problem setting, and provide theoretical guarantees by showing that the proposed estimator is minimax optimal up to logarithms with respect to the $L^q$-risk, for any $q \in [1,\infty)$. This is, to the best of our knowledge, the first convergence result for $BV$ functions in inverse problems in dimension $d \geq 2$, and it extends the results by Donoho (Appl. Comput. Harmon. Anal., 2(2):101--126, 1995) in $d = 1$. Furthermore, our analysis unravels a novel regime for large $q$ in which the minimax rate is slower than $n^{-1/(d+2\beta+2)}$, where $\beta$ is the degree of ill-posedness: our analysis shows that this slower rate arises from the low smoothness of $BV$ functions. The proposed estimator combines variational regularization techniques with the wavelet-vaguelette decomposition of operators.
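The abstract does not spell out the estimator itself. As an illustrative sketch only, assume the Gaussian white noise model $Y = Kf + n^{-1/2}\,dW$ on a domain $\Omega \subset [0,1]^d$, a system of vaguelette-type test functions $\{v_\lambda\}_{\lambda \in \Lambda_n}$ adapted to the operator $K$, and a threshold $\gamma_n$ of order $\sqrt{\log n}$; these symbols and the display below are assumptions, not the paper's exact construction. A multiscale total variation estimator of the kind described would take the schematic form

$$
\hat f_n \;\in\; \operatorname*{arg\,min}_{g \in BV(\Omega)} \; |Dg|(\Omega)
\quad\text{subject to}\quad
\max_{\lambda \in \Lambda_n} \bigl|\langle Y - Kg,\, v_\lambda \rangle\bigr| \;\le\; \gamma_n,
$$

where $|Dg|(\Omega)$ is the total variation of $g$, i.e. the total mass of its distributional gradient as a Radon measure. Constraining the vaguelette coefficients of the residual at all scales simultaneously is how the variational (TV) part connects to the wavelet-vaguelette decomposition mentioned in the abstract; the precise multiscale constraint and constants are given in the paper.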
