Deep unrolling for learning optimal spatially varying regularisation parameters for Total Generalised Variation

23 February 2025
Thanh Trung Vu
Andreas Kofler
Kostas Papafitsoros
Abstract

We extend a recently introduced deep unrolling framework for learning spatially varying regularisation parameters in inverse imaging problems to the case of Total Generalised Variation (TGV). The framework combines a deep convolutional neural network (CNN) that infers the two spatially varying TGV parameters with an unrolled algorithmic scheme that solves the corresponding variational problem. The two subnetworks are trained jointly, end-to-end, in a supervised fashion, so that the CNN learns to compute the parameters that drive the reconstructed images as close to the ground truth as possible. Numerical results on image denoising and MRI reconstruction show a significant qualitative and quantitative improvement over the best scalar-parameter TGV case, as well as over other approaches that employ spatially varying parameters computed by unsupervised methods. We also observe that the inferred spatially varying parameter maps have a consistent structure near image edges, which calls for further theoretical investigation. In particular, the parameter that weighs the first-order TGV term exhibits a triple-edge structure with alternating high-low-high values, whereas the one that weighs the second-order term attains small values in a large neighbourhood around the edges.
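
For context, the variational problem underlying second-order TGV with spatially varying parameters typically takes the following form (a sketch based on the standard TGV definition; the exact notation, and which subscript weighs which term, are assumptions here rather than details taken from the paper):

\min_u \; \tfrac{1}{2}\,\|Au - f\|_2^2 + \mathrm{TGV}^2_{\alpha}(u),
\qquad
\mathrm{TGV}^2_{\alpha}(u) = \min_{w} \int_\Omega \alpha_1(x)\,|\nabla u - w|\,\mathrm{d}x + \int_\Omega \alpha_0(x)\,|\mathcal{E}w|\,\mathrm{d}x,

where A is the forward operator (the identity for denoising, a subsampled Fourier transform for MRI), \mathcal{E} denotes the symmetrised derivative, and \alpha_0, \alpha_1 : \Omega \to (0, \infty) are the two spatially varying parameter maps that the CNN infers.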

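Below is a minimal PyTorch sketch of the pipeline the abstract describes, for the denoising case: a CNN predicts the two parameter maps, a fixed number of differentiable solver iterations reconstruct the image, and the two parts are trained jointly against ground-truth images. All names, layer sizes, the smoothed surrogate energy, and the plain gradient-descent inner loop are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ParameterCNN(nn.Module):
    # Infers the two non-negative, spatially varying TGV parameter maps
    # (alpha0, alpha1) from the corrupted input image.
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
            nn.Softplus(),  # keeps both parameter maps strictly positive
        )

    def forward(self, f):
        alpha = self.net(f)
        return alpha[:, :1], alpha[:, 1:]  # each of shape (B, 1, H, W)

def fwd_diff(u):
    # Forward differences along height and width for every channel of u;
    # a simple stand-in for the gradient / symmetrised derivative operators.
    B, C, H, W = u.shape
    d = u.new_zeros(B, 2 * C, H, W)
    d[:, :C, :-1, :] = u[:, :, 1:, :] - u[:, :, :-1, :]
    d[:, C:, :, :-1] = u[:, :, :, 1:] - u[:, :, :, :-1]
    return d

def tgv_energy(u, w, f, alpha0, alpha1, eps=1e-6):
    # Smoothed TGV^2 denoising energy:
    #   0.5 ||u - f||^2 + int alpha1 |grad u - w| + int alpha0 |grad w|,
    # with sqrt(.^2 + eps) replacing |.| so the energy is differentiable.
    r = fwd_diff(u) - w
    e = fwd_diff(w)
    data = 0.5 * ((u - f) ** 2).sum()
    reg1 = (alpha1 * torch.sqrt((r ** 2).sum(1, keepdim=True) + eps)).sum()
    reg0 = (alpha0 * torch.sqrt((e ** 2).sum(1, keepdim=True) + eps)).sum()
    return data + reg1 + reg0

class UnrolledTGV(nn.Module):
    # A fixed number of gradient-descent steps on the smoothed energy,
    # kept differentiable so the loss can backpropagate into the CNN.
    def __init__(self, num_iters=20, step=0.1):
        super().__init__()
        self.num_iters, self.step = num_iters, step

    def forward(self, f, alpha0, alpha1):
        u = f.detach().clone().requires_grad_(True)
        w = torch.zeros_like(fwd_diff(f)).requires_grad_(True)
        for _ in range(self.num_iters):
            energy = tgv_energy(u, w, f, alpha0, alpha1)
            gu, gw = torch.autograd.grad(energy, (u, w), create_graph=True)
            u, w = u - self.step * gu, w - self.step * gw
        return u

# Joint end-to-end training on toy data (a stand-in for a real dataset):
# the CNN learns the parameter maps that drive the reconstruction
# towards the ground truth.
clean = torch.rand(8, 1, 64, 64)
loader = [(clean + 0.1 * torch.randn_like(clean), clean)]
cnn, solver = ParameterCNN(), UnrolledTGV()
optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-4)
for f, ground_truth in loader:
    alpha0, alpha1 = cnn(f)
    loss = ((solver(f, alpha0, alpha1) - ground_truth) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because the inner iterations are built with create_graph=True, the supervised loss backpropagates through the entire unrolled solver into the parameter network, which is the end-to-end coupling of the two subnetworks that the abstract refers to.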
@article{vu2025_2502.16532,
  title={Deep unrolling for learning optimal spatially varying regularisation parameters for Total Generalised Variation},
  author={Thanh Trung Vu and Andreas Kofler and Kostas Papafitsoros},
  journal={arXiv preprint arXiv:2502.16532},
  year={2025}
}