ResearchTrend.AI

arXiv:1908.03006
Augmented NETT Regularization of Inverse Problems

8 August 2019
D. Obmann
Linh V. Nguyen
Johannes Schwab
Markus Haltmeier
Abstract

We propose aNETT (augmented NETwork Tikhonov) regularization as a novel data-driven reconstruction framework for solving inverse problems. An encoder-decoder network defines a regularizer consisting of a penalty term that enforces regularity in the encoder domain, augmented by a term that penalizes the distance to the data manifold. We present a rigorous convergence analysis, including stability estimates and convergence rates. To this end, we prove coercivity of the proposed regularizer without requiring explicit coercivity assumptions on the networks involved. We further propose a concrete realization together with a network architecture and a modular training strategy. Applications to sparse-view and low-dose CT show that aNETT achieves results comparable to state-of-the-art deep-learning-based reconstruction methods. Unlike learned iterative methods, aNETT does not require repeated application of the forward and adjoint operators, which makes it suitable for inverse problems with numerically expensive forward models. Furthermore, we show that aNETT trained on coarsely sampled data can exploit an increased sampling rate without retraining.
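The abstract describes a Tikhonov-type objective whose regularizer combines an encoder-domain penalty with a distance-to-manifold penalty. The following is a minimal sketch of such a functional, assuming an illustrative form R(x) = ||E(x)||_1 + beta * ||x - D(E(x))||^2; the toy linear maps standing in for the trained encoder E, decoder D, and forward operator A, as well as the specific norms and weights, are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 6, 4                  # signal, measurement, and latent dimensions
A = rng.standard_normal((m, n))    # stand-in forward operator (e.g. a CT system matrix)
W_e = rng.standard_normal((k, n))  # stand-in encoder weights (a trained network in the paper)
W_d = rng.standard_normal((n, k))  # stand-in decoder weights

def encoder(x):
    return W_e @ x

def decoder(z):
    return W_d @ z

def anett_objective(x, y, alpha=0.1, beta=1.0):
    """Tikhonov-type functional: data fidelity plus the learned regularizer."""
    data_fit = 0.5 * np.linalg.norm(A @ x - y) ** 2
    z = encoder(x)
    regularity = np.linalg.norm(z, 1)                # penalty in the encoder domain
    manifold = np.linalg.norm(x - decoder(z)) ** 2   # distance to the autoencoder's range
    return data_fit + alpha * (regularity + beta * manifold)

# Evaluate the objective at a signal consistent with the data:
x_true = rng.standard_normal(n)
y = A @ x_true
value = anett_objective(x_true, y)
```

A reconstruction would minimize this objective over x with any generic optimizer; note that each evaluation applies the forward operator only once, consistent with the abstract's point that aNETT avoids the repeated forward/adjoint applications of learned iterative methods.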
