
Sparse $\ell^q$-regularization of Inverse Problems Using Deep Learning

Abstract

We propose a novel data-driven sparse reconstruction framework for solving inverse problems, named aNET (augmented NEtwork Tikhonov regularization). In contrast to existing sparse reconstruction techniques, which are based on linear sparsifying transformations, we train an encoder-decoder network $\mathbf{D} \circ \mathbf{E}$ and construct a regularizer formed by the $\ell^q$-norm of the encoder coefficients, which enforces sparsity, plus an additional term penalizing the distance to the data manifold. We present a full convergence analysis and derive convergence rates in a general setting that includes the $\ell^q$-norm of the encoder coefficients. A main ingredient of the theoretical analysis is establishing the coercivity of the augmented regularization term. Applications to sparse-view and low-dose CT demonstrate the practical benefits of the proposal. We show that the proposed method is able to leverage increased sampling rates without retraining the networks.
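Based on the description above, the resulting variational problem presumably takes a form along the following lines (a sketch only; the forward operator $\mathbf{A}$, data $y$, weights $\alpha, \beta$, and the choice of the squared norm for the manifold-distance term are assumptions, not stated in the abstract):

```latex
\min_{x} \; \tfrac{1}{2}\,\|\mathbf{A}x - y\|^{2}
  \;+\; \alpha \,\|\mathbf{E}(x)\|_{q}^{q}
  \;+\; \beta \,\|(\mathbf{D} \circ \mathbf{E})(x) - x\|^{2}
```

Here the first penalty enforces sparsity of the learned encoder coefficients $\mathbf{E}(x)$, while the second term penalizes the distance between $x$ and its reconstruction through the autoencoder, i.e., the distance to the learned data manifold.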
