Limitations of Deep Learning for Inverse Problems on Digital Hardware

IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2022
Main: 26 pages
3 figures
Bibliography: 3 pages
Abstract

Deep neural networks have seen tremendous success in recent years. Since training is performed on digital hardware, in this paper we analyze what can actually be computed on current hardware platforms modeled as Turing machines, which leads to inherent restrictions of deep learning. For this, we focus on the class of inverse problems, which, in particular, encompasses any task of reconstructing data from measurements. We prove that finite-dimensional inverse problems are not Banach-Mazur computable for small relaxation parameters. In fact, our result even holds for Borel-Turing computability, i.e., there does not exist an algorithm which performs the training of a neural network on digital hardware for any given accuracy. Moreover, our results introduce a lower bound on the accuracy that can be obtained algorithmically. This establishes a conceptual barrier on the capabilities of neural networks for finite-dimensional inverse problems, given that the computations are performed on digital hardware.
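To make the setting concrete, below is a minimal numerical sketch of a finite-dimensional inverse problem of the kind the abstract refers to: recovering a signal x from noisy linear measurements y = Ax + e. This is an illustrative example only, not the paper's construction; the matrix sizes, noise level, and the Tikhonov regularization parameter lam are all arbitrary choices made here for demonstration.

```python
import numpy as np

# Hypothetical finite-dimensional inverse problem: recover x from
# noisy measurements y = A x + e via Tikhonov-regularized least squares,
#   x_hat = argmin_x ||A x - y||^2 + lam * ||x||^2
#         = (A^T A + lam * I)^{-1} A^T y.
rng = np.random.default_rng(0)
m, n = 20, 10                       # measurement and signal dimensions (arbitrary)
A = rng.standard_normal((m, n))     # forward/measurement operator
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)  # small additive noise

lam = 1e-3                          # regularization ("relaxation") parameter
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

error = np.linalg.norm(x_hat - x_true)
print(error)
```

Any such solver running in floating point is an instance of the "digital hardware" setting of the paper: the non-computability result concerns whether reconstructions of this type can be produced algorithmically to arbitrary prescribed accuracy.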
