Fixed points of nonnegative neural networks
We consider the existence of fixed points of nonnegative neural networks, i.e., neural networks that take nonnegative vectors as inputs and process them using nonnegative parameters. We first show that nonnegative neural networks can be recognized as monotonic and (weakly) scalable functions within the framework of nonlinear Perron-Frobenius theory. This fact enables us to provide conditions for the existence of fixed points of nonnegative neural networks, and these conditions are weaker than those recently obtained using arguments in convex analysis. Furthermore, we prove that the fixed point set of nonnegative neural networks is often an interval, which degenerates to a point in the case of scalable networks. The results of this paper contribute to the understanding of the behavior of autoencoders, because the fixed point set of an autoencoder is precisely the set of points that can be perfectly reconstructed. Moreover, they provide insight into neural networks designed using the loop-unrolling technique, which can be viewed as a fixed-point search algorithm. The chief theoretical results of this paper are verified in numerical simulations, where we consider an autoencoder that first compresses angular power spectra in massive MIMO systems and then reconstructs the input spectra from the compressed signals.
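As a rough illustration of the fixed-point view described above (this is a minimal sketch, not code from the paper), the snippet below iterates a small nonnegative ReLU network as a fixed-point search, in the spirit of loop-unrolled architectures; the network size, weights, and convergence tolerance are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # input and output dimensions must match for fixed points to make sense

# Hypothetical nonnegative weights and biases; the paper's conditions
# concern maps built from such nonnegative parameters.
W1 = rng.uniform(0.0, 0.4, size=(6, n))
b1 = rng.uniform(0.0, 0.1, size=6)
W2 = rng.uniform(0.0, 0.4, size=(n, 6))
b2 = rng.uniform(0.0, 0.1, size=n)

def f(x):
    """Nonnegative two-layer ReLU network: maps nonnegative vectors to nonnegative vectors."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Fixed-point iteration x_{k+1} = f(x_k), i.e., a simple fixed-point search.
# For monotonic and (weakly) scalable maps, nonlinear Perron-Frobenius theory
# provides conditions under which such a fixed point exists.
x = np.zeros(n)
for _ in range(200):
    x_next = f(x)
    if np.linalg.norm(x_next - x, np.inf) < 1e-10:
        break
    x = x_next

print("approximate fixed point:", x)
print("residual ||f(x) - x||_inf:", np.linalg.norm(f(x) - x, np.inf))
```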