Fixed points of nonnegative neural networks
We derive conditions for the existence of fixed points of nonnegative neural networks, an important research objective for understanding the behavior of neural networks in modern applications involving autoencoders and loop unrolling techniques, among others. In particular, we show that neural networks with nonnegative inputs and nonnegative parameters can be recognized as monotonic and (weakly) scalable mappings within the framework of nonlinear Perron-Frobenius theory. This fact enables us to derive conditions for the existence of a nonempty fixed point set of nonnegative neural networks, and these conditions are weaker than those obtained recently using arguments in convex analysis, which are typically based on the assumption of nonexpansivity of the activation functions. Furthermore, we prove that the fixed point set of monotonic and weakly scalable neural networks is often an interval, which degenerates to a single point in the case of scalable networks. The chief results of this paper are verified in numerical simulations, where we consider an autoencoder-type network that first compresses angular power spectra in massive MIMO systems and then reconstructs the input spectra from the compressed signals.
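As a hypothetical illustration of the structure the abstract describes (not code from the paper), the sketch below builds a toy one-layer network x ↦ ReLU(Wx + b) with nonnegative weights W and bias b. On the nonnegative orthant such a map is monotonic (x ≤ y entrywise implies T(x) ≤ T(y)), which is the property exploited via nonlinear Perron-Frobenius theory, and fixed-point iteration can be used to locate a point of its fixed point set. The weight scaling 0.4 is an assumption chosen so the toy iteration converges:

```python
import numpy as np

def relu(x):
    # Elementwise ReLU activation; preserves nonnegativity.
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
W = 0.4 * rng.random((3, 3))   # nonnegative weights (scaled down so the toy map converges)
b = rng.random(3)              # nonnegative bias

def T(x):
    # A nonnegative one-layer network: monotonic on the nonnegative orthant,
    # since W >= 0 and ReLU is monotone.
    return relu(W @ x + b)

# Fixed-point iteration x_{k+1} = T(x_k), started from the origin.
x = np.zeros(3)
for _ in range(200):
    x = T(x)

residual = np.linalg.norm(T(x) - x)
print("approximate fixed point:", x, "residual:", residual)
```

This is only a minimal sketch of the setting; the paper's results concern general (deep) nonnegative networks and conditions far weaker than the contraction enforced here by scaling the weights.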