An Embedding of ReLU Networks and an Analysis of their Identifiability

Neural networks with the Rectified Linear Unit (ReLU) nonlinearity are described by a vector of parameters $\theta$, and realized as a piecewise linear continuous function $R_\theta \colon x \in \mathbb{R}^d \mapsto R_\theta(x) \in \mathbb{R}^k$. Natural scaling and permutation operations on the parameters $\theta$ leave the realization unchanged, leading to equivalence classes of parameters that yield the same realization. These considerations in turn lead to the notion of identifiability -- the ability to recover (the equivalence class of) $\theta$ from the sole knowledge of its realization $R_\theta$. The overall objective of this paper is to introduce an embedding for ReLU neural networks of any depth, $\Phi(\theta)$, that is invariant to scalings and that provides a locally linear parameterization of the realization of the network. Leveraging these two key properties, we derive some conditions under which a deep ReLU network is indeed locally identifiable from the knowledge of the realization on a finite set of samples $x_i \in \mathbb{R}^d$. We study the shallow case in more depth, establishing necessary and sufficient conditions for the network to be identifiable from a bounded subset $\mathcal{X} \subseteq \mathbb{R}^d$.
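
To make the scaling and permutation invariances concrete, the following minimal Python sketch (illustrative only, not code from the paper; all parameter names are hypothetical) checks numerically that rescaling a hidden neuron's incoming weights and bias by a positive factor while inversely rescaling its outgoing weights, or permuting the hidden neurons, leaves the realization $R_\theta$ of a one-hidden-layer ReLU network unchanged. Both identities follow from the positive homogeneity of ReLU: $\max(\lambda t, 0) = \lambda \max(t, 0)$ for $\lambda > 0$.

    import numpy as np

    rng = np.random.default_rng(0)

    # One-hidden-layer ReLU network with hypothetical parameters theta.
    W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)  # input -> hidden
    W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)  # hidden -> output

    def realize(W1, b1, W2, b2, x):
        """Piecewise linear continuous realization R_theta(x)."""
        return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

    x = rng.standard_normal(3)

    # Scaling: multiply each hidden neuron's incoming weights and bias by
    # lambda > 0 and divide its outgoing weights by the same lambda.
    lam = rng.uniform(0.5, 2.0, size=5)
    W1s, b1s = lam[:, None] * W1, lam * b1
    W2s = W2 / lam[None, :]
    assert np.allclose(realize(W1, b1, W2, b2, x),
                       realize(W1s, b1s, W2s, b2, x))

    # Permutation: reorder hidden neurons (rows of W1/b1, columns of W2).
    perm = rng.permutation(5)
    assert np.allclose(realize(W1, b1, W2, b2, x),
                       realize(W1[perm], b1[perm], W2[:, perm], b2, x))

Both assertions pass for any positive factors and any permutation, so the parameter vectors $\theta$ and its rescaled/permuted versions belong to the same equivalence class.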