
An Embedding of ReLU Networks and an Analysis of their Identifiability

Pierre Stock
Rémi Gribonval
Abstract

Neural networks with the Rectified Linear Unit (ReLU) nonlinearity are described by a vector of parameters $\theta$, and realized as a piecewise linear continuous function $R_{\theta}: x \in \mathbb{R}^{d} \mapsto R_{\theta}(x) \in \mathbb{R}^{k}$. Natural scaling and permutation operations on the parameters $\theta$ leave the realization unchanged, leading to equivalence classes of parameters that yield the same realization. These considerations in turn lead to the notion of identifiability -- the ability to recover (the equivalence class of) $\theta$ from the sole knowledge of its realization $R_{\theta}$. The overall objective of this paper is to introduce an embedding for ReLU neural networks of any depth, $\Phi(\theta)$, that is invariant to scalings and that provides a locally linear parameterization of the realization of the network. Leveraging these two key properties, we derive conditions under which a deep ReLU network is indeed locally identifiable from the knowledge of the realization on a finite set of samples $x_{i} \in \mathbb{R}^{d}$. We study the shallow case in more depth, establishing necessary and sufficient conditions for the network to be identifiable from a bounded subset $\mathcal{X} \subseteq \mathbb{R}^{d}$.
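As a minimal illustration of the scaling invariance the abstract refers to (a sketch, not from the paper), the following NumPy snippet builds a one-hidden-layer ReLU network and checks numerically that rescaling each hidden neuron's incoming weights and bias by a positive factor, while dividing its outgoing weights by the same factor, leaves the realization $R_{\theta}$ unchanged. All variable names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer ReLU network: R_theta(x) = W2 @ relu(W1 @ x + b1) + b2
d, h, k = 3, 5, 2  # input, hidden, and output dimensions (arbitrary choices)
W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
W2, b2 = rng.normal(size=(k, h)), rng.normal(size=k)

def realization(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Positive rescaling of hidden neuron i: multiply its incoming weights and bias
# by lam[i] > 0 and divide its outgoing weights by lam[i]. Since
# relu(lam * t) = lam * relu(t) for lam > 0, the realization is unchanged.
lam = rng.uniform(0.5, 2.0, size=h)
W1_s, b1_s = lam[:, None] * W1, lam * b1
W2_s = W2 / lam[None, :]

x = rng.normal(size=d)
assert np.allclose(realization(W1, b1, W2, b2, x),
                   realization(W1_s, b1_s, W2_s, b2, x))
```

The same positive-homogeneity argument applies layer by layer in deeper networks, which is what makes the equivalence classes of parameters nontrivial and motivates a scaling-invariant embedding.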
