Representation Learning and Recovery in the ReLU Model

Abstract

Rectified linear units, or ReLUs, have become the preferred activation function for artificial neural networks. In this paper we consider two basic learning problems, assuming that the underlying data follow a generative model based on a ReLU-network, i.e., a neural network with ReLU activations. As a primarily theoretical study, we limit ourselves to a single-layer network. The first problem we study corresponds to dictionary learning in the presence of nonlinearity (modeled by the ReLU functions). Given a set of observation vectors $\mathbf{y}^i \in \mathbb{R}^d$, $i = 1, 2, \dots, n$, we aim to recover the $d \times k$ matrix $A$ and the latent vectors $\{\mathbf{c}^i\} \subset \mathbb{R}^k$ under the model $\mathbf{y}^i = \mathrm{ReLU}(A\mathbf{c}^i + \mathbf{b})$, where $\mathbf{b} \in \mathbb{R}^d$ is a random bias. We show that it is possible to recover the column space of $A$ within an error of $O(d)$ (in Frobenius norm) under certain conditions on the probability distribution of $\mathbf{b}$. The second problem we consider is that of robust recovery of the signal in the presence of outliers, i.e., large but sparse noise. In this setting, we are interested in recovering the latent vector $\mathbf{c}$ from its noisy nonlinear sketches of the form $\mathbf{v} = \mathrm{ReLU}(A\mathbf{c}) + \mathbf{e} + \mathbf{w}$, where $\mathbf{e} \in \mathbb{R}^d$ denotes the outliers with sparsity $s$ and $\mathbf{w} \in \mathbb{R}^d$ denotes the dense but small noise. This line of work has recently been studied (Soltanolkotabi, 2017) without the presence of outliers. For this problem, we show that a generalized LASSO algorithm is able to recover the signal $\mathbf{c} \in \mathbb{R}^k$ within an $\ell_2$ error of $O\big(\sqrt{\frac{(k+s)\log d}{d}}\big)$ when $A$ is a random Gaussian matrix.
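
To make the two setups concrete, the following sketch simulates the single-layer ReLU generative model and a generalized-LASSO-style recovery from outlier-corrupted ReLU sketches. The dimensions, distributions, the regularization weight `lam`, and the joint $\ell_1$-penalized treatment of the outlier vector $\mathbf{e}$ are illustrative assumptions, not the paper's exact estimator or parameter choices.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# --- Single-layer ReLU generative model: y^i = ReLU(A c^i + b) -------------
d, k, n = 200, 10, 500                      # illustrative dimensions
A = rng.normal(size=(d, k))                 # unknown d x k dictionary (Gaussian here)
C = rng.normal(size=(k, n))                 # latent vectors c^i stacked as columns
b = rng.normal(size=(d, 1))                 # random bias
Y = np.maximum(A @ C + b, 0.0)              # observations {y^i}

# --- Robust recovery from ReLU sketches with sparse outliers ---------------
c_true = rng.normal(size=k)
s = 5                                       # outlier sparsity
e_true = np.zeros(d)
e_true[rng.choice(d, size=s, replace=False)] = 10.0 * rng.normal(size=s)
w = 0.01 * rng.normal(size=d)               # dense but small noise
v = np.maximum(A @ c_true, 0.0) + e_true + w

# Generalized-LASSO-style program (a plausible instantiation, not necessarily
# the paper's exact formulation): fit the nonlinear sketches with a linear
# model in c and absorb the sparse outliers into an l1-penalized variable e.
lam = 1.0                                   # illustrative regularization weight
c_hat = cp.Variable(k)
e_hat = cp.Variable(d)
objective = cp.Minimize(cp.sum_squares(v - A @ c_hat - e_hat) + lam * cp.norm1(e_hat))
cp.Problem(objective).solve()

# The linear surrogate recovers c only up to a scaling induced by the ReLU,
# so this sketch reports alignment (cosine similarity) rather than raw error.
cos = c_hat.value @ c_true / (np.linalg.norm(c_hat.value) * np.linalg.norm(c_true))
print(f"cosine similarity between recovered and true latent vector: {cos:.3f}")
```

Because the estimator treats the ReLU sketches as if they were linear in $\mathbf{c}$, the recovered vector is compared to the true latent vector up to scale; the paper's own guarantee bounds the $\ell_2$ error directly under its stated conditions.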
