In this paper, we study sample complexity lower bounds for the exact recovery of parameters and for achieving a positive excess risk in a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools. We prove these lower bounds via the existence of a generative network characterized by a backwards data generating process, in which the input is generated based on the binary output and the network is parametrized by the weight parameters of the hidden layers. The resulting lower bounds, for exact recovery of parameters and for a positive excess risk, are stated in terms of the dimension of the input, the rank of the weight matrices, and the number of hidden layers. To the best of our knowledge, our results are the first information-theoretic lower bounds of this kind.
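The backwards data generating process can be illustrated informally as follows. This is a minimal sketch under assumed choices (a Gaussian label-dependent seed, ReLU activations, square low-rank weight matrices, and illustrative values for the input dimension, rank, and depth), not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d-dimensional input, rank-r weight matrices,
# l hidden layers (names follow the abstract; values are illustrative).
d, r, l = 10, 3, 2

def low_rank_weight(dim, rank):
    # Weight matrix of rank at most `rank`; the rank constraint on the
    # hidden layers is modelled here as a simple factorization (assumption).
    return rng.standard_normal((dim, rank)) @ rng.standard_normal((rank, dim))

# Fixed hidden-layer parameters that a learner would try to recover exactly.
weights = [low_rank_weight(d, r) for _ in range(l)]

def generate_sample():
    # Backwards data generating process: the binary output y is drawn first,
    # and the input x is then generated from y through the hidden layers.
    y = rng.integers(0, 2)                      # binary output
    x = rng.standard_normal(d) + (2 * y - 1)    # label-dependent seed (assumed form)
    for W in weights:
        x = np.maximum(W @ x, 0.0)              # ReLU hidden layer (assumed activation)
    return x, y

samples = [generate_sample() for _ in range(5)]
```

In this reading, the dataset consists of (input, output) pairs whose inputs are produced from the labels by the fixed hidden-layer weights, which is what makes parameter recovery from samples a well-posed question for the lower bounds.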