Information Theoretic Lower Bounds for Feed-Forward Fully-Connected Deep Networks

1 July 2020
Xiaochen Yang
Jean Honorio
Abstract

In this paper, we study sample complexity lower bounds for the exact recovery of parameters and for a positive excess risk of a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools. We prove these lower bounds via the existence of a generative network characterized by a backwards data-generating process, in which the input is generated conditioned on the binary output and the network is parametrized by the weight matrices of its hidden layers. The sample complexity lower bound for exact recovery of the parameters is $\Omega(d r \log(r) + p)$, and for a positive excess risk it is $\Omega(r \log(r) + p)$, where $p$ is the dimension of the input, $r$ reflects the rank of the weight matrices, and $d$ is the number of hidden layers. To the best of our knowledge, these are the first information-theoretic lower bounds for this problem.
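
For reference, the two stated bounds can be written side by side as displayed equations; this is only a restatement of the abstract, with the symbols $p$, $r$, $d$ as defined above (the subscript labels are informal tags for the two settings, not notation from the paper):

\[
  n_{\text{exact recovery}} \;=\; \Omega\!\bigl(d\, r \log(r) + p\bigr),
  \qquad
  n_{\text{positive excess risk}} \;=\; \Omega\!\bigl(r \log(r) + p\bigr).
\]

The extra factor of $d$ in the exact-recovery bound reflects that all $d$ hidden layers' weight matrices must be identified, whereas a positive excess risk only requires distinguishing networks at the level of their predictions.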
