arXiv:2302.07419

Spatially heterogeneous learning by a deep student machine

15 February 2023
H. Yoshino
Abstract

Deep neural networks (DNN) with a huge number of adjustable parameters remain largely black boxes. To shed light on the hidden layers of a DNN, we study supervised learning by a DNN of width N and depth L, consisting of NL perceptrons with c inputs each, using a statistical mechanics approach called the teacher-student setting. We consider an ensemble of student machines that exactly reproduce M sets of N-dimensional input/output relations provided by a teacher machine. We show that the problem becomes exactly solvable in what we call the 'dense limit': N ≫ c ≫ 1 and M ≫ 1 with α = M/c fixed, using the replica method developed in (H. Yoshino, 2020). We also study the model numerically by performing simple greedy MC simulations. The simulations reveal that learning by the DNN is quite heterogeneous in the network space: configurations of the teacher and the student machines are more strongly correlated within the layers closer to the input/output boundaries, while the central region remains much less correlated due to over-parametrization, in qualitative agreement with the theoretical prediction. We evaluate the generalization error of the DNN for various depths L, both theoretically and numerically. Remarkably, both the theory and the simulations suggest that the generalization ability of the student machines, which are only weakly correlated with the teacher in the center, does not vanish even in the deep limit L ≫ 1, where the system becomes heavily over-parametrized. We also consider the impact of the effective dimension D (≤ N) of the data by incorporating the hidden manifold model (S. Goldt et al., 2020) into our model. The theory implies that the loop corrections to the dense limit are enhanced by decreasing either the width N or the effective dimension D of the data. The simulations suggest that both lead to significant improvements in generalization ability.
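To make the setup concrete, below is a minimal sketch (not the paper's exact model) of the kind of teacher-student experiment the abstract describes: a deep network of sparsely connected sign-activation perceptrons of width N, depth L and fan-in c, a greedy Monte Carlo that flips individual ±1 student weights until the teacher's M training outputs are reproduced, a layer-resolved teacher-student weight overlap, and a test-set generalization error. The ±1 weights, the fixed random wiring, the parameter values and the naive overlap measure are simplifying assumptions made here for illustration; the paper's actual model and observables follow Yoshino (2020).

```python
# Toy teacher-student sketch (illustrative only, not the model of Yoshino 2020):
# deep network of sign-activation perceptrons, each unit reading c inputs from
# the previous layer; the student is trained by greedy Monte Carlo weight flips.
import numpy as np

rng = np.random.default_rng(0)
N, L, c, M, M_test = 32, 4, 8, 64, 200   # width, depth, fan-in, train/test sizes (illustrative)

# Fixed random wiring, shared by teacher and student: for each layer and each
# of the N units, pick the c source units in the previous layer.
wiring = [rng.integers(0, N, size=(N, c)) for _ in range(L)]

def init_weights():
    return [rng.choice([-1.0, 1.0], size=(N, c)) for _ in range(L)]

def forward(weights, x):
    # x: (batch, N) array of +-1 activations, propagated through L perceptron layers.
    for l in range(L):
        pre = np.einsum('bnc,nc->bn', x[:, wiring[l]], weights[l])
        x = np.where(pre >= 0.0, 1.0, -1.0)
    return x

teacher = init_weights()
X_train = rng.choice([-1.0, 1.0], size=(M, N))
Y_train = forward(teacher, X_train)

def train_error(w):
    return np.mean(forward(w, X_train) != Y_train)

# Greedy MC: flip one random student weight; keep the flip only if the
# training error does not increase (neutral moves are accepted).
student = init_weights()
err = train_error(student)
for step in range(10000):
    l, i, j = rng.integers(L), rng.integers(N), rng.integers(c)
    student[l][i, j] *= -1.0
    new_err = train_error(student)
    if new_err <= err:
        err = new_err
    else:
        student[l][i, j] *= -1.0   # reject the flip
    if err == 0.0:
        break

# Layer-resolved teacher-student weight overlap (this toy measure ignores the
# sign-gauge symmetries of the hidden units).
overlaps = [float(np.mean(teacher[l] * student[l])) for l in range(L)]
print("layer overlaps:", np.round(overlaps, 3))

# Generalization error: disagreement with the teacher on fresh inputs.
X_test = rng.choice([-1.0, 1.0], size=(M_test, N))
gen_err = float(np.mean(forward(teacher, X_test) != forward(student, X_test)))
print("train error:", err, " generalization error:", gen_err)
```

The layer-resolved overlaps printed at the end are meant to mirror the heterogeneity described in the abstract: larger near the input/output boundaries and smaller in the central layers, up to the gauge ambiguity this toy measure ignores.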
