Should Under-parameterized Student Networks Copy or Average Teacher Weights?

3 November 2023
Berfin Şimşek
Amire Bendjeddou
Wulfram Gerstner
Johanni Brea
Abstract

Any continuous function $f^*$ can be approximated arbitrarily well by a neural network with sufficiently many neurons $k$. We consider the case where $f^*$ is itself a neural network with one hidden layer and $k$ neurons. Approximating $f^*$ with a neural network with $n < k$ neurons can thus be seen as fitting an under-parameterized "student" network with $n$ neurons to a "teacher" network with $k$ neurons. Since the student has fewer neurons than the teacher, it is unclear whether each of the $n$ student neurons should copy one of the teacher neurons or rather average a group of teacher neurons. For shallow neural networks with erf activation function and the standard Gaussian input distribution, we prove that "copy-average" configurations are critical points if the teacher's incoming vectors are orthonormal and its outgoing weights are unitary. Moreover, the optimum among such configurations is reached when $n-1$ student neurons each copy one teacher neuron and the $n$-th student neuron averages the remaining $k-n+1$ teacher neurons. For the student network with $n=1$ neuron, we additionally provide a closed-form solution of the non-trivial critical point(s) for commonly used activation functions by solving an equivalent constrained optimization problem. Empirically, we find for the erf activation function that gradient flow converges either to the optimal copy-average critical point or to another point where each student neuron approximately copies a different teacher neuron. Finally, we find similar results for the ReLU activation function, suggesting that the optimal solution of under-parameterized networks has a universal structure.
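To make the comparison concrete, below is a minimal numerical sketch (not the authors' code) of the setting in the abstract: a teacher with orthonormal incoming vectors and unit outgoing weights, a "copy" student whose $n$ neurons each reproduce a distinct teacher neuron, and a "copy-average" student in which $n-1$ neurons copy and the $n$-th averages the remaining $k-n+1$ teacher neurons. The plain mean and group-size outgoing weight of the averaging neuron are naive illustrative choices rather than the paper's exact critical point, and all sizes are hypothetical; the population loss under standard Gaussian input is estimated by Monte Carlo.

import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
d, k, n = 10, 8, 5  # input dimension, teacher width, student width (n < k); illustrative sizes

W_teacher = np.eye(d)[:k]  # k orthonormal incoming vectors (rows)
a_teacher = np.ones(k)     # unit ("unitary") outgoing weights

def network(X, W, a):
    # One-hidden-layer network: sum_j a_j * erf(w_j . x)
    return erf(X @ W.T) @ a

def population_loss(W_s, a_s, num_samples=200_000):
    # Monte Carlo estimate of E_x[(f*(x) - f(x))^2] for x ~ N(0, I_d)
    X = rng.standard_normal((num_samples, d))
    return np.mean((network(X, W_teacher, a_teacher) - network(X, W_s, a_s)) ** 2)

# "Copy" student: n neurons each copy a distinct teacher neuron;
# the remaining k - n teacher neurons are left unmatched.
loss_copy = population_loss(W_teacher[:n], np.ones(n))

# "Copy-average" student: n - 1 neurons copy, the n-th averages the remaining
# k - n + 1 teacher neurons (naive mean; outgoing weight set to the group size).
W_ca = np.vstack([W_teacher[:n - 1], W_teacher[n - 1:].mean(axis=0, keepdims=True)])
a_ca = np.concatenate([np.ones(n - 1), [float(k - n + 1)]])
loss_ca = population_loss(W_ca, a_ca)

print(f"copy only:    {loss_copy:.3f}")   # roughly 1.39 with these sizes
print(f"copy-average: {loss_ca:.3f}")     # roughly 0.47 with these sizes

Even with these naive choices, the copy-average configuration attains a noticeably lower population loss than copying alone, illustrating why averaging the leftover teacher neurons helps when $n < k$.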

arXiv: 2311.01644