
Memory capacity of two layer neural networks with smooth activations

SIAM Journal on Mathematics of Data Science (SIMODS), 2023
Main: 16 pages
Bibliography: 3 pages
Appendix: 1 page
Abstract

Determining the memory capacity of two layer neural networks with $m$ hidden neurons and input dimension $d$ (i.e., $md+m$ total trainable parameters), which refers to the largest size of general data the network can memorize, is a fundamental machine learning question. For polynomial activations of sufficiently high degree, such as $x^k$ with $\binom{d+k}{d-1}\ge n$ (where $n$ is the number of data points), and real analytic activations, such as sigmoids and smoothed rectified linear units (smoothed ReLUs), we establish a lower bound of $\lfloor md/2\rfloor$ and optimality up to a factor of approximately 2. Analogous prior results were limited to Heaviside and ReLU activations. In order to analyze general real analytic activations, we derive the precise generic rank of the network's Jacobian, which can be written in terms of Hadamard powers and the Khatri-Rao product. Our analysis extends classical linear algebraic facts about the rank of Hadamard powers. Overall, our approach differs from prior works on memory capacity and holds promise for extending to deeper models and other architectures.
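The central object in the abstract is the Jacobian of the network outputs with respect to the first-layer weights, whose generic rank the paper expresses via Hadamard powers and the Khatri-Rao product. The snippet below is a minimal illustrative sketch (not the paper's construction or proof): it assembles this Jacobian for a small two layer network as a row-wise Khatri-Rao product and checks its numerical rank against $\min(n, md)$. The dimensions, the tanh activation, and the Gaussian data and weights are assumptions made for illustration, not values taken from the paper.

```python
import numpy as np

# Sketch only: for f(x) = sum_j v_j * sigma(w_j . x) with a smooth activation,
# the Jacobian of (f(x_1), ..., f(x_n)) with respect to the first-layer weight
# matrix W is a row-wise Khatri-Rao product, and its rank at generic parameters
# can be inspected numerically. n, d, m, and sigma are illustrative choices.

rng = np.random.default_rng(0)
n, d, m = 200, 10, 25            # n data points, input dimension d, m hidden neurons

X = rng.standard_normal((n, d))  # generic data
W = rng.standard_normal((m, d))  # first-layer weights
v = rng.standard_normal(m)       # second-layer weights

sigma_prime = lambda z: 1.0 / np.cosh(z) ** 2   # derivative of tanh (a real analytic activation)

# A[i, j] = v_j * sigma'(w_j . x_i); the Jacobian w.r.t. vec(W) is the row-wise
# Khatri-Rao product of A and X, an n-by-(m*d) matrix.
A = sigma_prime(X @ W.T) * v                    # shape (n, m)
J = np.einsum('ij,ik->ijk', A, X).reshape(n, m * d)

print("Jacobian shape:", J.shape)
print("numerical rank:", np.linalg.matrix_rank(J), " vs  min(n, m*d) =", min(n, m * d))
```

For generic data and weights this numerical rank matches $\min(n, md)$, which is the kind of genericity statement the paper makes rigorous for real analytic activations.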
