Memory capacity of two layer neural networks with smooth activations
- MLT
Determining the memory capacity of two layer neural networks with $m$ hidden neurons and input dimension $d$ (i.e., $md + 2m$ total trainable parameters), which refers to the largest size of general data the network can memorize, is a fundamental machine learning question. For polynomial activations of sufficiently high degree, such as $x^k$ for sufficiently large $k$, and real analytic activations, such as sigmoids and smoothed rectified linear units (smoothed ReLUs), we establish a lower bound of $\lfloor md/2 \rfloor$ and optimality up to a factor of approximately 2. Analogous prior results were limited to Heaviside and ReLU activations. In order to analyze general real analytic activations, we derive the precise generic rank of the network's Jacobian, which can be written in terms of Hadamard powers and the Khatri-Rao product. Our analysis extends classical linear algebraic facts about the rank of Hadamard powers. Overall, our approach differs from prior works on memory capacity and holds promise for extending to deeper models and other architectures.
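To make the Jacobian structure mentioned above concrete, here is a small numerical sketch. It is not the paper's construction: the network form (no biases), the sigmoid activation, the specific dimensions, and the expectation that the generic rank equals $\min(n, md)$ are all assumptions chosen for illustration of the Khatri-Rao structure.

```python
# Illustrative sketch only (assumed setup, not the paper's proof): for a two layer
# network f(x) = sum_j v_j * sigma(w_j . x), the Jacobian of the outputs on n data
# points with respect to the first-layer weights W has rows given by a row-wise
# Khatri-Rao (Kronecker) product; for a smooth non-polynomial activation its rank
# at a generic point is expected to be min(n, m*d).
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 30, 4, 5                        # data points, hidden neurons, input dim
X = rng.standard_normal((n, d))           # generic data points
W = rng.standard_normal((m, d))           # first-layer weights (generic point)
v = rng.standard_normal(m)                # second-layer weights

def sigma_prime(z):
    """Derivative of the sigmoid activation."""
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

# d f(x_i) / d W_{jk} = v_j * sigma'(w_j . x_i) * x_{ik}, so row i of the Jacobian
# is kron(v * sigma'(W x_i), x_i): a row-wise Khatri-Rao structure.
S = sigma_prime(X @ W.T) * v              # n x m matrix of scaled derivatives
J = np.einsum('im,id->imd', S, X).reshape(n, m * d)

print("rank(J) =", np.linalg.matrix_rank(J), "| min(n, m*d) =", min(n, m * d))
```

On random instances like this one the computed rank matches $\min(n, md)$; the abstract's stated contribution is to characterize this generic rank exactly for general real analytic activations, where Hadamard powers of the data matrix enter through the activation's power series.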