On the Optimal Memorization Power of ReLU Neural Networks

Abstract

We study the memorization power of feedforward ReLU neural networks. We show that such networks can memorize any $N$ points that satisfy a mild separability assumption using $\tilde{O}(\sqrt{N})$ parameters. Known VC-dimension upper bounds imply that memorizing $N$ samples requires $\Omega(\sqrt{N})$ parameters, and hence our construction is optimal up to logarithmic factors. We also give a generalized construction for networks with depth bounded by $1 \leq L \leq \sqrt{N}$, for memorizing $N$ samples using $\tilde{O}(N/L)$ parameters. This bound is also optimal up to logarithmic factors. Our construction uses weights with large bit complexity. We prove that having such a large bit complexity is both necessary and sufficient for memorization with a sub-linear number of parameters.
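As a rough illustration of the parameter budgets stated above, the following is a minimal Python sketch, not the paper's construction: `relu_net_params` is a hypothetical helper that counts the weights and biases of a fully connected ReLU network, and the loop prints the $\tilde{O}(N/L)$ budget (ignoring logarithmic factors) for a few depths $L$ between $1$ and $\sqrt{N}$.

```python
import math

def relu_net_params(widths):
    """Count weights and biases of a fully connected ReLU network
    with layer widths [d_in, h_1, ..., h_L, d_out]."""
    return sum(w_in * w_out + w_out for w_in, w_out in zip(widths, widths[1:]))

# Example: a depth-3 ReLU net on 10-dimensional inputs with scalar output.
print(relu_net_params([10, 32, 32, 32, 1]))  # 2497 parameters

# Parameter budget ~N/L (up to log factors) for depths 1 <= L <= sqrt(N).
N = 1_000_000  # number of samples to memorize
for L in (1, 100, math.isqrt(N)):
    print(f"depth L={L:>4}: budget ~ N/L = {N // L} parameters (up to log factors)")
```

At $L = \sqrt{N}$ the budget $N/L$ collapses to $\sqrt{N}$, which is how the depth-$\sqrt{N}$ case of the generalized construction recovers the $\tilde{O}(\sqrt{N})$ bound.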
