On the Sample Complexity of Two-Layer Networks: Lipschitz vs. Element-Wise Lipschitz Activation

Abstract

We investigate the sample complexity of bounded two-layer neural networks with different activation functions. In particular, we consider the class $\mathcal{H} = \left\{\textbf{x}\mapsto \langle \textbf{v}, \sigma \circ W\textbf{x} + \textbf{b} \rangle : \textbf{b}\in\mathbb{R}^{\mathcal{T}}, W \in \mathbb{R}^{\mathcal{T}\times d}, \textbf{v} \in \mathbb{R}^{\mathcal{T}}\right\}$, where the spectral norms of $W$ and $\textbf{v}$ are bounded by $O(1)$, the Frobenius norm of $W$ is bounded from its initialization by $R > 0$, and $\sigma$ is a Lipschitz activation function. We prove that if $\sigma$ is element-wise, then the sample complexity of $\mathcal{H}$ depends only logarithmically on the width, and that this bound is tight up to logarithmic factors. We further show that the element-wise property of $\sigma$ is essential for such a logarithmic dependence on the width, in the sense that there exist non-element-wise Lipschitz activation functions whose sample complexity is linear in the width, for widths that can be up to exponential in the input dimension. For the upper bound, we use the recent Approximate Description Length (ADL) approach to norm-based bounds of arXiv:1910.05697. We further develop new techniques and tools for this approach that we hope will inspire future work.
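
To make the function class concrete, the following is a minimal NumPy sketch of a member of $\mathcal{H}$, evaluated once with an element-wise Lipschitz activation (ReLU) and once with a non-element-wise Lipschitz map (coordinate sorting). The width, input dimension, and weight scales are illustrative assumptions, the norm constraints from the abstract are not enforced, and the sorting map is only a hypothetical example of a non-element-wise activation, not the construction used in the paper's lower bound.

```python
import numpy as np

d, T = 16, 512          # input dimension d and width T (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((T, d)) / np.sqrt(d)   # hidden-layer weights, T x d
b = rng.standard_normal(T) * 0.1               # offset vector in R^T
v = rng.standard_normal(T) / np.sqrt(T)        # output weights in R^T
# Note: the O(1) spectral-norm and Frobenius-distance constraints from the
# abstract are not enforced in this sketch.

def relu(z):
    # Element-wise 1-Lipschitz activation; the logarithmic-in-width
    # sample-complexity bound applies to activations of this kind.
    return np.maximum(z, 0.0)

def sorting_activation(z):
    # A non-element-wise 1-Lipschitz map R^T -> R^T (sorts the coordinates).
    # Hypothetical illustration only, not the paper's lower-bound construction.
    return np.sort(z)

def h(x, sigma):
    # A member of H: x -> <v, sigma(W x) + b>.
    return v @ (sigma(W @ x) + b)

x = rng.standard_normal(d)
print(h(x, relu), h(x, sorting_activation))
```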
