
On the Complexity of Learning Sparse Functions with Statistical and Gradient Queries

Abstract

The goal of this paper is to investigate the complexity of gradient algorithms when learning sparse functions (juntas). We introduce a type of Statistical Queries ($\mathsf{SQ}$), which we call Differentiable Learning Queries ($\mathsf{DLQ}$), to model gradient queries on a specified loss with respect to an arbitrary model. We provide a tight characterization of the query complexity of $\mathsf{DLQ}$ for learning the support of a sparse function over generic product distributions. This complexity crucially depends on the loss function. For the squared loss, $\mathsf{DLQ}$ matches the complexity of Correlation Statistical Queries ($\mathsf{CSQ}$), potentially much worse than $\mathsf{SQ}$. But for other simple loss functions, including the $\ell_1$ loss, $\mathsf{DLQ}$ always achieves the same complexity as $\mathsf{SQ}$. We also provide evidence that $\mathsf{DLQ}$ can indeed capture learning with (stochastic) gradient descent by showing that it correctly describes the complexity of learning with a two-layer neural network in the mean field regime and linear scaling.
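To see informally why the choice of loss matters, consider a differentiable model $f_\theta$ and labeled data $(x, y)$; the notation here is illustrative and not taken from the paper's formal setup. Exchanging derivative and expectation, a population gradient query on the squared loss decomposes as

$$
\partial_\theta\,\mathbb{E}\Big[\tfrac{1}{2}\big(f_\theta(x)-y\big)^2\Big]
= \mathbb{E}\big[f_\theta(x)\,\partial_\theta f_\theta(x)\big]
- \mathbb{E}\big[y\,\partial_\theta f_\theta(x)\big],
$$

so the only label-dependent information is a correlation query $\mathbb{E}[y \cdot h(x)]$ with $h(x) = \partial_\theta f_\theta(x)$, which is why squared-loss gradients behave like $\mathsf{CSQ}$. By contrast, for the $\ell_1$ loss (taking a subgradient at zero),

$$
\partial_\theta\,\mathbb{E}\big[\,|f_\theta(x)-y|\,\big]
= \mathbb{E}\big[\operatorname{sign}\big(f_\theta(x)-y\big)\,\partial_\theta f_\theta(x)\big],
$$

a query that depends on $(x, y)$ jointly through a non-linear function and thus has the flavor of a general $\mathsf{SQ}$.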
