Learning a Single Neuron with Adversarial Label Noise via Gradient Descent
- MLT

We study the fundamental problem of learning a single neuron, i.e., a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations $\sigma: \mathbb{R} \to \mathbb{R}$, with respect to the $L_2^2$-loss in the presence of adversarial label noise. Specifically, we are given labeled examples from a distribution $D$ on $(\mathbf{x}, y) \in \mathbb{R}^d \times \mathbb{R}$ such that there exists $\mathbf{w}^\ast \in \mathbb{R}^d$ achieving $F(\mathbf{w}^\ast) = \epsilon$, where $F(\mathbf{w}) = \mathbf{E}_{(\mathbf{x}, y) \sim D}[(\sigma(\mathbf{w} \cdot \mathbf{x}) - y)^2]$. The goal of the learner is to output a hypothesis vector $\widehat{\mathbf{w}}$ such that $F(\widehat{\mathbf{w}}) \le C\,\epsilon$ with high probability, where $C > 1$ is a universal constant. As our main contribution, we give efficient constant-factor approximate learners for a broad class of distributions (including log-concave distributions) and activation functions. Concretely, for the class of isotropic log-concave distributions, we obtain the following important corollaries: For the logistic activation, we obtain the first polynomial-time constant-factor approximation (even under the Gaussian distribution). Our algorithm has sample complexity $\widetilde{O}(d/\epsilon)$, which is tight within polylogarithmic factors. For the ReLU activation, we give an efficient algorithm with sample complexity $\widetilde{O}(d \, \mathrm{polylog}(1/\epsilon))$. Prior to our work, the best known constant-factor approximate learner had sample complexity $\widetilde{\Omega}(d/\epsilon)$. In both of these settings, our algorithms are simple, performing gradient descent on the (regularized) $L_2^2$-loss. The correctness of our algorithms relies on novel structural results that we establish, showing that (essentially all) stationary points of the underlying non-convex loss are approximately optimal.
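The following is a minimal sketch of the kind of procedure the abstract describes: plain gradient descent on a regularized empirical $L_2^2$-loss for a single neuron with logistic activation. It is illustrative only, not the paper's exact algorithm; the step size, regularization weight, iteration count, and the synthetic corruption model in the usage example are hypothetical choices made for this example.

```python
# Illustrative sketch (not the paper's exact algorithm): gradient descent on the
# regularized empirical L_2^2-loss for a single neuron with logistic activation.
# step_size, lambda_reg, and n_iters are hypothetical choices, not values
# prescribed by the paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learn_single_neuron(X, y, lambda_reg=1e-3, step_size=0.1, n_iters=1000):
    """Minimize F_hat(w) = mean((sigmoid(X @ w) - y)^2) + (lambda_reg / 2) * ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        preds = sigmoid(X @ w)
        residual = preds - y
        # Gradient of the squared loss: 2 * (sigma(w.x) - y) * sigma'(w.x) * x,
        # plus the gradient of the L2 regularizer.
        grad = (2.0 / n) * X.T @ (residual * preds * (1.0 - preds)) + lambda_reg * w
        w -= step_size * grad
    return w

# Usage on synthetic data: labels follow sigma(w* . x), with a small fraction of
# labels replaced arbitrarily to mimic adversarial label noise.
rng = np.random.default_rng(0)
n, d = 5000, 10
X = rng.standard_normal((n, d))          # isotropic Gaussian marginals
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
y = sigmoid(X @ w_star)
corrupt = rng.random(n) < 0.05           # corrupt 5% of the labels
y[corrupt] = rng.random(corrupt.sum())
w_hat = learn_single_neuron(X, y)
print("empirical L2^2 loss of w_hat:", np.mean((sigmoid(X @ w_hat) - y) ** 2))
```

The structural message of the paper is that, for the distributions and activations it covers, the stationary points such a descent procedure converges to are approximately optimal despite the non-convexity of the loss.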