Robustly Learning a Single Neuron via Sharpness

Abstract
We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial label noise. We give an efficient algorithm that, for a broad family of activations including ReLUs, approximates the optimal $L_2^2$-error within a constant factor. Our algorithm applies under much milder distributional assumptions compared to prior work. The key ingredient enabling our results is a novel connection to local error bounds from optimization theory.
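To make the problem setup concrete, the following is a minimal sketch (not the paper's algorithm) of fitting a single ReLU neuron $x \mapsto \max(0, \langle w, x \rangle)$ by minimizing the empirical $L_2^2$-loss when a small fraction of labels has been adversarially corrupted. The Gaussian marginal, corruption rate, target vector, and the plain gradient-descent baseline below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the single-neuron setting with adversarial label noise.
# NOT the constant-factor approximation algorithm from the paper; a plain
# (sub)gradient-descent baseline on the nonconvex L_2^2 objective, under an
# assumed Gaussian marginal and an assumed corruption rate eta.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 10, 5000, 0.05                # dimension, sample size, corruption rate (assumed)

w_star = rng.normal(size=d)               # unknown target direction (assumed unit norm)
w_star /= np.linalg.norm(w_star)

X = rng.normal(size=(n, d))               # Gaussian marginal, purely for illustration
y = np.maximum(X @ w_star, 0.0)           # clean labels from the target ReLU neuron

# Adversarial label noise: an eta-fraction of labels is replaced arbitrarily.
corrupt = rng.choice(n, size=int(eta * n), replace=False)
y[corrupt] = rng.uniform(-5.0, 5.0, size=corrupt.size)

def l2sq_loss(w):
    """Empirical L_2^2 loss of the ReLU neuron parameterized by w."""
    return np.mean((np.maximum(X @ w, 0.0) - y) ** 2)

# Subgradient descent on the empirical objective.
w = rng.normal(size=d) / np.sqrt(d)
lr = 0.1
for _ in range(500):
    pred = np.maximum(X @ w, 0.0)
    grad = (2.0 / n) * X.T @ ((pred - y) * (X @ w > 0))
    w -= lr * grad

print(f"L2^2 loss at target w_star: {l2sq_loss(w_star):.4f}")
print(f"L2^2 loss at learned w:     {l2sq_loss(w):.4f}")
```

The point of the sketch is only to show the objective being approximated: the paper's contribution is an efficient algorithm whose $L_2^2$-error is within a constant factor of the optimum, with guarantees obtained via local error bounds (sharpness) rather than the unqualified descent heuristic above.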