Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces
and ReLU Regression under Gaussian Marginals
International Conference on Machine Learning (ICML), 2023
Main: 12 pages
Bibliography: 4 pages
Appendix: 7 pages
Abstract
We study the task of agnostically learning halfspaces under the Gaussian distribution. Specifically, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^n \times \{\pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. We prove a near-optimal computational hardness result for this task, under the widely believed sub-exponential time hardness of the Learning with Errors (LWE) problem. Prior hardness results are either qualitatively suboptimal or apply to restricted families of algorithms. Our techniques extend to yield near-optimal lower bounds for related problems, including ReLU regression.
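To make the problem setup concrete, here is a minimal Python sketch of the agnostic halfspace-learning task. All specifics (the dimension, sample size, 10% label-flip rate, and variable names) are illustrative assumptions, not details from the paper: it draws examples whose marginal on $\mathbf{x}$ is the standard Gaussian, corrupts the labels of a reference halfspace, and evaluates the empirical 0-1 loss that a learner would need to drive down to roughly $\mathrm{OPT} + \epsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 5, 10_000  # dimension and number of samples (illustrative)

# Draw examples whose marginal on x is the standard Gaussian N(0, I_n).
X = rng.standard_normal((m, n))

# In the agnostic model the labels may be arbitrary. Here we simulate one
# such distribution: labels of a reference halfspace, flipped w.p. 0.1.
w_star = rng.standard_normal(n)
w_star /= np.linalg.norm(w_star)
y = np.sign(X @ w_star)
y[rng.random(m) < 0.1] *= -1

def zero_one_loss(w, X, y):
    """Empirical 0-1 loss of the homogeneous halfspace x -> sign(w . x)."""
    return np.mean(np.sign(X @ w) != y)

# OPT is the 0-1 loss of the best-fitting halfspace; in this simulation
# w_star attains roughly the 10% flip rate. The learner's goal is to
# output any hypothesis with loss at most OPT + eps.
print("loss of w_star:          ", zero_one_loss(w_star, X, y))
print("loss of random halfspace:", zero_one_loss(rng.standard_normal(n), X, y))
```

Note that nothing in this sketch constitutes an algorithm achieving $\mathrm{OPT} + \epsilon$; the paper's point is that, assuming sub-exponential hardness of LWE, no efficient algorithm can do so in general.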
