Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$, where $\mathrm{OPT}_{\gamma}$ is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha$, that are nearly-matching for a range of parameters. Specifically, for the natural setting that $\alpha$ is any constant bigger than one, we provide an essentially tight complexity characterization. On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\epsilon^2\gamma^2))$ samples (which is optimal) and runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$. On the negative side, we show that {\em any} constant factor approximate proper learner has runtime $\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2-o(1)}}$, assuming the Exponential Time Hypothesis.
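To make the central quantity concrete: the $\gamma$-margin error rate of a halfspace $w$ counts the labeled points on which $w$ fails to achieve margin at least $\gamma$, i.e. those with $y \langle w, x \rangle < \gamma$. A minimal sketch of its empirical version (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def empirical_margin_error(w, X, y, gamma):
    """Fraction of labeled examples (x, y) on which the halfspace w
    fails to achieve margin gamma, i.e. y * <w, x> < gamma.
    Illustrative sketch of the gamma-margin error rate."""
    return float(np.mean(y * (X @ w) < gamma))

# Toy data on the unit ball in R^2 with a unit-norm halfspace.
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
y = np.array([1, 1, -1])
w = np.array([1.0, 0.0])
# The point (0, 1) has margin 0 < 0.5, so one of three points fails.
print(empirical_margin_error(w, X, y, 0.5))
```

The paper's optimal rate $\mathrm{OPT}_{\gamma}$ is the minimum of this quantity (in expectation over the distribution) over all halfspaces, and a proper learner must output a halfspace with misclassification error at most $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$.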