Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

Abstract

We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$, where $\mathrm{OPT}_{\gamma}$ is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha \geq 1$, that are nearly matching for a range of parameters. Specifically, for the natural setting where $\alpha$ is any constant greater than one, we provide an essentially tight complexity characterization. On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\epsilon^2\gamma^2))$ samples (which is optimal) and runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$. On the negative side, we show that {\em any} constant factor approximate proper learner has runtime $\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2-o(1)}}$, assuming the Exponential Time Hypothesis.
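
To make the objective concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's algorithm) that estimates the empirical $\gamma$-margin error of a candidate unit-norm halfspace $w$. It assumes the standard convention $\mathrm{err}_{\gamma}(w) = \Pr[\, y \langle w, x \rangle < \gamma \,]$, so that $\mathrm{OPT}_{\gamma}$ is the minimum of this quantity over unit vectors; the function and variable names are hypothetical.

    import numpy as np

    def margin_error(w, X, y, gamma):
        """Fraction of examples that the unit-norm halfspace w misclassifies
        or classifies with margin below gamma (assumed convention:
        err_gamma(w) = Pr[ y * <w, x> < gamma ])."""
        w = w / np.linalg.norm(w)          # halfspaces are unit vectors
        margins = y * (X @ w)              # signed margin of each example
        return float(np.mean(margins < gamma))

    # Toy usage: n points on the unit sphere in d dimensions, labels from a
    # true halfspace with a small fraction flipped (agnostic-style noise).
    rng = np.random.default_rng(0)
    d, n, gamma = 5, 1000, 0.1
    X = rng.normal(size=(n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    w_true = np.zeros(d); w_true[0] = 1.0
    y = np.sign(X @ w_true)
    y[rng.random(n) < 0.05] *= -1          # flip 5% of the labels
    print(margin_error(w_true, X, y, gamma))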
