arXiv:2012.09720
Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise

17 December 2020
Ilias Diakonikolas
D. Kane
Abstract

We study the problem of PAC learning halfspaces with Massart noise. Given labeled samples $(x, y)$ from a distribution $D$ on $\mathbb{R}^{d} \times \{\pm 1\}$ such that the marginal $D_x$ on the examples is arbitrary and the label $y$ of example $x$ is generated from the target halfspace corrupted by a Massart adversary with flipping probability $\eta(x) \leq \eta \leq 1/2$, the goal is to compute a hypothesis with small misclassification error. The best known $\mathrm{poly}(d, 1/\epsilon)$-time algorithms for this problem achieve error of $\eta + \epsilon$, which can be far from the optimal bound of $\mathrm{OPT} + \epsilon$, where $\mathrm{OPT} = \mathbf{E}_{x \sim D_x}[\eta(x)]$. While it is known that achieving $\mathrm{OPT} + o(1)$ error requires super-polynomial time in the Statistical Query model, a large gap remains between known upper and lower bounds. In this work, we essentially characterize the efficient learnability of Massart halfspaces in the Statistical Query (SQ) model. Specifically, we show that no efficient SQ algorithm for learning Massart halfspaces on $\mathbb{R}^d$ can achieve error better than $\Omega(\eta)$, even if $\mathrm{OPT} = 2^{-\log^{c}(d)}$, for any universal constant $c \in (0, 1)$. Furthermore, when the noise upper bound $\eta$ is close to $1/2$, our error lower bound becomes $\eta - o_{\eta}(1)$, where the $o_{\eta}(1)$ term goes to $0$ when $\eta$ approaches $1/2$. Our results provide strong evidence that known learning algorithms for Massart halfspaces are nearly best possible, thereby resolving a longstanding open problem in learning theory.
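To make the noise model concrete, the following is a minimal sketch (not part of the paper) of how Massart-noisy samples can be generated: a target halfspace $w$ assigns clean labels $\mathrm{sign}(\langle w, x \rangle)$, and an adversary flips each label independently with a point-dependent probability $\eta(x) \leq \eta$. The Gaussian marginal, the dimension, and the choice of $\eta(x)$ below are all illustrative assumptions; the model itself places no restriction on $D_x$ or on how the adversary picks $\eta(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5        # dimension (illustrative)
n = 10_000   # sample size (illustrative)
eta = 0.3    # uniform upper bound on the flipping probability

# Target halfspace w: the clean label of x is sign(<w, x>).
w = rng.normal(size=d)
w /= np.linalg.norm(w)

# The marginal D_x may be arbitrary; a standard Gaussian is used
# here purely for illustration.
X = rng.normal(size=(n, d))
clean = np.sign(X @ w)
clean[clean == 0] = 1  # break ties on the boundary

# Massart adversary: each point x gets its own flip probability
# eta(x) <= eta. Here eta(x) is drawn uniformly from [0, eta],
# so OPT = E[eta(x)] is roughly eta / 2 (an arbitrary choice).
eta_x = rng.uniform(0.0, eta, size=n)
flips = rng.random(n) < eta_x
y = np.where(flips, -clean, clean)

opt = eta_x.mean()         # empirical estimate of OPT = E_{x ~ D_x}[eta(x)]
err = np.mean(y != clean)  # misclassification error of the target halfspace
print(f"OPT ~ {opt:.3f}, error of target halfspace ~ {err:.3f}")
```

Note that even the target halfspace $w$ incurs error $\mathrm{OPT}$ on such data, which is why $\mathrm{OPT} + \epsilon$ is the benchmark; the paper's lower bound says efficient SQ algorithms cannot get below $\Omega(\eta)$ even when $\mathrm{OPT}$ is far smaller than $\eta$.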
