Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins

1 October 2020
Spencer Frei
Yuan Cao
Quanquan Gu
Abstract

We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of linear halfspaces. If $\mathsf{OPT}$ is the best classification error achieved by a halfspace, by appealing to the notion of soft margins we are able to show that gradient descent finds halfspaces with classification error $\tilde O(\mathsf{OPT}^{1/2}) + \varepsilon$ in $\mathrm{poly}(d, 1/\varepsilon)$ time and sample complexity for a broad class of distributions that includes log-concave isotropic distributions as a subclass. Along the way we answer a question recently posed by Ji et al. (2020) on how the tail behavior of a loss function can affect sample complexity and runtime guarantees for gradient descent.
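To make the setting concrete, here is a minimal sketch of the general approach the abstract describes: running gradient descent on a convex surrogate of the zero-one loss to learn a linear halfspace, then measuring zero-one classification error. The logistic surrogate, step size, iteration count, and synthetic data below are illustrative assumptions, not the paper's exact procedure or parameters.

```python
# Sketch only: gradient descent on a convex surrogate (here the logistic loss)
# for learning a halfspace sign(<w, x>). Hyperparameters and the synthetic data
# are assumptions for illustration, not values from the paper.
import numpy as np

def gd_halfspace(X, y, lr=0.1, n_iters=1000):
    """Gradient descent on the logistic surrogate loss.

    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    Returns a weight vector w defining the halfspace sign(<w, x>).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        margins = y * (X @ w)  # y_i <w, x_i>
        # Gradient of (1/n) * sum_i log(1 + exp(-margin_i)) with respect to w
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
        w -= lr * grad
    return w

def zero_one_error(w, X, y):
    """Fraction of points misclassified by the halfspace sign(<w, x>)."""
    return np.mean(np.sign(X @ w) != y)

if __name__ == "__main__":
    # Synthetic isotropic Gaussian data (a log-concave isotropic distribution)
    # with a small fraction of flipped labels standing in for agnostic noise.
    rng = np.random.default_rng(0)
    n, d = 5000, 20
    X = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d)
    y = np.sign(X @ w_star)
    flip = rng.random(n) < 0.05
    y[flip] *= -1
    w_hat = gd_halfspace(X, y)
    print("zero-one error:", zero_one_error(w_hat, X, y))
```

In this toy run the flipped labels play the role of the noise that makes the problem agnostic, and the reported zero-one error is the quantity the paper bounds by $\tilde O(\mathsf{OPT}^{1/2}) + \varepsilon$.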
