arXiv:2310.00176

Universality of max-margin classifiers

29 September 2023
Andrea Montanari
Feng Ruan
Basil Saeed
Youngtak Sohn
Abstract

Maximum margin binary classification is one of the most fundamental algorithms in machine learning, yet the role of featurization maps and the high-dimensional asymptotics of the misclassification error for non-Gaussian features are still poorly understood. We consider settings in which we observe binary labels $y_i$ and either $d$-dimensional covariates ${\boldsymbol z}_i$ that are mapped to a $p$-dimensional space via a randomized featurization map ${\boldsymbol \phi}:\mathbb{R}^d \to \mathbb{R}^p$, or $p$-dimensional features with non-Gaussian independent entries. In this context, we study two fundamental questions: $(i)$ at what overparametrization ratio $p/n$ do the data become linearly separable? $(ii)$ what is the generalization error of the max-margin classifier? Working in the high-dimensional regime in which the number of features $p$, the number of samples $n$, and the input dimension $d$ (in the nonlinear featurization setting) diverge with ratios of order one, we prove a universality result establishing that the asymptotic behavior is completely determined by the expected covariance of the feature vectors and by the covariance between features and labels. In particular, the overparametrization threshold and the generalization error can be computed within a simpler Gaussian model. The main technical challenge lies in the fact that the max-margin classifier is not the maximizer (or minimizer) of an empirical average, but rather the maximizer of a minimum over the samples. We address this by representing the classifier as an average over support vectors. Crucially, we find that in high dimensions the number of support vectors is proportional to the number of samples, which ultimately yields universality.
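
The universality claim can be probed numerically on a toy example. The sketch below is not from the paper: the problem sizes, the noisy-teacher labels, the ReLU featurization, and the use of scikit-learn's LinearSVC with a very large C as a proxy for the hard-margin (max-margin) classifier are all illustrative assumptions. It fits the classifier on ReLU random features and on Gaussian-equivalent features (same mean, same feature-label covariance, matched second moment) and compares test errors, which the universality result predicts should agree in the proportional high-dimensional regime.

```python
# Minimal sketch (not from the paper): compare the max-margin classifier on
# nonlinear random features against the matching Gaussian model. All sizes,
# the teacher model, the ReLU featurization, and the large-C LinearSVC proxy
# for the hard-margin classifier are assumptions made for illustration only.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d, p, n, n_test = 200, 600, 400, 4000      # assumed sizes, p/n = 1.5

W = rng.normal(size=(p, d)) / np.sqrt(d)   # random featurization weights
theta = rng.normal(size=d) / np.sqrt(d)    # teacher direction generating labels

# Gaussian moments of ReLU applied to a standard Gaussian input: these define
# the Gaussian-equivalent features (linear part plus independent Gaussian noise).
kappa0 = 1.0 / np.sqrt(2.0 * np.pi)              # E[relu(G)]
kappa1 = 0.5                                     # E[G * relu(G)]
kappa_star = np.sqrt(0.25 - 1.0 / (2.0 * np.pi)) # residual std of relu(G)

def make_data(n_samples, gaussian_equivalent):
    z = rng.normal(size=(n_samples, d))                        # covariates z_i
    y = np.sign(z @ theta + 0.1 * rng.normal(size=n_samples))  # noisy labels y_i
    pre = z @ W.T                                              # preactivations W z
    if gaussian_equivalent:
        # Same mean, same covariance with the labels, matched second moment,
        # with the nonlinear residual replaced by independent Gaussian noise.
        x = kappa0 + kappa1 * pre + kappa_star * rng.normal(size=(n_samples, p))
    else:
        x = np.maximum(pre, 0.0)                               # ReLU features phi(z)
    return x, y

def maxmargin_test_error(gaussian_equivalent):
    x_tr, y_tr = make_data(n, gaussian_equivalent)
    x_te, y_te = make_data(n_test, gaussian_equivalent)
    # A very large C approximates the hard-margin (max-margin) classifier
    # whenever the training data are linearly separable.
    clf = LinearSVC(C=1e6, max_iter=200_000).fit(x_tr, y_tr)
    return float(np.mean(clf.predict(x_te) != y_te))

print("test error, ReLU random features :", maxmargin_test_error(False))
print("test error, Gaussian equivalent  :", maxmargin_test_error(True))
```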
