
Support vector machines and linear regression coincide with very high-dimensional features
Navid Ardeshir, Clayton Sanford, Daniel J. Hsu
arXiv:2105.14084 · 28 May 2021

Papers citing "Support vector machines and linear regression coincide with very high-dimensional features"

19 papers shown
Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Yuan Cao, Quanquan Gu, M. Belkin
28 Apr 2021
Dimensionality reduction, regularization, and generalization in overparameterized regressions
Ningyuan Huang, D. Hogg, Soledad Villar
23 Nov 2020
Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization
Ke Wang, Christos Thrampoulidis
18 Nov 2020
On the proliferation of support vectors in high dimensions
Daniel J. Hsu, Vidya Muthukumar, Ji Xu
22 Sep 2020
Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai
16 May 2020
Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
Niladri S. Chatterji, Philip M. Long
25 Apr 2020
A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur
05 Feb 2020
Risk of the Least Squares Minimum Norm Estimator under the Spike Covariance Model
Yasaman Mahdaviyeh, Zacharie Naulet
31 Dec 2019
The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari
14 Aug 2019
Benign Overfitting in Linear Regression
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler
26 Jun 2019
Understanding overfitting peaks in generalization error: Analytical risk curves for $l_2$ and $l_1$ penalized interpolation
P. Mitra
09 Jun 2019
Exact high-dimensional asymptotics for Support Vector Machine
Haoyang Liu
13 May 2019
Harmless interpolation of noisy data in regression
Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, A. Sahai
21 Mar 2019
Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani
19 Mar 2019
Two models of double descent for weak features
M. Belkin, Daniel J. Hsu, Ji Xu
18 Mar 2019
The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression
Emmanuel J. Candes, Pragya Sur
25 Apr 2018
The Implicit Bias of Gradient Descent on Separable Data
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro
27 Oct 2017
Estimation in high dimensions: a geometric perspective
Roman Vershynin
20 May 2014
Observed Universality of Phase Transitions in High-Dimensional Geometry, with Implications for Modern Data Analysis and Signal Processing
D. Donoho, Jared Tanner
14 Jun 2009