Interpolating Classifiers Make Few Mistakes
Tengyuan Liang, Benjamin Recht
28 January 2021
arXiv:2101.11815

Papers citing "Interpolating Classifiers Make Few Mistakes" (17 papers shown)

Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai
16 May 2020

Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation
Reinhard Heckel, Mahdi Soltanolkotabi
07 May 2020

Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
Niladri S. Chatterji, Philip M. Long
25 Apr 2020

Neural Kernels Without Tangents
Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara Fridovich-Keil, Ludwig Schmidt, Jonathan Ragan-Kelley, Benjamin Recht
04 Mar 2020

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur
05 Feb 2020

A Model of Double Descent for High-dimensional Binary Linear Classification
Zeyu Deng, A. Kammoun, Christos Thrampoulidis
13 Nov 2019

Benign Overfitting in Linear Regression
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler
26 Jun 2019

Exact Gaussian Processes on a Million Data Points
Ke Alexander Wang, Geoff Pleiss, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger, A. Wilson
19 Mar 2019

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani
19 Mar 2019

numpywren: serverless linear algebra
Vaishaal Shankar, K. Krauth, Qifan Pu, Eric Jonas, Shivaram Venkataraman, Ion Stoica, Benjamin Recht, Jonathan Ragan-Kelley
23 Oct 2018

Just Interpolate: Kernel "Ridgeless" Regression Can Generalize
Tengyuan Liang, Alexander Rakhlin
01 Aug 2018

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
20 Jun 2018

FALKON: An Optimal Large Scale Kernel Method
Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco
31 May 2017

Diving into the shallows: a computational perspective on large-scale shallow learning
Siyuan Ma, M. Belkin
30 Mar 2017

Faster Kernel Ridge Regression Using Sketching and Preconditioning
H. Avron, K. Clarkson, David P. Woodruff
10 Nov 2016

Optimistic Rates for Learning with a Smooth Loss
Nathan Srebro, Karthik Sridharan, Ambuj Tewari
20 Sep 2010

The spectrum of kernel random matrices
N. Karoui
04 Jan 2010