Classification vs regression in overparameterized regimes: Does the loss function matter?
arXiv:2005.08054 · 16 May 2020
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai

Papers citing "Classification vs regression in overparameterized regimes: Does the loss function matter?"

Showing 43 of 93 citing papers.

Tight bounds for minimum l1-norm interpolation of noisy data
  Guillaume Wang, Konstantin Donhauser, Fanny Yang · 10 Nov 2021

Harmless interpolation in regression and classification with structured features
  Andrew D. McRae, Santhosh Karnik, Mark A. Davenport, Vidya Muthukumar · 09 Nov 2021

Selective Regression Under Fairness Criteria
  Abhin Shah, Yuheng Bu, Joshua K. Lee, Subhro Das, Rameswar Panda, P. Sattigeri, G. Wornell · 28 Oct 2021

On the Regularization of Autoencoders
  Harald Steck, Dario Garcia-Garcia · 21 Oct 2021 · SSL, AI4CE

Towards Understanding the Data Dependency of Mixup-style Training
  Muthuraman Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge · 14 Oct 2021 · UQCV

Information-Theoretic Characterization of the Generalization Error for Iterative Semi-Supervised Learning
  Haiyun He, Hanshu Yan, Vincent Y. F. Tan · 03 Oct 2021

Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective
  Adhyyan Narang, Vidya Muthukumar, A. Sahai · 27 Sep 2021 · SILM, AAML

Reconstruction on Trees and Low-Degree Polynomials
  Frederic Koehler, Elchanan Mossel · 14 Sep 2021

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
  Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk · 06 Sep 2021

Interpolation can hurt robust generalization even when there is no noise
  Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang · 05 Aug 2021

Memorization in Deep Neural Networks: Does the Loss Function matter?
  Deep Patel, P. Sastry · 21 Jul 2021 · TDI

Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation
  Ke Wang, Vidya Muthukumar, Christos Thrampoulidis · 21 Jun 2021

The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization
  Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk · 14 Jun 2021

Double Descent and Other Interpolation Phenomena in GANs
  Lorenzo Luzi, Yehuda Dar, Richard Baraniuk · 07 Jun 2021

Towards an Understanding of Benign Overfitting in Neural Networks
  Zhu Li, Zhi-Hua Zhou, A. Gretton · 06 Jun 2021 · MLT

Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
  M. Belkin · 29 May 2021

Support vector machines and linear regression coincide with very high-dimensional features
  Navid Ardeshir, Clayton Sanford, Daniel J. Hsu · 28 May 2021

AdaBoost and robust one-bit compressed sensing
  Geoffrey Chinot, Felix Kuchelmeister, Matthias Löffler, Sara van de Geer · 05 May 2021

RATT: Leveraging Unlabeled Data to Guarantee Generalization
  Saurabh Garg, Sivaraman Balakrishnan, J. Zico Kolter, Zachary Chase Lipton · 01 May 2021

Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
  Yuan Cao, Quanquan Gu, M. Belkin · 28 Apr 2021

How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser, Mingqi Wu, Fanny Yang · 09 Apr 2021

Benign Overfitting of Constant-Stepsize SGD for Linear Regression
  Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham Kakade · 23 Mar 2021

The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression
  Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk · 09 Mar 2021 · MLT

On the interplay between data structure and loss function in classification problems
  Stéphane d'Ascoli, Marylou Gabrié, Levent Sagun, Giulio Biroli · 09 Mar 2021

Label-Imbalanced and Group-Sensitive Classification under Overparameterization
  Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis · 02 Mar 2021

Multiplicative Reweighting for Robust Neural Network Optimization
  Noga Bar, Tomer Koren, Raja Giryes · 24 Feb 2021 · OOD, NoLa

Understanding and Mitigating Accuracy Disparity in Regression
  Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon, Han Zhao · 24 Feb 2021

Distilling Double Descent
  Andrew Cotter, A. Menon, Harikrishna Narasimhan, A. S. Rawat, Sashank J. Reddi, Yichen Zhou · 13 Feb 2021

Interpolating Classifiers Make Few Mistakes
  Tengyuan Liang, Benjamin Recht · 28 Jan 2021

Mixed-Privacy Forgetting in Deep Networks
  Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, M. Polito, Stefano Soatto · 24 Dec 2020 · CLL, MU

On the robustness of minimum norm interpolators and regularized empirical risk minimizers
  Geoffrey Chinot, Matthias Löffler, Sara van de Geer · 01 Dec 2020

On Generalization of Adaptive Methods for Over-parameterized Linear Regression
  Vatsal Shah, Soumya Basu, Anastasios Kyrillidis, Sujay Sanghavi · 28 Nov 2020 · AI4CE

Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization
  Ke Wang, Christos Thrampoulidis · 18 Nov 2020

Benign overfitting in ridge regression
  Alexander Tsigler, Peter L. Bartlett · 29 Sep 2020

On the proliferation of support vectors in high dimensions
  Daniel J. Hsu, Vidya Muthukumar, Ji Xu · 22 Sep 2020

Implicit Regularization via Neural Feature Alignment
  A. Baratin, Thomas George, César Laurent, R. Devon Hjelm, Guillaume Lajoie, Pascal Vincent, Simon Lacoste-Julien · 03 Aug 2020

Prediction in latent factor regression: Adaptive PCR and beyond
  Xin Bing, F. Bunea, Seth Strimas-Mackey, M. Wegkamp · 20 Jul 2020

When Does Preconditioning Help or Hurt Generalization?
  S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu · 18 Jun 2020

Evaluation of Neural Architectures Trained with Square Loss vs Cross-Entropy in Classification Tasks
  Like Hui, M. Belkin · 12 Jun 2020 · UQCV, AAML, VLM

Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
  Niladri S. Chatterji, Philip M. Long · 25 Apr 2020

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
  Trevor Hastie, Andrea Montanari, Saharon Rosset, R. Tibshirani · 19 Mar 2019

Newton-MR: Inexact Newton Method With Minimum Residual Sub-problem Solver
  Fred Roosta, Yang Liu, Peng Xu, Michael W. Mahoney · 30 Sep 2018

High-dimensional generalized linear models and the lasso
  Sara van de Geer · 04 Apr 2008