Classification vs regression in overparameterized regimes: Does the loss function matter?

Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai
16 May 2020

Papers citing "Classification vs regression in overparameterized regimes: Does the loss function matter?"

50 of 93 citing papers shown
• Structural Entropy Guided Probabilistic Coding · Xiang Huang, Hao Peng, Li Sun, Hui Lin, Chunyang Liu, Jiang Cao, Philip S. Yu · 12 Dec 2024
• Analyzing Deep Transformer Models for Time Series Forecasting via Manifold Learning · Ilya Kaufman, Omri Azencot · 17 Oct 2024 · AI4TS
• Provable Weak-to-Strong Generalization via Benign Overfitting · David X. Wu, A. Sahai · 06 Oct 2024
• Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality · Marko Medvedev, Gal Vardi, Nathan Srebro · 05 Sep 2024
• EUR-USD Exchange Rate Forecasting Based on Information Fusion with Large Language Models and Deep Learning Methods · Hongcheng Ding, Xuanze Zhao, Zixiao Jiang, Shamsul Nahar Abdullah, Deshinta Arrova Dewi · 23 Aug 2024
• First-Order Manifold Data Augmentation for Regression Learning · Ilya Kaufman, Omri Azencot · 16 Jun 2024
• Class-wise Activation Unravelling the Engima of Deep Double Descent · Yufei Gu · 13 May 2024
• Sharp analysis of out-of-distribution error for "importance-weighted" estimators in the overparameterized regime · Kuo-Wei Lai, Vidya Muthukumar · 10 May 2024
• On the Benefits of Over-parameterization for Out-of-Distribution Generalization · Yifan Hao, Yong Lin, Difan Zou, Tong Zhang · 26 Mar 2024 · OODD, OOD
• Benign overfitting in leaky ReLU networks with moderate input dimension · Kedar Karhadkar, Erin E. George, Michael Murray, Guido Montúfar, Deanna Needell · 11 Mar 2024 · MLT
• Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance · Chiraag Kaushik, Ran Liu, Chi-Heng Lin, Amrit Khera, Matthew Y Jin, Wenrui Ma, Vidya Muthukumar, Eva L. Dyer · 18 Feb 2024
• Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers · Alicia Curth, Alan Jeffares, M. Schaar · 02 Feb 2024 · UQCV
• The Surprising Harmfulness of Benign Overfitting for Adversarial Robustness · Yifan Hao, Tong Zhang · 19 Jan 2024 · AAML
• Unraveling the Enigma of Double Descent: An In-depth Analysis through the Lens of Learned Feature Space · Yufei Gu, Xiaoqing Zheng, T. Aste · 20 Oct 2023
• Transformers as Support Vector Machines · Davoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, Samet Oymak · 31 Aug 2023
• Noisy Interpolation Learning with Shallow Univariate ReLU Networks · Nirmit Joshi, Gal Vardi, Nathan Srebro · 28 Jul 2023
• A Unified Approach to Controlling Implicit Regularization via Mirror Descent · Haoyuan Sun, Khashayar Gatmiry, Kwangjun Ahn, Navid Azizan · 24 Jun 2023 · AI4CE
• Precise Asymptotic Generalization for Multiclass Classification with Overparameterized Linear Models · David X. Wu, A. Sahai · 23 Jun 2023
• Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign? · Erin E. George, Michael Murray, W. Swartworth, Deanna Needell · 16 Jun 2023 · MLT
• Bayesian Analysis for Over-parameterized Linear Model via Effective Spectra · Tomoya Wakayama, Masaaki Imaizumi · 25 May 2023
• From Tempered to Benign Overfitting in ReLU Neural Networks · Guy Kornowski, Gilad Yehudai, Ohad Shamir · 24 May 2023
• Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension · Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart · 23 May 2023 · MLT
• New Equivalences Between Interpolation and SVMs: Kernels and Structured Features · Chiraag Kaushik, Andrew D. McRae, Mark A. Davenport, Vidya Muthukumar · 03 May 2023
• General Loss Functions Lead to (Approximate) Interpolation in High Dimensions · Kuo-Wei Lai, Vidya Muthukumar · 13 Mar 2023
• Benign Overfitting for Two-layer ReLU Convolutional Neural Networks · Yiwen Kou, Zi-Yuan Chen, Yuanzhou Chen, Quanquan Gu · 07 Mar 2023 · MLT
• Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization · Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro · 02 Mar 2023
• Are Gaussian data all you need? Extents and limits of universality in high-dimensional generalized linear estimation · Luca Pesce, Florent Krzakala, Bruno Loureiro, Ludovic Stephan · 17 Feb 2023
• Interpolation Learning With Minimum Description Length · N. Manoj, Nathan Srebro · 14 Feb 2023
• Sketched Ridgeless Linear Regression: The Role of Downsampling · Xin Chen, Yicheng Zeng, Siyue Yang, Qiang Sun · 02 Feb 2023
• Tight bounds for maximum $\ell_1$-margin classifiers · Stefan Stojanovic, Konstantin Donhauser, Fanny Yang · 07 Dec 2022
• Margin-based sampling in high dimensions: When being active is less efficient than staying passive · A. Tifrea, Jacob Clarysse, Fanny Yang · 01 Dec 2022
• Evaluating the Impact of Loss Function Variation in Deep Learning for Classification · Simon Dräger, Jannik Dunkelau · 28 Oct 2022
• A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models · Lijia Zhou, Frederic Koehler, Pragya Sur, Danica J. Sutherland, Nathan Srebro · 21 Oct 2022
• The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective · Chi-Heng Lin, Chiraag Kaushik, Eva L. Dyer, Vidya Muthukumar · 10 Oct 2022
• Deep Linear Networks can Benignly Overfit when Shallow Ones Do · Niladri S. Chatterji, Philip M. Long · 19 Sep 2022
• Intersection of Parallels as an Early Stopping Criterion · Ali Vardasbi, Maarten de Rijke, Mostafa Dehghani · 19 Aug 2022 · MoMe
• SphereFed: Hyperspherical Federated Learning · Xin Dong, S. Zhang, Ang Li, H. T. Kung · 19 Jul 2022 · FedML
• A law of adversarial risk, interpolation, and label noise · Daniel Paleka, Amartya Sanyal · 08 Jul 2022 · NoLa, AAML
• On how to avoid exacerbating spurious correlations when models are overparameterized · Tina Behnia, Ke Wang, Christos Thrampoulidis · 25 Jun 2022
• Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence · Margalit Glasgow, Colin Wei, Mary Wootters, Tengyu Ma · 16 Jun 2022
• Generalization for multiclass classification with overparameterized linear models · Vignesh Subramanian, Rahul Arya, A. Sahai · 03 Jun 2022 · AI4CE
• A Blessing of Dimensionality in Membership Inference through Regularization · Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk · 27 May 2022
• Fast Rates for Noisy Interpolation Require Rethinking the Effects of Inductive Bias · Konstantin Donhauser, Nicolò Ruggeri, Stefan Stojanovic, Fanny Yang · 07 Mar 2022
• Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data · Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett · 11 Feb 2022 · MLT
• Error Scaling Laws for Kernel Classification under Source and Capacity Conditions · Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová · 29 Jan 2022
• Kernel Methods and Multi-layer Perceptrons Learn Linear Models in High Dimensions · Mojtaba Sahraee-Ardakan, M. Emami, Parthe Pandit, S. Rangan, A. Fletcher · 20 Jan 2022
• Towards Sample-efficient Overparameterized Meta-learning · Yue Sun, Adhyyan Narang, Halil Ibrahim Gulluk, Samet Oymak, Maryam Fazel · 16 Jan 2022 · BDL
• Is Importance Weighting Incompatible with Interpolating Classifiers? · Ke Alexander Wang, Niladri S. Chatterji, Saminul Haque, Tatsunori Hashimoto · 24 Dec 2021
• Understanding Square Loss in Training Overparametrized Neural Network Classifiers · Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li · 07 Dec 2021 · UQCV, AAML
• Minimax Supervised Clustering in the Anisotropic Gaussian Mixture Model: A new take on Robust Interpolation · Stanislav Minsker, M. Ndaoud, Yiqiu Shen · 13 Nov 2021