Tight bounds for minimum $\ell_1$-norm interpolation of noisy data
Guillaume Wang, Konstantin Donhauser, Fanny Yang
10 November 2021 · arXiv:2111.05987

Papers citing "Tight bounds for minimum $\ell_1$-norm interpolation of noisy data"

17 papers

Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality
Marko Medvedev, Gal Vardi, Nathan Srebro
05 Sep 2024

Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
Fanghui Liu, L. Dadi, Volkan Cevher
29 Apr 2024

Noisy Interpolation Learning with Shallow Univariate ReLU Networks
Nirmit Joshi, Gal Vardi, Nathan Srebro
28 Jul 2023

An Agnostic View on the Cost of Overfitting in (Kernel) Ridge Regression
Lijia Zhou, James B. Simon, Gal Vardi, Nathan Srebro
22 Jun 2023

Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign?
Erin E. George, Michael Murray, W. Swartworth, Deanna Needell
16 Jun 2023 · MLT

Benign Overfitting in Deep Neural Networks under Lazy Training
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, Francesco Locatello, Volkan Cevher
30 May 2023 · AI4CE

Bayesian Analysis for Over-parameterized Linear Model via Effective Spectra
Tomoya Wakayama, Masaaki Imaizumi
25 May 2023

Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization
Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro
02 Mar 2023

Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression
Mo Zhou, Rong Ge
01 Feb 2023

Strong inductive biases provably prevent harmless interpolation
Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang
18 Jan 2023

Tight bounds for maximum $\ell_1$-margin classifiers
Stefan Stojanovic, Konstantin Donhauser, Fanny Yang
07 Dec 2022

A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models
Lijia Zhou, Frederic Koehler, Pragya Sur, Danica J. Sutherland, Nathan Srebro
21 Oct 2022

Surprises in adversarially-trained linear regression
Antônio H. Ribeiro, Dave Zachariah, Thomas B. Schön
25 May 2022 · AAML

Fast Rates for Noisy Interpolation Require Rethinking the Effects of Inductive Bias
Konstantin Donhauser, Nicolò Ruggeri, Stefan Stojanovic, Fanny Yang
07 Mar 2022

Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
11 Feb 2022 · MLT

Foolish Crowds Support Benign Overfitting
Niladri S. Chatterji, Philip M. Long
06 Oct 2021

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
Yehuda Dar, Richard G. Baraniuk
12 Jun 2020