Reconciling modern machine learning practice and the bias-variance trade-off
arXiv:1812.11118 · 28 December 2018
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
Papers citing "Reconciling modern machine learning practice and the bias-variance trade-off" (50 of 315 shown):
1. Symmetry Teleportation for Accelerated Optimization · B. Zhao, Nima Dehmamy, Robin Walters, Rose Yu · ODL · 21 May 2022
2. Large Neural Networks Learning from Scratch with Very Few Data and without Explicit Regularization · C. Linse, T. Martinetz · SSL, VLM · 18 May 2022
3. Deep learning of quantum entanglement from incomplete measurements · Dominik Koutný, L. Ginés, M. Moczała-Dusanowska, Sven Höfling, Christian Schneider, Ana Predojevic, M. Ježek · 03 May 2022
4. High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation · Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang · MLT · 03 May 2022
5. Benign Overfitting in Time Series Linear Models with Over-Parameterization · Shogo H. Nakakita, Masaaki Imaizumi · AI4TS · 18 Apr 2022
6. Concentration of Random Feature Matrices in High-Dimensions · Zhijun Chen, Hayden Schaeffer, Rachel A. Ward · 14 Apr 2022
7. GemNet-OC: Developing Graph Neural Networks for Large and Diverse Molecular Simulation Datasets · Johannes Gasteiger, Muhammed Shuaibi, Anuroop Sriram, Stephan Günnemann, Zachary W. Ulissi, C. L. Zitnick, Abhishek Das · AI4TS, MLAU · 06 Apr 2022
8. Random Features Model with General Convex Regularization: A Fine Grained Analysis with Precise Asymptotic Learning Curves · David Bosch, Ashkan Panahi, Ayça Özçelikkale, Devdatt Dubhash · MLT · 06 Apr 2022
9. Random matrix analysis of deep neural network weight matrices · M. Thamm, Max Staats, B. Rosenow · 28 Mar 2022
10. The Mathematics of Artificial Intelligence · Gitta Kutyniok · 16 Mar 2022
11. Vision-Based Manipulators Need to Also See from Their Hands · Kyle Hsu, Moo Jin Kim, Rafael Rafailov, Jiajun Wu, Chelsea Finn · 15 Mar 2022
12. Generalization Through The Lens Of Leave-One-Out Error · Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi · 07 Mar 2022
13. Estimation under Model Misspecification with Fake Features · Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén · 07 Mar 2022
14. Contrasting random and learned features in deep Bayesian linear regression · Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan · BDL, MLT · 01 Mar 2022
15. Amortized Proximal Optimization · Juhan Bae, Paul Vicol, Jeff Z. HaoChen, Roger C. Grosse · ODL · 28 Feb 2022
16. Robust Training under Label Noise by Over-parameterization · Sheng Liu, Zhihui Zhu, Qing Qu, Chong You · NoLa, OOD · 28 Feb 2022
17. Benefit of Interpolation in Nearest Neighbor Algorithms · Yue Xing, Qifan Song, Guang Cheng · 23 Feb 2022
18. Overparametrization improves robustness against adversarial attacks: A replication study · Ali Borji · AAML · 20 Feb 2022
19. Deep Ensembles Work, But Are They Necessary? · Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, R. Zemel, John P. Cunningham · OOD, UQCV · 14 Feb 2022
20. HARFE: Hard-Ridge Random Feature Expansion · Esha Saha, Hayden Schaeffer, Giang Tran · 06 Feb 2022
21. Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions · Aaron Mishkin, Arda Sahiner, Mert Pilanci · OffRL · 02 Feb 2022
22. Interplay between depth of neural networks and locality of target functions · Takashi Mori, Masakuni Ueda · 28 Jan 2022
23. To what extent should we trust AI models when they extrapolate? · Roozbeh Yousefzadeh, Xuenan Cao · 27 Jan 2022
24. On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations · M. Virgolin, Saverio Fracaros · CML · 22 Jan 2022
25. Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape · Devansh Bisla, Jing Wang, A. Choromańska · 20 Jan 2022
26. Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next · S. Cuomo, Vincenzo Schiano Di Cola, F. Giampaolo, G. Rozza, Maizar Raissi, F. Piccialli · PINN · 14 Jan 2022
27. Benign Overfitting in Adversarially Robust Linear Classification · Jinghui Chen, Yuan Cao, Quanquan Gu · AAML, SILM · 31 Dec 2021
28. Recur, Attend or Convolve? On Whether Temporal Modeling Matters for Cross-Domain Robustness in Action Recognition · Sofia Broomé, Ernest Pokropek, Boyu Li, Hedvig Kjellström · 22 Dec 2021
29. Approximation of functions with one-bit neural networks · C. S. Güntürk, Weilin Li · 16 Dec 2021
30. SCORE: Approximating Curvature Information under Self-Concordant Regularization · Adeyemi Damilare Adeoye, Alberto Bemporad · 14 Dec 2021
31. Multi-scale Feature Learning Dynamics: Insights for Double Descent · Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie · 06 Dec 2021
32. On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons · Fangshuo Liao, Anastasios Kyrillidis · 05 Dec 2021
33. Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models · Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré · 30 Nov 2021
34. Tight bounds for minimum l1-norm interpolation of noisy data · Guillaume Wang, Konstantin Donhauser, Fanny Yang · 10 Nov 2021
35. There is no Double-Descent in Random Forests · Sebastian Buschjäger, K. Morik · 08 Nov 2021
36. Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks · A. Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli · MLT · 03 Nov 2021
37. Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model · A. Bodin, N. Macris · 22 Oct 2021
38. Conditioning of Random Feature Matrices: Double Descent and Generalization Error · Zhijun Chen, Hayden Schaeffer · 21 Oct 2021
39. Behavioral Experiments for Understanding Catastrophic Forgetting · Samuel J. Bell, Neil D. Lawrence · 20 Oct 2021
40. A-Optimal Active Learning · Tue Boesen, E. Haber · 18 Oct 2021
41. The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks · R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur · MoMe · 12 Oct 2021
42. Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting · Chengyu Dong, Liyuan Liu, Jingbo Shang · NoLa, AAML · 07 Oct 2021
43. On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications · Ziqiao Wang, Yongyi Mao · FedML, MLT · 07 Oct 2021
44. Trustworthy AI: From Principles to Practices · Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou · 04 Oct 2021
45. Learning through atypical "phase transitions" in overparameterized neural networks · Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina · 01 Oct 2021
46. Powerpropagation: A sparsity inducing weight reparameterisation · Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh · 01 Oct 2021
47. Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective · Adhyyan Narang, Vidya Muthukumar, A. Sahai · SILM, AAML · 27 Sep 2021
48. Is the Number of Trainable Parameters All That Actually Matters? · A. Chatelain, Amine Djeghri, Daniel Hesslow, Julien Launay, Iacopo Poli · 24 Sep 2021
49. Learning the hypotheses space from data through a U-curve algorithm · Diego Marcondes, Adilson Simonis, Junior Barrera · 08 Sep 2021
50. A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning · Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk · 06 Sep 2021