Early-stopped neural networks are consistent
Ziwei Ji, Justin D. Li, Matus Telgarsky
10 June 2021 · arXiv:2106.05932

Papers citing "Early-stopped neural networks are consistent" (9 of 9 shown)
EdgeSync: Faster Edge-model Updating via Adaptive Continuous Learning for Video Data Drift
Peng Zhao, Runchu Dong, Guiqin Wang, Cong Zhao
05 Jun 2024

Connecting NTK and NNGP: A Unified Theoretical Framework for Wide Neural Network Learning Dynamics
Yehonatan Avidan, Qianyi Li, H. Sompolinsky
08 Sep 2023

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart
23 May 2023

Automatic Data Augmentation via Invariance-Constrained Learning
Ignacio Hounie, Luiz F. O. Chamon, Alejandro Ribeiro
29 Sep 2022

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran
14 Jul 2022

The Spectral Bias of Polynomial Neural Networks
Moulik Choraria, L. Dadi, Grigorios G. Chrysos, Julien Mairal, V. Cevher
27 Feb 2022

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
15 Feb 2022

Agnostic Learnability of Halfspaces via Logistic Loss
Ziwei Ji, Kwangjun Ahn, Pranjal Awasthi, Satyen Kale, Stefani Karp
31 Jan 2022

Achieving Small Test Error in Mildly Overparameterized Neural Networks
Shiyu Liang, Ruoyu Sun, R. Srikant
24 Apr 2021