Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network
Tianyang Hu, Wei Cao, Cong Lin, Guang Cheng
6 July 2020 · arXiv:2007.02486

Papers citing "Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network"

10 of 10 citing papers shown

On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory
Guhan Chen, Yicheng Li, Qian Lin
AAML · 08 Oct 2024

Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective
Jingren Liu, Zhong Ji, YunLong Yu, Jiale Cao, Yanwei Pang, Jungong Han, Xuelong Li
CLL · 24 Jul 2024

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
Moritz Haas, David Holzmüller, Ulrike von Luxburg, Ingo Steinwart
MLT · 23 May 2023

On the Eigenvalue Decay Rates of a Class of Neural-Network Related Kernel Functions Defined on General Domains
Yicheng Li, Zixiong Yu, Y. Cotronis, Qian Lin
04 May 2023

Learning Lipschitz Functions by GD-trained Shallow Overparameterized ReLU Neural Networks
Ilja Kuzborskij, Csaba Szepesvári
28 Dec 2022

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu
MLT · 29 Sep 2022

Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent
Yiping Lu, Jose H. Blanchet, Lexing Ying
15 May 2022

Understanding Square Loss in Training Overparametrized Neural Network Classifiers
Tianyang Hu, Jun Wang, Wei Cao, Zhenguo Li
UQCV, AAML · 07 Dec 2021

On the Equivalence between Neural Network and Support Vector Machine
Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng
AAML · 11 Nov 2021

Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu
03 Dec 2019