ResearchTrend.AI

On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective

arXiv: 1905.10826
26 May 2019
Lili Su, Pengkun Yang
MLT

Papers citing "On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective"

15 papers shown
Feature Learning Beyond the Edge of Stability
Dávid Terjék (MLT)
18 Feb 2025

Tuning Frequency Bias of State Space Models
Annan Yu, Dongwei Lyu, S. H. Lim, Michael W. Mahoney, N. Benjamin Erichson
02 Oct 2024

NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun
06 Apr 2023

Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani, Anirbit Mukherjee
20 Oct 2022

Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$
R. Gentile, G. Welper (ODL)
17 Sep 2022

Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks
Benjamin Bowman, Guido Montúfar
12 Jan 2022

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis
05 Dec 2021

Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, V. Cevher
02 Nov 2021

FedKD: Communication Efficient Federated Learning via Knowledge Distillation
Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie (FedML)
30 Aug 2021

A Revision of Neural Tangent Kernel-based Approaches for Neural Networks
Kyungsu Kim, A. Lozano, Eunho Yang (AAML)
02 Jul 2020

When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
18 Jun 2020

Optimization for deep learning: theory and algorithms
Ruoyu Sun (ODL)
19 Dec 2019

Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu
03 Dec 2019

Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak (NoLa)
27 Mar 2019

Approximation by Combinations of ReLU and Squared ReLU Ridge Functions with $\ell^1$ and $\ell^0$ Controls
Jason M. Klusowski, Andrew R. Barron
26 Jul 2016