The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies

2 June 2019
Ronen Basri, David Jacobs, Yoni Kasten, S. Kritchman
arXiv: 1906.00425

Papers citing "The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies"

13 of 13 papers shown.

1. A Deep State Space Model for Rainfall-Runoff Simulations
   Yihan Wang, Lujun Zhang, Annan Yu, N. Benjamin Erichson, Tiantian Yang
   28 Jan 2025 · 78 / 1 / 0

2. SNeRV: Spectra-preserving Neural Representation for Video
   Jina Kim, Jihoo Lee, Je-Won Kang
   03 Jan 2025 · 76 / 3 / 0

3. Inductive Gradient Adjustment For Spectral Bias In Implicit Neural Representations
   Kexuan Shi, Hai Chen, Leheng Zhang, Shuhang Gu
   17 Oct 2024 · 66 / 1 / 0

4. Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture
   Sajad Movahedi, Antonio Orvieto, Seyed-Mohsen Moosavi-Dezfooli
   AI4CE, AAML · 15 Oct 2024 · 425 / 0 / 0

5. Fast Training of Sinusoidal Neural Fields via Scaling Initialization
   Taesun Yeom, Sangyoon Lee, Jaeho Lee
   07 Oct 2024 · 81 / 3 / 0

6. On the expressiveness and spectral bias of KANs
   Yixuan Wang, Jonathan W. Siegel, Ziming Liu, Thomas Y. Hou
   02 Oct 2024 · 73 / 11 / 0

7. Implicit Kinematic Policies: Unifying Joint and Cartesian Action Spaces in End-to-End Robot Learning
   Aditya Ganapathi, Peter R. Florence, Jacob Varley, Kaylee Burns, Ken Goldberg, Andy Zeng
   03 Mar 2022 · 103 / 16 / 0

8. Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
   Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu
   ODL · 21 Nov 2018 · 133 / 448 / 0

9. Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
   Yuanzhi Li, Yingyu Liang
   MLT · 03 Aug 2018 · 142 / 652 / 0

10. On the Spectral Bias of Neural Networks
    Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville
    22 Jun 2018 · 98 / 1,408 / 0

11. Identity Matters in Deep Learning
    Moritz Hardt, Tengyu Ma
    OOD · 14 Nov 2016 · 63 / 399 / 0

12. The Power of Depth for Feedforward Neural Networks
    Ronen Eldan, Ohad Shamir
    12 Dec 2015 · 154 / 731 / 0

13. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
    Andrew M. Saxe, James L. McClelland, Surya Ganguli
    ODL · 20 Dec 2013 · 130 / 1,830 / 0