The Spectral Bias of Polynomial Neural Networks
arXiv:2202.13473, 27 February 2022
Moulik Choraria, L. Dadi, Grigorios G. Chrysos, Julien Mairal, Volkan Cevher
Papers citing "The Spectral Bias of Polynomial Neural Networks" (22 papers shown)
Fast Training of Sinusoidal Neural Fields via Scaling Initialization
Taesun Yeom, Sangyoon Lee, Jaeho Lee
07 Oct 2024

Neural Redshift: Random Networks are not Random Functions
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad
04 Mar 2024

Early-stopped neural networks are consistent
Ziwei Ji, Justin D. Li, Matus Telgarsky
10 Jun 2021

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Matthew Tancik, Pratul P. Srinivasan, B. Mildenhall, Sara Fridovich-Keil, N. Raghavan, Utkarsh Singhal, R. Ramamoorthi, Jonathan T. Barron, Ren Ng
18 Jun 2020

Frequency Bias in Neural Networks for Input of Non-Uniform Density
Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, S. Kritchman
10 Mar 2020

Π-nets: Deep Polynomial Neural Networks
Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Yannis Panagakis, Jiankang Deng, Stefanos Zafeiriou
08 Mar 2020

Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu
03 Dec 2019

A Fine-Grained Spectral Perspective on Neural Networks
Greg Yang, Hadi Salman
24 Jul 2019

On the Inductive Bias of Neural Tangent Kernels
A. Bietti, Julien Mairal
29 May 2019

Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
Zhi-Qin John Xu, Yaoyu Zhang, Yan Xiao, Zheng Ma
19 Jan 2019

On Lazy Training in Differentiable Programming
Lénaïc Chizat, Edouard Oyallon, Francis R. Bach
19 Dec 2018

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
12 Dec 2018

Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
04 Oct 2018

On the Spectral Bias of Neural Networks
Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville
22 Jun 2018

Implicit Bias of Gradient Descent on Linear Convolutional Networks
Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro
01 Jun 2018

Deep learning generalizes because the parameter-function map is biased towards simple functions
Guillermo Valle Pérez, Chico Q. Camargo, A. Louis
22 May 2018

On the Power of Over-parametrization in Neural Networks with Quadratic Activation
S. Du, Jason D. Lee
03 Mar 2018

A Closer Look at Memorization in Deep Networks
Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
16 Jun 2017

SORT: Second-Order Response Transform for Visual Recognition
Yan Wang, Lingxi Xie, Chenxi Liu, Ya Zhang, Wenjun Zhang, Alan Yuille
20 Mar 2017

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
10 Nov 2016

Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
01 Sep 2014

Sharp analysis of low-rank kernel matrix approximations
Francis R. Bach
09 Aug 2012