How do infinite width bounded norm networks look in function space?
Pedro H. P. Savarese, Itay Evron, Daniel Soudry, Nathan Srebro
arXiv:1902.05040, 13 February 2019
Papers citing "How do infinite width bounded norm networks look in function space?" (39 papers)
The Effects of Multi-Task Learning on ReLU Neural Network Functions. Julia B. Nakhleh, Joseph Shenouda, Robert D. Nowak. 29 Oct 2024.

When does compositional structure yield compositional generalization? A kernel theory. Samuel Lippl, Kim Stachenfeld. 26 May 2024.

Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks. Fanghui Liu, L. Dadi, V. Cevher. 29 Apr 2024.

Function-Space Optimality of Neural Architectures with Multivariate Nonlinearities. Rahul Parhi, Michael Unser. 05 Oct 2023.

ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models. Suzanna Parkinson, Greg Ongie, Rebecca Willett. 24 May 2023.

Convex Dual Theory Analysis of Two-Layer Convolutional Neural Networks with Soft-Thresholding. Chunyan Xiong, Meng Lu, Xiaotong Yu, Jian-Peng Cao, Zhong Chen, D. Guo, X. Qu. 14 Apr 2023.

Penalising the biases in norm regularisation enforces sparsity. Etienne Boursier, Nicolas Flammarion. 02 Mar 2023.

Deep Learning Meets Sparse Regularization: A Signal Processing Perspective. Rahul Parhi, Robert D. Nowak. 23 Jan 2023.

Learning Single-Index Models with Shallow Neural Networks. A. Bietti, Joan Bruna, Clayton Sanford, M. Song. 27 Oct 2022.

Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions. Arthur Jacot. 29 Sep 2022.

Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty. Thomas George, Guillaume Lajoie, A. Baratin. 19 Sep 2022.

On the Implicit Bias in Deep-Learning Algorithms. Gal Vardi. 26 Aug 2022.

Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias. Navid Ardeshir, Daniel J. Hsu, Clayton Sanford. 10 Jun 2022.

Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs. Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion. 02 Jun 2022.

On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias. Itay Safran, Gal Vardi, Jason D. Lee. 18 May 2022.

Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers. Arda Sahiner, Tolga Ergen, Batu Mehmet Ozturkler, John M. Pauly, Morteza Mardani, Mert Pilanci. 17 May 2022.

Fully-Connected Network on Noncompact Symmetric Space and Ridgelet Transform based on Helgason-Fourier Analysis. Sho Sonoda, Isao Ishikawa, Masahiro Ikeda. 03 Mar 2022.

On Regularizing Coordinate-MLPs. Sameera Ramasinghe, L. MacDonald, Simon Lucey. 01 Feb 2022.

Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation. Shayan Aziznejad, Joaquim Campos, M. Unser. 12 Dec 2021.

Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks. A. Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli. 03 Nov 2021.

Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs. Tolga Ergen, Mert Pilanci. 11 Oct 2021.

Tighter Sparse Approximation Bounds for ReLU Neural Networks. Carles Domingo-Enrich, Youssef Mroueh. 07 Oct 2021.

Ridgeless Interpolation with Shallow ReLU Networks in 1D is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions. Boris Hanin. 27 Sep 2021.

Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks. Rahul Parhi, Robert D. Nowak. 18 Sep 2021.

Scaled ReLU Matters for Training Vision Transformers. Pichao Wang, Xue Wang, Haowen Luo, Jingkai Zhou, Zhipeng Zhou, Fan Wang, Hao Li, R. L. Jin. 08 Sep 2021.

What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory. Rahul Parhi, Robert D. Nowak. 07 May 2021.

On Energy-Based Models with Overparametrized Shallow Neural Networks. Carles Domingo-Enrich, A. Bietti, Eric Vanden-Eijnden, Joan Bruna. 15 Apr 2021.

Experiments with Rich Regime Training for Deep Learning. Xinyan Li, A. Banerjee. 26 Feb 2021.

Complexity Measures for Neural Networks with General Activation Functions Using Path-based Norms. Zhong Li, Chao Ma, Lei Wu. 14 Sep 2020.

Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy. E. Moroshko, Suriya Gunasekar, Blake E. Woodworth, J. Lee, Nathan Srebro, Daniel Soudry. 13 Jul 2020.

Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization. Sang Michael Xie, Tengyu Ma, Percy Liang. 29 Jun 2020.

Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks. Francis Williams, Matthew Trager, Joan Bruna, Denis Zorin. 24 Jun 2020.

A Spectral Analysis of Dot-product Kernels. M. Scetbon, Zaïd Harchaoui. 28 Feb 2020.

Convex Geometry and Duality of Over-parameterized Neural Networks. Tolga Ergen, Mert Pilanci. 25 Feb 2020.

Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks. Mert Pilanci, Tolga Ergen. 24 Feb 2020.

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss. Lénaïc Chizat, Francis R. Bach. 11 Feb 2020.

The Role of Neural Network Activation Functions. Rahul Parhi, Robert D. Nowak. 05 Oct 2019.

Kernel and Rich Regimes in Overparametrized Models. Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, J. Lee, Daniel Soudry, Nathan Srebro. 13 Jun 2019.

Norm-Based Capacity Control in Neural Networks. Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. 27 Feb 2015.