What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory
Rahul Parhi, Robert D. Nowak
arXiv:2105.03361, 7 May 2021 (MLT)
Papers citing "What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory"
(50 of 52 citing papers shown)
The Spectral Bias of Shallow Neural Network Learning is Shaped by the Choice of Non-linearity. Justin Sahs, Ryan Pyle, Fabio Anselmi, Ankit B. Patel. 13 Mar 2025.
A Gap Between the Gaussian RKHS and Neural Networks: An Infinite-Center Asymptotic Analysis. Akash Kumar, Rahul Parhi, Mikhail Belkin. 22 Feb 2025.
Mirror Descent on Reproducing Kernel Banach Spaces. Akash Kumar, Mikhail Belkin, Parthe Pandit. 18 Nov 2024.
The Effects of Multi-Task Learning on ReLU Neural Network Functions. Julia B. Nakhleh, Joseph Shenouda, Robert D. Nowak. 29 Oct 2024.
A Lipschitz spaces view of infinitely wide shallow neural networks. Francesca Bartolucci, Marcello Carioni, José A. Iglesias, Yury Korolev, Emanuele Naldi, S. Vigogna. 18 Oct 2024.
Nonuniform random feature models using derivative information. Konstantin Pieper, Zezhong Zhang, Guannan Zhang. 03 Oct 2024.
Dimension-independent learning rates for high-dimensional classification problems. Andrés Felipe Lerma Pineda, P. Petersen, Simon Frieder, Thomas Lukasiewicz. 26 Sep 2024.
On the Geometry of Deep Learning. Randall Balestriero, Ahmed Imtiaz Humayun, Richard G. Baraniuk. 09 Aug 2024. (AI4CE)
ReLUs Are Sufficient for Learning Implicit Neural Representations. Joseph Shenouda, Yamin Zhou, Robert D. Nowak. 04 Jun 2024.
How many samples are needed to train a deep neural network? Pegah Golestaneh, Mahsa Taheri, Johannes Lederer. 26 May 2024.
Random ReLU Neural Networks as Non-Gaussian Processes. Rahul Parhi, Pakshal Bohra, Ayoub El Biari, Mehrsa Pourya, Michael Unser. 16 May 2024.
Neural reproducing kernel Banach spaces and representer theorems for deep networks. Francesca Bartolucci, E. De Vito, Lorenzo Rosasco, S. Vigogna. 13 Mar 2024.
The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models. Tolga Ergen, Mert Pilanci. 19 Dec 2023.
Learning a Sparse Representation of Barron Functions with the Inverse Scale Space Flow. T. J. Heeringa, Tim Roith, Christoph Brune, Martin Burger. 05 Dec 2023.
How do Minimum-Norm Shallow Denoisers Look in Function Space? Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry. 12 Nov 2023.
Minimum norm interpolation by perceptra: Explicit regularization and implicit bias. Jiyoung Park, Ian Pelakh, Stephan Wojtowytsch. 10 Nov 2023.
Efficient Compression of Overparameterized Deep Models through Low-Dimensional Learning Dynamics. Soo Min Kwon, Zekai Zhang, Dogyoon Song, Laura Balzano, Qing Qu. 08 Nov 2023.
Function-Space Optimality of Neural Architectures with Multivariate Nonlinearities. Rahul Parhi, Michael Unser. 05 Oct 2023.
Weighted variation spaces and approximation by shallow ReLU networks. Ronald A. DeVore, Robert D. Nowak, Rahul Parhi, Jonathan W. Siegel. 28 Jul 2023.
Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent Observation Framework. William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann. 24 Jul 2023.
A max-affine spline approximation of neural networks using the Legendre transform of a convex-concave representation. Adam Perrett, Danny Wood, Gavin Brown. 16 Jul 2023.
Sharp Convergence Rates for Matching Pursuit. Jason M. Klusowski, Jonathan W. Siegel. 15 Jul 2023.
Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space. Zhengdao Chen. 03 Jul 2023.
Scaling MLPs: A Tale of Inductive Bias. Gregor Bachmann, Sotiris Anagnostidis, Thomas Hofmann. 23 Jun 2023.
Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection. Haoyue Bai, Gregory H. Canal, Xuefeng Du, Jeongyeol Kwon, Robert D. Nowak, Yixuan Li. 15 Jun 2023. (OODD)
Nonparametric regression using over-parameterized shallow ReLU neural networks. Yunfei Yang, Ding-Xuan Zhou. 14 Jun 2023.
Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression. Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak. 25 May 2023.
ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models. Suzanna Parkinson, Greg Ongie, Rebecca Willett. 24 May 2023.
Optimal rates of approximation by shallow ReLU^k neural networks and applications to nonparametric regression. Yunfei Yang, Ding-Xuan Zhou. 04 Apr 2023.
Deep networks for system identification: a Survey. G. Pillonetto, Aleksandr Aravkin, Daniel Gedon, L. Ljung, Antônio H. Ribeiro, Thomas B. Schon. 30 Jan 2023. (OOD)
Deep Learning Meets Sparse Regularization: A Signal Processing Perspective. Rahul Parhi, Robert D. Nowak. 23 Jan 2023.
Active Learning with Neural Networks: Insights from Nonparametric Statistics. Yinglun Zhu, Robert D. Nowak. 15 Oct 2022.
PathProx: A Proximal Gradient Algorithm for Weight Decay Regularized Deep Neural Networks. Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, Robert D. Nowak. 06 Oct 2022.
Optimal bump functions for shallow ReLU networks: Weight decay, depth separation and the curse of dimensionality. Stephan Wojtowytsch. 02 Sep 2022.
Delaunay-Triangulation-Based Learning with Hessian Total-Variation Regularization. Mehrsa Pourya, Alexis Goujon, M. Unser. 16 Aug 2022.
From Kernel Methods to Neural Networks: A Unifying Variational Formulation. M. Unser. 29 Jun 2022.
Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs. Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion. 02 Jun 2022. (ODL)
The Directional Bias Helps Stochastic Gradient Descent to Generalize in Kernel Regression Models. Yiling Luo, X. Huo, Y. Mei. 29 Apr 2022.
Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive? Kaiqi Zhang, Yu-Xiang Wang. 20 Apr 2022.
Qualitative neural network approximation over R and C: Elementary proofs for analytic and polynomial activation. Josiah Park, Stephan Wojtowytsch. 25 Mar 2022.
Sparsest Univariate Learning Models Under Lipschitz Constraint. Shayan Aziznejad, Thomas Debarre, M. Unser. 27 Dec 2021.
Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation. Shayan Aziznejad, Joaquim Campos, M. Unser. 12 Dec 2021.
Tighter Sparse Approximation Bounds for ReLU Neural Networks. Carles Domingo-Enrich, Youssef Mroueh. 07 Oct 2021.
Ridgeless Interpolation with Shallow ReLU Networks in 1D is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions. Boris Hanin. 27 Sep 2021. (MLT)
Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks. Rahul Parhi, Robert D. Nowak. 18 Sep 2021.
Connections between Numerical Algorithms for PDEs and Neural Networks. Tobias Alt, Karl Schrader, M. Augustin, Pascal Peter, Joachim Weickert. 30 Jul 2021. (PINN)
Deep Quantile Regression: Mitigating the Curse of Dimensionality Through Composition. Guohao Shen, Yuling Jiao, Yuanyuan Lin, J. Horowitz, Jian Huang. 10 Jul 2021.
Characterization of the Variation Spaces Corresponding to Shallow Neural Networks. Jonathan W. Siegel, Jinchao Xu. 28 Jun 2021.
Sharp Bounds on the Approximation Rates, Metric Entropy, and n-widths of Shallow Neural Networks. Jonathan W. Siegel, Jinchao Xu. 29 Jan 2021.
From Boundaries to Bumps: when closed (extremal) contours are critical. B. Kunsberg, Steven W. Zucker. 16 May 2020.