
Nonparametric regression using deep neural networks with ReLU activation function
Johannes Schmidt-Hieber
arXiv:1708.06633, 22 August 2017

Papers citing "Nonparametric regression using deep neural networks with ReLU activation function" (50 of 54 shown)
Efficient Adaptive Experimentation with Non-Compliance
Miruna Oprescu, Brian M Cho, Nathan Kallus (23 May 2025)

Deep learning with missing data
Tianyi Ma, Tengyao Wang, R. Samworth (21 Apr 2025)

Learning with Noisy Labels: the Exploration of Error Bounds in Classification
Haixia Liu, Boxiao Li, Can Yang, Yang Wang (28 Jan 2025)

Deep Partially Linear Transformation Model for Right-Censored Survival Data
Junkai Yin, Yue Zhang, Zhangsheng Yu (10 Dec 2024)

Understanding the Effect of GCN Convolutions in Regression Tasks
Juntong Chen, Johannes Schmidt-Hieber, Claire Donnat, Olga Klopp (26 Oct 2024) [GNN]

Robust Feature Learning for Multi-Index Models in High Dimensions
Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu (21 Oct 2024) [OOD, AAML]

Probing the Latent Hierarchical Structure of Data via Diffusion Models
Antonio Sclocchi, Alessandro Favero, Noam Itzhak Levi, Matthieu Wyart (17 Oct 2024) [DiffM]

Nested Deep Learning Model Towards A Foundation Model for Brain Signal Data
Fangyi Wei, Jiajie Mo, Kai Zhang, Haipeng Shen, Srikantan Nagarajan, Fei Jiang (04 Oct 2024)

On the expressiveness and spectral bias of KANs
Yixuan Wang, Jonathan W. Siegel, Ziming Liu, Thomas Y. Hou (02 Oct 2024)

Posterior and variational inference for deep neural networks with heavy-tailed weights
Ismael Castillo, Paul Egels (05 Jun 2024) [BDL]

Spectral complexity of deep neural networks
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, Stefano Vigogna (15 May 2024) [BDL]

Causality Pursuit from Heterogeneous Environments via Neural Adversarial Invariance Learning
Yihong Gu, Cong Fang, Peter Bühlmann, Jianqing Fan (07 May 2024) [OOD, CML]

KAN: Kolmogorov-Arnold Networks
Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljacic, Thomas Y. Hou, Max Tegmark (30 Apr 2024)

Deep Horseshoe Gaussian Processes
Ismael Castillo, Thibault Randrianarisoa (04 Mar 2024) [BDL, UQCV]

Structure-agnostic Optimality of Doubly Robust Learning for Treatment Effect Estimation
Jikai Jin, Vasilis Syrgkanis (22 Feb 2024) [CML]

Nonlinear functional regression by functional deep neural network with kernel embedding
Zhongjie Shi, Jun Fan, Linhao Song, Ding-Xuan Zhou, Johan A. K. Suykens (05 Jan 2024)

Deep Huber quantile regression networks
Hristos Tyralis, Georgia Papacharalampous, N. Dogulu, Kwok-Pan Chun (17 Jun 2023) [UQCV]

Utility Theory of Synthetic Data Generation
Shi Xu, W. Sun, Guang Cheng (17 May 2023)

Deep ReLU network approximation of functions on a manifold
Johannes Schmidt-Hieber (02 Aug 2019)

Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality
Ryumei Nakada, Masaaki Imaizumi (04 Jul 2019) [AI4CE]

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Dan Hendrycks, Thomas G. Dietterich (28 Mar 2019) [OOD, VLM]

How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
Subutai Ahmad, Luiz Scheinkman (27 Mar 2019)

The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker (25 Feb 2019)

How do infinite width bounded norm networks look in function space?
Pedro H. P. Savarese, Itay Evron, Daniel Soudry, Nathan Srebro (13 Feb 2019)

Neural Network Topologies for Sparse Training
Ryan A. Robinett, J. Kepner (14 Sep 2018)

Does data interpolation contradict statistical optimality?
M. Belkin, Alexander Rakhlin, Alexandre B. Tsybakov (25 Jun 2018)

Comparison of non-linear activation functions for deep neural networks on MNIST classification task
Dabal Pedamonti (08 Apr 2018)

A comparison of deep networks with ReLU activation function and linear spline-type methods
Konstantin Eckle, Johannes Schmidt-Hieber (06 Apr 2018)

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin (09 Mar 2018)

Characterizing Implicit Bias in Terms of Optimization Geometry
Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro (22 Feb 2018) [AI4CE]

Optimal approximation of continuous functions by very deep ReLU networks
Dmitry Yarotsky (10 Feb 2018)

Deep Expander Networks: Efficient Deep Networks from Graph Theory
Ameya Prabhu, G. Varma, A. Namboodiri (23 Nov 2017) [GNN]

The Implicit Bias of Gradient Descent on Separable Data
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro (27 Oct 2017)

Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science
Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, M. Gibescu, A. Liotta (15 Jul 2017)

Fast learning rate of deep learning via a kernel perspective
Taiji Suzuki (29 May 2017)

Optimal Approximation with Sparsely Connected Deep Neural Networks
Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, P. Petersen (04 May 2017)

Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
T. Poggio, H. Mhaskar, Lorenzo Rosasco, Brando Miranda, Q. Liao (02 Nov 2016)

Why Deep Neural Networks for Function Approximation?
Shiyu Liang, R. Srikant (13 Oct 2016)

Error bounds for approximations with deep ReLU networks
Dmitry Yarotsky (03 Oct 2016)

Deep vs. shallow networks: An approximation theory perspective
H. Mhaskar, T. Poggio (10 Aug 2016)

Risk Bounds for High-dimensional Ridge Function Combinations Including Neural Networks
Jason M. Klusowski, Andrew R. Barron (05 Jul 2016)

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi (23 Feb 2016)

Benefits of depth in neural networks
Matus Telgarsky (14 Feb 2016)
Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (10 Dec 2015) [MedIm]
Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally (08 Jun 2015) [CVBM]

Highway Networks
R. Srivastava, Klaus Greff, Jürgen Schmidhuber (03 May 2015)

Breaking the Curse of Dimensionality with Convex Neural Networks
Francis R. Bach (30 Dec 2014)

Convolutional Neural Networks at Constrained Time Cost
Kaiming He, Jian Sun (04 Dec 2014) [3DV]

On the Number of Linear Regions of Deep Neural Networks
Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio (08 Feb 2014)

On the number of response regions of deep feed forward networks with piece-wise linear activations
Razvan Pascanu, Guido Montúfar, Yoshua Bengio (20 Dec 2013) [FAtt]