arXiv:1705.05502
The power of deeper networks for expressing natural functions
David Rolnick, Max Tegmark
16 May 2017
Papers citing "The power of deeper networks for expressing natural functions"

30 papers shown
System Identification and Control Using Lyapunov-Based Deep Neural Networks without Persistent Excitation: A Concurrent Learning Approach
  Rebecca G. Hart, Omkar Sudhir Patil, Zachary I. Bell, Warren E. Dixon
  15 May 2025
Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
  T. Getu, Georges Kaddoum, M. Bennis
  13 Sep 2023
A Survey of Geometric Optimization for Deep Learning: From Euclidean Space to Riemannian Manifold
  Yanhong Fei, Xian Wei, Yingjie Liu, Zhengyu Li, Mingsong Chen
  16 Feb 2023
MOSAIC, a comparison framework for machine learning models
  Mattéo Papin, Yann Beaujeault-Taudiere, F. Magniette
  30 Jan 2023
Minimal Width for Universal Property of Deep RNN
  Changhoon Song, Geonho Hwang, Jun ho Lee, Myung-joo Kang
  25 Nov 2022
When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
  Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo
  21 Oct 2022
Neural Networks as Paths through the Space of Representations
  Richard D. Lange, Devin Kwok, Jordan K Matelsky, Xinyue Wang, David Rolnick, Konrad Paul Kording
  22 Jun 2022
Explicitly antisymmetrized neural network layers for variational Monte Carlo simulation
  Jeffmin Lin, Gil Goldshlager, Lin Lin
  07 Dec 2021
On the approximation of functions by tanh neural networks
  Tim De Ryck, S. Lanthaler, Siddhartha Mishra
  18 Apr 2021
Augmenting Deep Classifiers with Polynomial Neural Networks
  Grigorios G. Chrysos, Markos Georgopoulos, Jiankang Deng, Jean Kossaifi, Yannis Panagakis, Anima Anandkumar
  16 Apr 2021
Deep ReLU Networks Preserve Expected Length
  Boris Hanin, Ryan Jeong, David Rolnick
  21 Feb 2021
Depth separation beyond radial functions
  Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna
  02 Feb 2021
On Representing (Anti)Symmetric Functions
  Marcus Hutter
  30 Jul 2020
Expressivity of Deep Neural Networks
  Ingo Gühring, Mones Raslan, Gitta Kutyniok
  09 Jul 2020
Interpreting and Disentangling Feature Components of Various Complexity from DNNs
  Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
  29 Jun 2020
Deep Residual Mixture Models
  Perttu Hämäläinen, Martin Trapp, Tuure Saloheimo, Arno Solin
  22 Jun 2020
Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
  Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
  23 Apr 2020
On Interpretability of Artificial Neural Networks: A Survey
  Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
  08 Jan 2020
Optimal Function Approximation with Relu Neural Networks
  Bo Liu, Yi Liang
  09 Sep 2019
A Review on Deep Learning in Medical Image Reconstruction
  Hai-Miao Zhang, Bin Dong
  23 Jun 2019
Deep Network Approximation Characterized by Number of Neurons
  Zuowei Shen, Haizhao Yang, Shijun Zhang
  13 Jun 2019
A Selective Overview of Deep Learning
  Jianqing Fan, Cong Ma, Yiqiao Zhong
  10 Apr 2019
Nonlinear Approximation via Compositions
  Zuowei Shen, Haizhao Yang, Shijun Zhang
  26 Feb 2019
Understanding Geometry of Encoder-Decoder CNNs
  J. C. Ye, Woon Kyoung Sung
  22 Jan 2019
Multitask Learning Deep Neural Networks to Combine Revealed and Stated Preference Data
  Shenhao Wang, Qingyi Wang, Jinhuan Zhao
  02 Jan 2019
On a Sparse Shortcut Topology of Artificial Neural Networks
  Fenglei Fan, Dayang Wang, Hengtao Guo, Qikui Zhu, Pingkun Yan, Ge Wang, Hengyong Yu
  22 Nov 2018
Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
  Chulhee Yun, S. Sra, Ali Jadbabaie
  17 Oct 2018
ResNet with one-neuron hidden layers is a Universal Approximator
  Hongzhou Lin, Stefanie Jegelka
  28 Jun 2018
Approximating Continuous Functions by ReLU Nets of Minimal Width
  Boris Hanin, Mark Sellke
  31 Oct 2017
Benefits of depth in neural networks
  Matus Telgarsky
  14 Feb 2016