ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Expressive Power of Neural Networks: A View from the Width
arXiv:1709.02540 · 8 September 2017
Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Liwei Wang

Papers citing "The Expressive Power of Neural Networks: A View from the Width"
(22 of 122 papers shown)
  1. Neural Contextual Bandits with UCB-based Exploration — Dongruo Zhou, Lihong Li, Quanquan Gu — 15 citations — 11 Nov 2019
  2. Stochastic Feedforward Neural Networks: Universal Approximation — Thomas Merkh, Guido Montúfar — 8 citations — 22 Oct 2019
  3. DirectPET: Full Size Neural Network PET Reconstruction from Sinogram Data — W. Whiteley, W. K. Luk, J. Gregor — [3DV, AI4TS] — 54 citations — 19 Aug 2019
  4. Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks — Alejandro Molina, P. Schramowski, Kristian Kersting — [ODL] — 77 citations — 15 Jul 2019
  5. A Review on Deep Learning in Medical Image Reconstruction — Hai-Miao Zhang, Bin Dong — [MedIm] — 122 citations — 23 Jun 2019
  6. Deep Network Approximation Characterized by Number of Neurons — Zuowei Shen, Haizhao Yang, Shijun Zhang — 182 citations — 13 Jun 2019
  7. Universal Approximation with Deep Narrow Networks — Patrick Kidger, Terry Lyons — 324 citations — 21 May 2019
  8. Nonlinear Approximation via Compositions — Zuowei Shen, Haizhao Yang, Shijun Zhang — 92 citations — 26 Feb 2019
  9. A Survey of the Recent Architectures of Deep Convolutional Neural Networks — Asifullah Khan, A. Sohail, Umme Zahoora, Aqsa Saeed Qureshi — [OOD] — 2,268 citations — 17 Jan 2019
  10. Enhanced Expressive Power and Fast Training of Neural Networks by Random Projections — Jian-Feng Cai, Dong Li, Jiaze Sun, Ke Wang — 5 citations — 22 Nov 2018
  11. On a Sparse Shortcut Topology of Artificial Neural Networks — Fenglei Fan, Dayang Wang, Hengtao Guo, Qikui Zhu, Pingkun Yan, Ge Wang, Hengyong Yu — 22 citations — 22 Nov 2018
  12. Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks — Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu — [ODL] — 446 citations — 21 Nov 2018
  13. Gradient Descent Finds Global Minima of Deep Neural Networks — S. Du, J. Lee, Haochuan Li, Liwei Wang, M. Tomizuka — [ODL] — 1,122 citations — 9 Nov 2018
  14. Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity — Chulhee Yun, S. Sra, Ali Jadbabaie — 117 citations — 17 Oct 2018
  15. Universal Approximation with Quadratic Deep Networks — Fenglei Fan, Jinjun Xiong, Ge Wang — [PINN] — 78 citations — 31 Jul 2018
  16. ResNet with one-neuron hidden layers is a Universal Approximator — Hongzhou Lin, Stefanie Jegelka — 227 citations — 28 Jun 2018
  17. On the Spectral Bias of Neural Networks — Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min-Bin Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville — 1,390 citations — 22 Jun 2018
  18. Learning One-hidden-layer ReLU Networks via Gradient Descent — Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu — [MLT] — 134 citations — 20 Jun 2018
  19. The Effect of Network Width on the Performance of Large-batch Training — Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, Paraschos Koutris — 22 citations — 11 Jun 2018
  20. Optimal approximation of continuous functions by very deep ReLU networks — Dmitry Yarotsky — 293 citations — 10 Feb 2018
  21. The power of deeper networks for expressing natural functions — David Rolnick, Max Tegmark — 174 citations — 16 May 2017
  22. Benefits of depth in neural networks — Matus Telgarsky — 602 citations — 14 Feb 2016