© 2025 ResearchTrend.AI, All rights reserved.
Understanding Deep Neural Networks with Rectified Linear Units
arXiv:1611.01491 (v6 latest) · 4 November 2016
R. Arora, A. Basu, Poorya Mianjy, Anirbit Mukherjee

Papers citing "Understanding Deep Neural Networks with Rectified Linear Units" (49 of 199 shown)
Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
Alejandro Molina, P. Schramowski, Kristian Kersting · 15 Jul 2019

Deep Compositional Spatial Models
A. Zammit‐Mangion, T. L. J. Ng, Quan Vu, Maurizio Filippone · 06 Jun 2019

Controlling Neural Level Sets
Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, Y. Lipman · 28 May 2019

Expression of Fractals Through Neural Network Functions
Nadav Dym, B. Sober, Ingrid Daubechies · 27 May 2019

Provable robustness against all adversarial $l_p$-perturbations for $p \geq 1$
Francesco Croce, Matthias Hein · 27 May 2019

Universal Approximation with Deep Narrow Networks
Patrick Kidger, Terry Lyons · 21 May 2019

DSTP-RNN: a dual-stage two-phase attention-based recurrent neural networks for long-term and multivariate time series prediction
Yeqi Liu, Chuanyang Gong, Ling Yang, Yingyi Chen · 16 Apr 2019

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Francesco Croce, Jonas Rauber, Matthias Hein · 27 Mar 2019

Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems
C. Reisinger, Yufei Zhang · 15 Mar 2019

Error bounds for approximations with deep ReLU neural networks in $W^{s,p}$ norms
Ingo Gühring, Gitta Kutyniok, P. Petersen · 21 Feb 2019

Complexity of Linear Regions in Deep Networks
Boris Hanin, David Rolnick · 25 Jan 2019

Understanding Geometry of Encoder-Decoder CNNs
J. C. Ye, Woon Kyoung Sung · 22 Jan 2019

Neumann Networks for Inverse Problems in Imaging
Davis Gilton, Greg Ongie, Rebecca Willett · 13 Jan 2019

A Constructive Approach for One-Shot Training of Neural Networks Using Hypercube-Based Topological Coverings
W. B. Daniel, Enoch Yeung · 09 Jan 2019

Universal Deep Beamformer for Variable Rate Ultrasound Imaging
Shujaat Khan, Jaeyoung Huh, J. C. Ye · 07 Jan 2019

Greedy Layerwise Learning Can Scale to ImageNet
Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon · 29 Dec 2018

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf · 13 Dec 2018

A randomized gradient-free attack on ReLU networks
Francesco Croce, Matthias Hein · 28 Nov 2018

Strong mixed-integer programming formulations for trained neural networks
Ross Anderson, Joey Huchette, Christian Tjandraatmadja, J. Vielma · 20 Nov 2018

Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff · 05 Nov 2018

Convergence of the Deep BSDE Method for Coupled FBSDEs
Jiequn Han, Jihao Long · 03 Nov 2018

Quasi-random sampling for multivariate distributions via generative neural networks
Marius Hofert, Avinash Prasad, Mu Zhu · 01 Nov 2018

Nearly-tight bounds on linear regions of piecewise linear neural networks
Qiang Hu, Huatian Zhang · 31 Oct 2018

Precipitation Nowcasting: Leveraging bidirectional LSTM and 1D CNN
Maitreya Patel, Anery Patel, R. Ghosh · 24 Oct 2018

The loss surface of deep linear networks viewed through the algebraic geometry lens
D. Mehta, Tianran Chen, Tingting Tang, J. Hauenstein · 17 Oct 2018

Provable Robustness of ReLU networks via Maximization of Linear Regions
Francesco Croce, Maksym Andriushchenko, Matthias Hein · 17 Oct 2018

Empirical Bounds on Linear Regions of Deep Rectifier Networks
Thiago Serra, Srikumar Ramalingam · 08 Oct 2018

Understanding Weight Normalized Deep Neural Networks with Rectified Linear Units
Yixi Xu, Tianlin Li · 03 Oct 2018

Complexity of Training ReLU Neural Network
Digvijay Boob, Santanu S. Dey, Guanghui Lan · 27 Sep 2018

PLU: The Piecewise Linear Unit Activation Function
Andrei Nicolae · 03 Sep 2018

On the Implicit Bias of Dropout
Poorya Mianjy, R. Arora, René Vidal · 26 Jun 2018

On the Spectral Bias of Neural Networks
Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville · 22 Jun 2018

How Could Polyhedral Theory Harness Deep Learning?
Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam · 17 Jun 2018

A Tropical Approach to Neural Networks with Piecewise Linear Activations
Vasileios Charisopoulos, Petros Maragos · 22 May 2018

Tropical Geometry of Deep Neural Networks
Liwen Zhang, Gregory Naitzat, Lek-Heng Lim · 18 May 2018

Mad Max: Affine Spline Insights into Deep Learning
Randall Balestriero, Richard Baraniuk · 17 May 2018

Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions
Quynh N. Nguyen, Mahesh Chandra Mukkamala, Matthias Hein · 28 Feb 2018

A representer theorem for deep neural networks
M. Unser · 26 Feb 2018

Limits on representing Boolean functions by linear combinations of simple functions: thresholds, ReLUs, and low-degree polynomials
Richard Ryan Williams · 26 Feb 2018

On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
Sanjeev Arora, Nadav Cohen, Elad Hazan · 19 Feb 2018

Anomaly Detection using One-Class Neural Networks
Raghavendra Chalapathy, A. Menon, Sanjay Chawla · 18 Feb 2018

Script Identification in Natural Scene Image and Video Frame using Attention based Convolutional-LSTM Network
A. Bhunia, Aishik Konwer, A. Bhunia, A. Bhowmick, P. Roy, Umapada Pal · 01 Jan 2018

Bounding and Counting Linear Regions of Deep Neural Networks
Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam · 06 Nov 2017

Approximating Continuous Functions by ReLU Nets of Minimal Width
Boris Hanin, Mark Sellke · 31 Oct 2017

Empirical analysis of non-linear activation functions for Deep Neural Networks in classification tasks
Giovanni Alcantara · 30 Oct 2017

Sparse Coding and Autoencoders
Akshay Rangamani, Anirbit Mukherjee, A. Basu, T. Ganapathi, Ashish Arora, S. Chin, T. Tran · 12 Aug 2017

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
Boris Hanin · 09 Aug 2017

Depth Creates No Bad Local Minima
Haihao Lu, Kenji Kawaguchi · 27 Feb 2017

Reliably Learning the ReLU in Polynomial Time
Surbhi Goel, Varun Kanade, Adam R. Klivans, J. Thaler · 30 Nov 2016