ResearchTrend.AI
Representation Benefits of Deep Feedforward Networks


27 September 2015
Matus Telgarsky

Papers citing "Representation Benefits of Deep Feedforward Networks"

50 / 63 papers shown
Non-identifiability distinguishes Neural Networks among Parametric Models
Sourav Chatterjee
Timothy Sudijono
25 Apr 2025
On Space Folds of ReLU Neural Networks
Michal Lewandowski
Hamid Eghbalzadeh
Bernhard Heinzl
Raphael Pisoni
Bernhard A. Moser
17 Feb 2025
Extracting Formulae in Many-Valued Logic from Deep Neural Networks
Yani Zhang
Helmut Bölcskei
22 Jan 2024
Expressivity and Approximation Properties of Deep Neural Networks with ReLU$^k$ Activation
Juncai He
Tong Mao
Jinchao Xu
27 Dec 2023
Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision
Arturs Berzins
12 Jun 2023
The Tunnel Effect: Building Data Representations in Deep Neural Networks
Wojciech Masarczyk
M. Ostaszewski
Ehsan Imani
Razvan Pascanu
Piotr Miłoś
Tomasz Trzciński
31 May 2023
Embeddings between Barron spaces with higher order activation functions
T. J. Heeringa
L. Spek
Felix L. Schwenninger
C. Brune
25 May 2023
Multi-Path Transformer is Better: A Case Study on Neural Machine Translation
Ye Lin
Shuhan Zhou
Yanyang Li
Anxiang Ma
Tong Xiao
Jingbo Zhu
10 May 2023
When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette
Gonzalo Muñoz
Thiago Serra
Calvin Tsay
29 Apr 2023
The R-mAtrIx Net
Shailesh Lal
Suvajit Majumder
E. Sobko
14 Apr 2023
Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
Christian Haase
Christoph Hertrich
Georg Loho
24 Feb 2023
Optimal Approximation Complexity of High-Dimensional Functions with Neural Networks
Vincent P. H. Goverse
Jad Hamdan
Jared Tanner
30 Jan 2023
Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions
Junyang Cai
Khai-Nguyen Nguyen
Nishant Shrestha
Aidan Good
Ruisen Tu
Xin Yu
Shandian Zhe
Thiago Serra
19 Jan 2023
Expected Gradients of Maxout Networks and Consequences to Parameter Initialization
Hanna Tseran
Guido Montúfar
17 Jan 2023
Effects of Data Geometry in Early Deep Learning
Saket Tiwari
George Konidaris
29 Dec 2022
Towards Global Neural Network Abstractions with Locally-Exact Reconstruction
Edoardo Manino
I. Bessa
Lucas C. Cordeiro
21 Oct 2022
Curved Representation Space of Vision Transformers
Juyeop Kim
Junha Park
Songkuk Kim
Jongseok Lee
11 Oct 2022
Limitations of neural network training due to numerical instability of backpropagation
Clemens Karner
V. Kazeev
P. Petersen
03 Oct 2022
Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL
Fengzhuo Zhang
Boyi Liu
Kaixin Wang
Vincent Y. F. Tan
Zhuoran Yang
Zhaoran Wang
20 Sep 2022
Universal Solutions of Feedforward ReLU Networks for Interpolations
Changcun Huang
16 Aug 2022
Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution
Jianhao Ma
S. Fattahi
15 Jul 2022
Lower and Upper Bounds for Numbers of Linear Regions of Graph Convolutional Networks
Hao Chen
Yu Wang
Huan Xiong
01 Jun 2022
CNNs Avoid Curse of Dimensionality by Learning on Patches
Vamshi C. Madala
S. Chandrasekaran
Jason Bunk
22 May 2022
Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger
Christoph Hertrich
Paul Jungeblut
Tillmann Miltzow
Simon Weber
04 Apr 2022
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu
Thiago Serra
Srikumar Ramalingam
Shandian Zhe
09 Mar 2022
Selective Network Linearization for Efficient Private Inference
Minsu Cho
Ameya Joshi
S. Garg
Brandon Reagen
C. Hegde
04 Feb 2022
Training Thinner and Deeper Neural Networks: Jumpstart Regularization
Carles Roger Riera Molina
Camilo Rey
Thiago Serra
Eloi Puertas
O. Pujol
30 Jan 2022
Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem
Clayton Sanford
Vaggos Chatziafratis
19 Oct 2021
Neural Network Approximation of Refinable Functions
Ingrid Daubechies
Ronald A. DeVore
Nadav Dym
Shira Faigenbaum-Golovin
S. Kovalsky
Kung-Chin Lin
Josiah Park
G. Petrova
B. Sober
28 Jul 2021
Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)
Michael Penwarden
Shandian Zhe
A. Narayan
Robert M. Kirby
25 Jun 2021
Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums
Guido Montúfar
Yue Ren
Leon Zhang
16 Apr 2021
Deep ReLU Networks Preserve Expected Length
Boris Hanin
Ryan Jeong
David Rolnick
21 Feb 2021
A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks
Asaf Noy
Yi Tian Xu
Y. Aflalo
Lihi Zelnik-Manor
Rong Jin
12 Jan 2021
Hierarchically Compositional Tasks and Deep Convolutional Networks
Arturo Deza
Q. Liao
Andrzej Banburski
T. Poggio
24 Jun 2020
Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size
Christoph Hertrich
M. Skutella
28 May 2020
Neural Contextual Bandits with UCB-based Exploration
Dongruo Zhou
Lihong Li
Quanquan Gu
11 Nov 2019
Optimal Function Approximation with Relu Neural Networks
Bo Liu
Yi Liang
09 Sep 2019
Information-Theoretic Lower Bounds for Compressive Sensing with Generative Models
Zhaoqiang Liu
Jonathan Scarlett
28 Aug 2019
Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio
Andrzej Banburski
Q. Liao
25 Aug 2019
A Review on Deep Learning in Medical Image Reconstruction
Hai-Miao Zhang
Bin Dong
23 Jun 2019
Nonlinear Approximation and (Deep) ReLU Networks
Ingrid Daubechies
Ronald A. DeVore
S. Foucart
Boris Hanin
G. Petrova
05 May 2019
Is Deeper Better only when Shallow is Good?
Eran Malach
Shai Shalev-Shwartz
08 Mar 2019
Deep Neural Network Approximation Theory
Dennis Elbrächter
Dmytro Perekrestenko
Philipp Grohs
Helmut Bölcskei
08 Jan 2019
Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
Difan Zou
Yuan Cao
Dongruo Zhou
Quanquan Gu
21 Nov 2018
Statistical Characteristics of Deep Representations: An Empirical Investigation
Daeyoung Choi
Kyungeun Lee
Changho Shin
Stephen J. Roberts
08 Nov 2018
Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun
S. Sra
Ali Jadbabaie
17 Oct 2018
The universal approximation power of finite-width deep ReLU networks
Dmytro Perekrestenko
Philipp Grohs
Dennis Elbrächter
Helmut Bölcskei
05 Jun 2018
05 Jun 2018
Deep Learning Works in Practice. But Does it Work in Theory?
Deep Learning Works in Practice. But Does it Work in Theory?
L. Hoang
R. Guerraoui
PINN
44
3
0
31 Jan 2018
The exploding gradient problem demystified - definition, prevalence, impact, origin, tradeoffs, and solutions
George Philipp
D. Song
J. Carbonell
15 Dec 2017
Approximating Continuous Functions by ReLU Nets of Minimal Width
Boris Hanin
Mark Sellke
31 Oct 2017