ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Deep ReLU Networks Preserve Expected Length

arXiv: 2102.10492 · 21 February 2021
Boris Hanin, Ryan Jeong, David Rolnick

Papers citing "Deep ReLU Networks Preserve Expected Length"

12 / 12 papers shown
On Space Folds of ReLU Neural Networks
Michal Lewandowski, Hamid Eghbalzadeh, Bernhard Heinzl, Raphael Pisoni, Bernhard A. Moser
MLT · 17 Feb 2025

SmoothHess: ReLU Network Feature Interactions via Stein's Lemma
Max Torop, A. Masoomi, Davin Hill, Kivanc Kose, Stratis Ioannidis, Jennifer Dy
01 Nov 2023

Expected Gradients of Maxout Networks and Consequences to Parameter Initialization
Hanna Tseran, Guido Montúfar
ODL · 17 Jan 2023

Maximal Initial Learning Rates in Deep ReLU Networks
Gaurav M. Iyer, Boris Hanin, David Rolnick
14 Dec 2022

Curved Representation Space of Vision Transformers
Juyeop Kim, Junha Park, Songkuk Kim, Jongseok Lee
ViT · 11 Oct 2022

On Scrambling Phenomena for Randomly Initialized Recurrent Networks
Vaggos Chatziafratis, Ioannis Panageas, Clayton Sanford, S. Stavroulakis
11 Oct 2022

On the Number of Regions of Piecewise Linear Neural Networks
Alexis Goujon, Arian Etemadi, M. Unser
17 Jun 2022

Lower and Upper Bounds for Numbers of Linear Regions of Graph Convolutional Networks
Hao Chen, Yu Wang, Huan Xiong
GNN · 01 Jun 2022

Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis
Wuyang Chen, Wei Huang, Xinyu Gong, Boris Hanin, Zhangyang Wang
11 May 2022

Gradient representations in ReLU networks as similarity functions
Dániel Rácz, Bálint Daróczy
FAtt · 26 Oct 2021

On the Expected Complexity of Maxout Networks
Hanna Tseran, Guido Montúfar
01 Jul 2021

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016