ResearchTrend.AI

Better Depth-Width Trade-offs for Neural Networks through the lens of Dynamical Systems

arXiv:2003.00777 · 2 March 2020

Vaggos Chatziafratis
Sai Ganesh Nagarajan
Ioannis Panageas

Papers citing "Better Depth-Width Trade-offs for Neural Networks through the lens of Dynamical Systems"

4 papers shown

Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem
Clayton Sanford, Vaggos Chatziafratis
19 Oct 2021

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
31 Jan 2021

On the Number of Linear Functions Composing Deep Neural Network: Towards a Refined Definition of Neural Networks Complexity
Yuuki Takai, Akiyoshi Sannai, Matthieu Cordonnier
23 Oct 2020

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016