ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1606.05340

Exponential expressivity in deep neural networks through transient chaos
Ben Poole, Subhaneil Lahiri, M. Raghu, Jascha Narain Sohl-Dickstein, Surya Ganguli
16 June 2016

Papers citing "Exponential expressivity in deep neural networks through transient chaos"

50 of 137 citing papers shown.
Slow Transition to Low-Dimensional Chaos in Heavy-Tailed Recurrent Neural Networks
Yi Xie, Stefan Mihalas, Łukasz Kuśmierz (14 May 2025)

Don't be lazy: CompleteP enables compute-efficient deep transformers
Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Li, Blake Bordelon, Shane Bergsma, Cengiz Pehlevan, Boris Hanin, Joel Hestness (02 May 2025)

Deep Neural Nets as Hamiltonians
Mike Winer, Boris Hanin (31 Mar 2025)

Feature Learning Beyond the Edge of Stability
Dávid Terjék (18 Feb 2025)

Universal Sharpness Dynamics in Neural Network Training: Fixed Point Analysis, Edge of Stability, and Route to Chaos
Dayal Singh Kalra, Tianyu He, M. Barkeshli (17 Feb 2025)

A Relative Homology Theory of Representation in Neural Networks
Kosio Beshkov (17 Feb 2025)

Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis
Hyunwoo Lee, Hayoung Choi, Hyunju Kim (03 Oct 2024)

Frequency and Generalisation of Periodic Activation Functions in Reinforcement Learning
Augustine N. Mavor-Parker, Matthew J. Sargent, Caswell Barry, Lewis D. Griffin, Clare Lyle (09 Jul 2024)

Normalization and effective learning rates in reinforcement learning
Clare Lyle, Zeyu Zheng, Khimya Khetarpal, James Martens, H. V. Hasselt, Razvan Pascanu, Will Dabney (01 Jul 2024)

Extremization to Fine Tune Physics Informed Neural Networks for Solving Boundary Value Problems
A. Thiruthummal, Sergiy Shelyag, Eun-Jin Kim (07 Jun 2024)

Understanding and Minimising Outlier Features in Neural Network Training
Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, Thomas Hofmann (29 May 2024)

Bayesian RG Flow in Neural Network Field Theories
Jessica N. Howard, Marc S. Klinger, Anindita Maiti, A. G. Stapleton (27 May 2024)

Spectral complexity of deep neural networks
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, Stefano Vigogna (15 May 2024)

Neural Redshift: Random Networks are not Random Functions
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad (04 Mar 2024)

On the Neural Tangent Kernel of Equilibrium Models
Zhili Feng, J. Zico Kolter (21 Oct 2023)

Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
T. Getu, Georges Kaddoum, M. Bennis (13 Sep 2023)

A theory of data variability in Neural Network Bayesian inference
Javed Lindner, David Dahmen, Michael Krämer, M. Helias (31 Jul 2023)

Deep Directly-Trained Spiking Neural Networks for Object Detection
Qiaoyi Su, Yuhong Chou, Yifan Hu, Jianing Li, Shijie Mei, Ziyang Zhang, Guoqiu Li (21 Jul 2023)

Quantitative CLTs in Deep Neural Networks
Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati (12 Jul 2023)

Dynamical Isometry based Rigorous Fair Neural Architecture Search
Jianxiang Luo, Junyi Hu, Tianji Pang, Weihao Huang, Chuan-Hsi Liu (05 Jul 2023)

Sparsity-depth Tradeoff in Infinitely Wide Deep Neural Networks
Chanwoo Chun, Daniel D. Lee (17 May 2023)

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee (11 May 2023)

Do deep neural networks have an inbuilt Occam's razor?
Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis (13 Apr 2023)

Criticality versus uniformity in deep neural networks
A. Bukva, Jurriaan de Gier, Kevin T. Grosvenor, R. Jefferson, K. Schalm, Eliot Schwander (10 Apr 2023)

Effective Theory of Transformers at Initialization
Emily Dinan, Sho Yaida, Susan Zhang (04 Apr 2023)

Understanding plasticity in neural networks
Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila-Pires, Razvan Pascanu, Will Dabney (02 Mar 2023)

Width and Depth Limits Commute in Residual Networks
Soufiane Hayou, Greg Yang (01 Feb 2023)

On the Initialisation of Wide Low-Rank Feedforward Neural Networks
Thiziri Nait Saada, Jared Tanner (31 Jan 2023)

Neural networks learn to magnify areas near decision boundaries
Jacob A. Zavatone-Veth, Sheng Yang, Julian Rubinfien, Cengiz Pehlevan (26 Jan 2023)

Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks
Fenglei Fan, Yingxin Li, Hanchuan Peng, T. Zeng, Fei Wang (23 Jan 2023)

Learning Reservoir Dynamics with Temporal Self-Modulation
Yusuke Sakemi, S. Nobukawa, Toshitaka Matsuki, Takashi Morie, Kazuyuki Aihara (23 Jan 2023)

Expected Gradients of Maxout Networks and Consequences to Parameter Initialization
Hanna Tseran, Guido Montúfar (17 Jan 2023)

Effects of Data Geometry in Early Deep Learning
Saket Tiwari, George Konidaris (29 Dec 2022)

Statistical Physics of Deep Neural Networks: Initialization toward Optimal Channels
Kangyu Weng, Aohua Cheng, Ziyang Zhang, Pei Sun, Yang Tian (04 Dec 2022)

Characterizing the Spectrum of the NTK via a Power Series Expansion
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar (15 Nov 2022)

Deep equilibrium models as estimators for continuous latent variables
Russell Tsuchida, Cheng Soon Ong (11 Nov 2022)

Meta-Principled Family of Hyperparameter Scaling Strategies
Sho Yaida (10 Oct 2022)

Dynamical Isometry for Residual Networks
Advait Gadhikar, R. Burkholz (05 Oct 2022)

PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems
Qing Jin, Zhiyu Chen, J. Ren, Yanyu Li, Yanzhi Wang, Kai-Min Yang (18 Sep 2022)

AutoInit: Automatic Initialization via Jacobian Tuning
Tianyu He, Darshil Doshi, Andrey Gromov (27 Jun 2022)

Gaussian Pre-Activations in Neural Networks: Myth or Reality?
Pierre Wolinski, Julyan Arbel (24 May 2022)

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, Cengiz Pehlevan (19 May 2022)

Extended critical regimes of deep neural networks
Chengqing Qu, Asem Wardak, P. Gong (24 Mar 2022)

Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers
Guodong Zhang, Aleksandar Botev, James Martens (15 Mar 2022)

Interplay between depth of neural networks and locality of target functions
Takashi Mori, Masakuni Ueda (28 Jan 2022)

MAE-DET: Revisiting Maximum Entropy Principle in Zero-Shot NAS for Efficient Object Detection
Zhenhong Sun, Ming Lin, Xiuyu Sun, Zhiyu Tan, Hao Li, Rong Jin (26 Nov 2021)

A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs
Ido Nachum, Jan Hązła, Michael C. Gastpar, Anatoly Khina (03 Nov 2021)

Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem
Clayton Sanford, Vaggos Chatziafratis (19 Oct 2021)

Bayesian neural network unit priors and generalized Weibull-tail property
M. Vladimirova, Julyan Arbel, Stéphane Girard (06 Oct 2021)

On the Impact of Stable Ranks in Deep Nets
B. Georgiev, L. Franken, Mayukh Mukherjee, Georgios Arvanitidis (05 Oct 2021)