A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning
Ben Adlam, J. Levinson, Jeffrey Pennington
arXiv:1912.00827, 2 December 2019

Papers citing "A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning" (9 papers):

- A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
  Behrad Moniri, Donghwan Lee, Hamed Hassani, Yan Sun (11 Oct 2023)
- Demystifying Disagreement-on-the-Line in High Dimensions
  Dong-Hwan Lee, Behrad Moniri, Xinmeng Huang, Yan Sun, Hamed Hassani (31 Jan 2023)
- A Solvable Model of Neural Scaling Laws
  A. Maloney, Daniel A. Roberts, J. Sully (30 Oct 2022)
- Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
  A. Bodin, N. Macris (22 Oct 2021)
- Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
  Zhichao Wang, Yizhe Zhu (20 Sep 2021)
- Analysis of One-Hidden-Layer Neural Networks via the Resolvent Method
  Vanessa Piccolo, Dominik Schröder (11 May 2021)
- Mixed Moments for the Product of Ginibre Matrices
  Nick Halmagyi, Shailesh Lal (20 Jul 2020)
- Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
  Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (02 Mar 2020)
- Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
  Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean (26 Sep 2016)