Random Feature Amplification: Feature Learning and Generalization in Neural Networks

15 February 2022 · arXiv:2202.07626
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
MLT

Papers citing "Random Feature Amplification: Feature Learning and Generalization in Neural Networks"

42 of 42 citing papers shown

Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning
François Caron, Fadhel Ayed, Paul Jung, Hoileong Lee, Juho Lee, Hongseok Yang
93 · 2 · 0 · 02 Feb 2023

Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data
Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro, Wei Hu
MLT
42 · 42 · 0 · 13 Oct 2022

Neural Networks can Learn Representations with Gradient Descent
Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi
SSL, MLT
81 · 121 · 0 · 30 Jun 2022

Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs
Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion
ODL
44 · 61 · 0 · 02 Jun 2022

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang
MLT
78 · 127 · 0 · 03 May 2022

Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
MLT
62 · 74 · 0 · 11 Feb 2022

Optimization-Based Separations for Neural Networks
Itay Safran, Jason D. Lee
286 · 14 · 0 · 04 Dec 2021

Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias
Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora
MLT
60 · 76 · 0 · 26 Oct 2021

On the Power of Differentiable Learning versus PAC and SQ Learning
Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro
MLT
89 · 23 · 0 · 09 Aug 2021

Early-stopped neural networks are consistent
Ziwei Ji, Justin D. Li, Matus Telgarsky
54 · 37 · 0 · 10 Jun 2021

Properties of the After Kernel
Philip M. Long
41 · 29 · 0 · 21 May 2021

Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels
Eran Malach, Pritish Kamath, Emmanuel Abbe, Nathan Srebro
54 · 39 · 0 · 01 Mar 2021

Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise
Spencer Frei, Yuan Cao, Quanquan Gu
FedML, MLT
83 · 21 · 0 · 04 Jan 2021

Feature Learning in Infinite-Width Neural Networks
Greg Yang, J. E. Hu
MLT
73 · 153 · 0 · 30 Nov 2020

Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli
97 · 190 · 0 · 28 Oct 2020

Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks
Cong Fang, Jason D. Lee, Pengkun Yang, Tong Zhang
OOD, FedML
130 · 57 · 0 · 03 Jul 2020

Agnostic Learning of a Single Neuron with Gradient Descent
Spencer Frei, Yuan Cao, Quanquan Gu
MLT
44 · 59 · 0 · 29 May 2020

Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity
Pritish Kamath, Omar Montasser, Nathan Srebro
31 · 28 · 0 · 09 Mar 2020

Learning Parities with Neural Networks
Amit Daniely, Eran Malach
54 · 78 · 0 · 18 Feb 2020

Learning Halfspaces with Massart Noise Under Structured Distributions
Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
44 · 61 · 0 · 13 Feb 2020

Learning a Single Neuron with Gradient Methods
Gilad Yehudai, Ohad Shamir
MLT
54 · 63 · 0 · 15 Jan 2020

Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Yu Bai, Jason D. Lee
49 · 116 · 0 · 03 Oct 2019

Benign Overfitting in Linear Regression
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler
MLT
70 · 777 · 0 · 26 Jun 2019

Limitations of Lazy Training of Two-layers Neural Networks
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
MLT
55 · 143 · 0 · 21 Jun 2019

What Can ResNet Learn Efficiently, Going Beyond Kernels?
Zeyuan Allen-Zhu, Yuanzhi Li
385 · 183 · 0 · 24 May 2019

On Exact Computation with an Infinitely Wide Neural Net
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang
213 · 922 · 0 · 26 Apr 2019

Depth Separations in Neural Networks: What is Actually Being Separated?
Itay Safran, Ronen Eldan, Ohad Shamir
MDE
51 · 36 · 0 · 15 Apr 2019

On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai, Ohad Shamir
MLT
66 · 182 · 0 · 01 Apr 2019

Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
NoLa
93 · 352 · 0 · 27 Mar 2019

Is Deeper Better only when Shallow is Good?
Eran Malach, Shai Shalev-Shwartz
48 · 45 · 0 · 08 Mar 2019

A Convergence Theory for Deep Learning via Over-Parameterization
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song
AI4CE, ODL
242 · 1,462 · 0 · 09 Nov 2018

Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei, Jason D. Lee, Qiang Liu, Tengyu Ma
193 · 244 · 0 · 12 Oct 2018

Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
MLT, ODL
214 · 1,270 · 0 · 04 Oct 2018

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
261 · 3,194 · 0 · 20 Jun 2018

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Lénaïc Chizat, Francis R. Bach
OT
204 · 735 · 0 · 24 May 2018

A Mean Field View of the Landscape of Two-Layers Neural Networks
Song Mei, Andrea Montanari, Phan-Minh Nguyen
MLT
81 · 858 · 0 · 18 Apr 2018

Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee
165 · 419 · 0 · 16 Jul 2017

Depth Separation for Neural Networks
Amit Daniely
MDE
37 · 74 · 0 · 27 Feb 2017

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
HAI
334 · 4,625 · 0 · 10 Nov 2016

Benefits of depth in neural networks
Matus Telgarsky
348 · 608 · 0 · 14 Feb 2016

The Power of Depth for Feedforward Neural Networks
Ronen Eldan, Ohad Shamir
213 · 732 · 0 · 12 Dec 2015

On the Expressive Power of Deep Learning: A Tensor Analysis
Nadav Cohen, Or Sharir, Amnon Shashua
81 · 470 · 0 · 16 Sep 2015