On the Power and Limitations of Random Features for Understanding Neural Networks

1 April 2019 · arXiv:1904.00687
Gilad Yehudai, Ohad Shamir
MLT

Papers citing "On the Power and Limitations of Random Features for Understanding Neural Networks"

Showing 50 of 91 citing papers.

Tensor Sketch: Fast and Scalable Polynomial Kernel Approximation
Ninh Pham, Rasmus Pagh
138 · 0 · 0 · 13 May 2025

Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
Ziang Chen, Rong Ge
MLT
154 · 1 · 0 · 10 Jan 2025

Adaptive Random Fourier Features Training Stabilized By Resampling With Applications in Image Regression
Aku Kammonen, Anamika Pandey, E. von Schwerin, Raúl Tempone
76 · 0 · 0 · 08 Oct 2024

Approximation with Random Shallow ReLU Networks with Applications to Model Reference Adaptive Control
Andrew G. Lamperski, Tyler Lekang
53 · 3 · 0 · 25 Mar 2024

Polynomially Over-Parameterized Convolutional Neural Networks Contain Structured Strong Winning Lottery Tickets
A. D. Cunha, Francesco d’Amore, Emanuele Natale
MLT
66 · 1 · 0 · 16 Nov 2023

Orthogonal Random Features: Explicit Forms and Sharp Inequalities
N. Demni, Hachem Kadri
74 · 1 · 0 · 11 Oct 2023

Six Lectures on Linearized Neural Networks
Theodor Misiakiewicz, Andrea Montanari
143 · 13 · 0 · 25 Aug 2023

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee
MLT
201 · 15 · 0 · 11 May 2023

Depth Separation with Multilayer Mean-Field Networks
Y. Ren, Mo Zhou, Rong Ge
OOD
85 · 3 · 0 · 03 Apr 2023

Function Approximation with Randomly Initialized Neural Networks for Approximate Model Reference Adaptive Control
Tyler Lekang, Andrew G. Lamperski
54 · 0 · 0 · 28 Mar 2023

Online Learning for the Random Feature Model in the Student-Teacher Framework
Roman Worschech, B. Rosenow
86 · 0 · 0 · 24 Mar 2023

Over-Parameterization Exponentially Slows Down Gradient Descent for Learning a Single Neuron
Weihang Xu, S. Du
108 · 16 · 0 · 20 Feb 2023

System identification of neural systems: If we got it right, would we know?
Yena Han, T. Poggio, Brian Cheung
94 · 10 · 0 · 13 Feb 2023

On the symmetries in the dynamics of wide two-layer neural networks
Karl Hajjar, Lénaïc Chizat
51 · 11 · 0 · 16 Nov 2022

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, Wenyuan Xu, Haixu Tang, Yi Chen
AAML
118 · 5 · 0 · 12 Oct 2022

Annihilation of Spurious Minima in Two-Layer ReLU Networks
Yossi Arjevani, M. Field
52 · 8 · 0 · 12 Oct 2022

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu
MLT
379 · 50 · 0 · 29 Sep 2022

Understanding Deep Neural Function Approximation in Reinforcement Learning via $ε$-Greedy Exploration
Fanghui Liu, Luca Viano, Volkan Cevher
116 · 20 · 0 · 15 Sep 2022

Differentiable Architecture Search with Random Features
Xuanyang Zhang, Yonggang Li, Xinming Zhang, Yongtao Wang, Jian Sun
70 · 11 · 0 · 18 Aug 2022

Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
114 · 133 · 0 · 18 Jul 2022

Learning sparse features can lead to overfitting in neural networks
Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu Wyart
MLT
103 · 26 · 0 · 24 Jun 2022

Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias
Navid Ardeshir, Daniel J. Hsu, Clayton Sanford
CML, AI4CE
113 · 6 · 0 · 10 Jun 2022

Long-Tailed Learning Requires Feature Learning
T. Laurent, J. V. Brecht, Xavier Bresson
VLM
93 · 1 · 0 · 29 May 2022

Randomly Initialized One-Layer Neural Networks Make Data Linearly Separable
Promit Ghosal, Srinath Mahankali, Yihang Sun
MLT
64 · 5 · 0 · 24 May 2022

Learning a Single Neuron for Non-monotonic Activation Functions
Lei Wu
MLT
65 · 11 · 0 · 16 Feb 2022

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
MLT
103 · 30 · 0 · 15 Feb 2022

Optimization-Based Separations for Neural Networks
Itay Safran, Jason D. Lee
387 · 14 · 0 · 04 Dec 2021

Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, Volkan Cevher
81 · 31 · 0 · 02 Nov 2021

Provable Regret Bounds for Deep Online Learning and Control
Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan
115 · 6 · 0 · 15 Oct 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
UQCV, MLT
85 · 13 · 0 · 12 Oct 2021

ReLU Regression with Massart Noise
Ilias Diakonikolas, Jongho Park, Christos Tzamos
109 · 12 · 0 · 10 Sep 2021

A spectral-based analysis of the separation between two-layer neural networks and linear methods
Lei Wu, Jihao Long
126 · 8 · 0 · 10 Aug 2021

On the Power of Differentiable Learning versus PAC and SQ Learning
Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro
MLT
125 · 23 · 0 · 09 Aug 2021

Deep Networks Provably Classify Data on Curves
Tingran Wang, Sam Buchanan, D. Gilboa, John N. Wright
83 · 9 · 0 · 29 Jul 2021

Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II
Yossi Arjevani, M. Field
70 · 19 · 0 · 21 Jul 2021

Going Beyond Linear RL: Sample Efficient Neural Function Approximation
Baihe Huang, Kaixuan Huang, Sham Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang
103 · 8 · 0 · 14 Jul 2021

Memory-efficient Transformers via Top-$k$ Attention
Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant
MQ
98 · 60 · 0 · 13 Jun 2021

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Geoff Pleiss, John P. Cunningham
76 · 27 · 0 · 11 Jun 2021

Neural Optimization Kernel: Towards Robust Deep Learning
Yueming Lyu, Ivor Tsang
58 · 1 · 0 · 11 Jun 2021

Learning a Single Neuron with Bias Using Gradient Descent
Gal Vardi, Gilad Yehudai, Ohad Shamir
MLT
89 · 17 · 0 · 02 Jun 2021

Properties of the After Kernel
Philip M. Long
66 · 29 · 0 · 21 May 2021

Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth
Keyulu Xu, Mozhi Zhang, Stefanie Jegelka, Kenji Kawaguchi
GNN
53 · 78 · 0 · 10 May 2021

Relative stability toward diffeomorphisms indicates performance in deep nets
Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart
OOD
93 · 15 · 0 · 06 May 2021

Noether: The More Things Change, the More Stay the Same
Grzegorz Gluch, R. Urbanke
79 · 18 · 0 · 12 Apr 2021

Spectral Analysis of the Neural Tangent Kernel for Deep Residual Networks
Yuval Belfer, Amnon Geifman, Meirav Galun, Ronen Basri
74 · 17 · 0 · 07 Apr 2021

Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels
Eran Malach, Pritish Kamath, Emmanuel Abbe, Nathan Srebro
88 · 39 · 0 · 01 Mar 2021

Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed
Maria Refinetti, Sebastian Goldt, Florent Krzakala, Lenka Zdeborová
92 · 74 · 0 · 23 Feb 2021

On the Approximation Power of Two-Layer Networks of Random ReLUs
Daniel J. Hsu, Clayton Sanford, Rocco A. Servedio, Emmanouil-Vasileios Vlatakis-Gkaragkounis
63 · 25 · 0 · 03 Feb 2021

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
95 · 20 · 0 · 31 Jan 2021

Particle Dual Averaging: Optimization of Mean Field Neural Networks with Global Convergence Rate Analysis
Atsushi Nitanda, Denny Wu, Taiji Suzuki
97 · 29 · 0 · 31 Dec 2020