Cited By: arXiv:2106.08619

Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero, Francesco Cagnetta, Matthieu Wyart
16 June 2021

Papers citing "Locality defeats the curse of dimensionality in convolutional teacher-student scenarios"

28 / 28 papers shown
Learning curves theory for hierarchically compositional data with power-law distributed features
Francesco Cagnetta, Hyunmo Kang, Matthieu Wyart
11 May 2025

Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari
Topics: OOD
25 Feb 2021

Computational Separation Between Convolutional and Fully-Connected Networks
Eran Malach, Shai Shalev-Shwartz
03 Oct 2020

Towards Learning Convolutions from Scratch
Behnam Neyshabur
Topics: SSL
27 Jul 2020

On the Similarity between the Laplace and Neural Tangent Kernels
Amnon Geifman, A. Yadav, Yoni Kasten, Meirav Galun, David Jacobs, Ronen Basri
03 Jul 2020

Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks
Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan
23 Jun 2020

Kernel Alignment Risk Estimator: Risk Prediction from Training Data
Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel
17 Jun 2020

Implicit Regularization of Random Feature Models
Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel
19 Feb 2020

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan
07 Feb 2020

Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio, Andrzej Banburski, Q. Liao
Topics: ODL
25 Aug 2019

On the Inductive Bias of Neural Tangent Kernels
A. Bietti, Julien Mairal
29 May 2019

Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm
S. Spigler, Mario Geiger, Matthieu Wyart
26 May 2019

On Exact Computation with an Infinitely Wide Neural Net
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang
26 Apr 2019

Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington
18 Feb 2019

On Lazy Training in Differentiable Programming
Lénaïc Chizat, Edouard Oyallon, Francis R. Bach
19 Dec 2018

A jamming transition from under- to over-parametrization affects loss landscape and generalization
S. Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, Matthieu Wyart
22 Oct 2018

Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
Topics: UQCV, BDL
11 Oct 2018

Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
Topics: MLT, ODL
04 Oct 2018

Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences
Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, Bharath K. Sriperumbudur
Topics: GP, BDL
06 Jul 2018

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
20 Jun 2018

Building Bayesian Neural Networks with Blocks: On Structure, Interpretability and Uncertainty
Hao Zhou, Yunyang Xiong, Vikas Singh
Topics: UQCV, BDL
10 Jun 2018

Gaussian Process Behaviour in Wide Deep Neural Networks
A. G. Matthews, Mark Rowland, Jiri Hron, Richard Turner, Zoubin Ghahramani
Topics: BDL
30 Apr 2018

On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups
Risi Kondor, Shubhendu Trivedi
Topics: MLT
11 Feb 2018

Deep Learning Scaling is Predictable, Empirically
Joel Hestness, Sharan Narang, Newsha Ardalani, G. Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, Yanqi Zhou
01 Dec 2017

Deep Neural Networks as Gaussian Processes
Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
Topics: UQCV, BDL
01 Nov 2017

Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
T. Poggio, H. Mhaskar, Lorenzo Rosasco, Brando Miranda, Q. Liao
02 Nov 2016

End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
Julien Mairal
Topics: SSL
20 May 2016

Breaking the Curse of Dimensionality with Convex Neural Networks
Francis R. Bach
30 Dec 2014