Learning curves of generic features maps for realistic datasets with a teacher-student model
arXiv:2102.08127

16 February 2021
Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, M. Mézard, Lenka Zdeborová

Papers citing "Learning curves of generic features maps for realistic datasets with a teacher-student model"

30 papers shown
Neural Learning Rules from Associative Networks Theory
Daniele Lotito
11 Mar 2025
The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata
27 Jan 2025
High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws
M. E. Ildiz, Halil Alperen Gozeten, Ege Onur Taga, Marco Mondelli, Samet Oymak
24 Oct 2024
Classifying Overlapping Gaussian Mixtures in High Dimensions: From Optimal Classifiers to Neural Nets [BDL]
Khen Cohen, Noam Levi, Yaron Oz
28 May 2024
Asymptotic theory of in-context learning by linear attention
Yue M. Lu, Mary I. Letey, Jacob A. Zavatone-Veth, Anindita Maiti, C. Pehlevan
20 May 2024
Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
Fanghui Liu, L. Dadi, V. Cevher
29 Apr 2024
Characterizing Overfitting in Kernel Ridgeless Regression Through the Eigenspectrum
Tin Sum Cheng, Aurélien Lucchi, Anastasis Kratsios, David Belius
02 Feb 2024
Random Matrix Analysis to Balance between Supervised and Unsupervised Learning under the Low Density Separation Assumption
Vasilii Feofanov, Malik Tiomoko, Aladin Virmaux
20 Oct 2023
A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks [MLT]
Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban
11 Oct 2023
Exact threshold for approximate ellipsoid fitting of random points
Antoine Maillard, Afonso S. Bandeira
09 Oct 2023
Modify Training Directions in Function Space to Reduce Generalization Error
Yi Yu, Wenlian Lu, Boyu Chen
25 Jul 2023
How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features [AAML]
Simone Bombari, Marco Mondelli
20 May 2023
Phase transitions in the mini-batch size for sparse and dense two-layer neural networks
Raffaele Marino, F. Ricci-Tersenghi
10 May 2023
On the Stepwise Nature of Self-Supervised Learning [SSL]
James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
27 Mar 2023
Neural-prior stochastic block model
O. Duranthon, L. Zdeborová
17 Mar 2023
Precise Asymptotic Analysis of Deep Random Feature Models
David Bosch, Ashkan Panahi, B. Hassibi
13 Feb 2023
From high-dimensional & mean-field dynamics to dimensionless ODEs: A
  unifying approach to SGD in two-layers networks
From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks
Luca Arnaboldi
Ludovic Stephan
Florent Krzakala
Bruno Loureiro
MLT
30
31
0
12 Feb 2023
Demystifying Disagreement-on-the-Line in High Dimensions
Dong-Hwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed Hassani
31 Jan 2023
A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models
Lijia Zhou, Frederic Koehler, Pragya Sur, Danica J. Sutherland, Nathan Srebro
21 Oct 2022
Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization
Daniel LeJeune, Jiayu Liu, Reinhard Heckel
20 Oct 2022
Penalization-induced shrinking without rotation in high dimensional GLM regression: a cavity analysis
Emanuele Massa, Marianne A Jonker, Anthony C. C. Coolen
09 Sep 2022
Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran
14 Jul 2022
Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks [MLT]
Blake Bordelon, C. Pehlevan
19 May 2022
An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings
Yue M. Lu, H. Yau
12 May 2022
High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation [MLT]
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang
03 May 2022
Learning curves for the multi-class teacher-student perceptron
Elisabetta Cornacchia, Francesca Mignacco, R. Veiga, Cédric Gerbelot, Bruno Loureiro, Lenka Zdeborová
22 Mar 2022
Contrasting random and learned features in deep Bayesian linear regression [BDL, MLT]
Jacob A. Zavatone-Veth, William L. Tong, C. Pehlevan
01 Mar 2022
The Lasso with general Gaussian designs with applications to hypothesis testing
Michael Celentano, Andrea Montanari, Yuting Wei
27 Jul 2020
Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, C. Pehlevan
07 Feb 2020
De-biasing convex regularized estimators and interval estimation in linear models
Pierre C. Bellec, Cun-Hui Zhang
26 Dec 2019