Implicit Regularization of Random Feature Models (arXiv:2002.08404)

19 February 2020
Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel

Papers citing "Implicit Regularization of Random Feature Models"

20 papers:
Characterizing Overfitting in Kernel Ridgeless Regression Through the Eigenspectrum
Tin Sum Cheng, Aurelien Lucchi, Anastasis Kratsios, David Belius (02 Feb 2024)

Matryoshka Policy Gradient for Entropy-Regularized RL: Convergence and Global Optimality
François Ged, M. H. Veiga (22 Mar 2023)

Quantitative deterministic equivalent of sample covariance matrices with a general dependence structure
Clément Chouard (23 Nov 2022)

SPADE4: Sparsity and Delay Embedding based Forecasting of Epidemics
Esha Saha, L. Ho, Giang Tran (11 Nov 2022)

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan (30 Sep 2022)

Model-Agnostic Confidence Intervals for Feature Importance: A Fast and Powerful Approach Using Minipatch Ensembles
Luqin Gan, Lili Zheng, Genevera I. Allen (05 Jun 2022)

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel (03 Jun 2022)

Generalization Through The Lens Of Leave-One-Out Error
Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi (07 Mar 2022)

SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning
Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward (07 Dec 2021)

Multi-scale Feature Learning Dynamics: Insights for Double Descent
Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie (06 Dec 2021)

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
A. Bodin, N. Macris (22 Oct 2021)

Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero, Francesco Cagnetta, M. Wyart (16 Jun 2021)

Towards an Understanding of Benign Overfitting in Neural Networks
Zhu Li, Zhi-Hua Zhou, A. Gretton (06 Jun 2021)

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington (04 Nov 2020)

Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models
J. Rocks, Pankaj Mehta (26 Oct 2020)

Geometric compression of invariant manifolds in neural nets
J. Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, M. Wyart (22 Jul 2020)

Kernel Alignment Risk Estimator: Risk Prediction from Training Data
Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel (17 Jun 2020)

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (23 Apr 2020)

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (02 Mar 2020)

Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu (03 Dec 2019)