ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Contrasting random and learned features in deep Bayesian linear regression
arXiv:2203.00573

1 March 2022
Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan
BDL, MLT

Papers citing "Contrasting random and learned features in deep Bayesian linear regression"

14 / 14 papers shown
Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime
Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier
06 May 2025 · 204 / 0 / 0
Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
Blake Bordelon, Cengiz Pehlevan
AI4CE
04 Feb 2025 · 71 / 1 / 0
Bayesian RG Flow in Neural Network Field Theories
Jessica N. Howard, Marc S. Klinger, Anindita Maiti, A. G. Stapleton
27 May 2024 · 68 / 1 / 0
Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
Blake Bordelon, Cengiz Pehlevan
MLT
06 Apr 2023 · 43 / 29 / 0
Learning curves for deep structured Gaussian feature models
Jacob A. Zavatone-Veth, Cengiz Pehlevan
MLT
01 Mar 2023 · 30 / 11 / 0
Bayes-optimal Learning of Deep Random Networks of Extensive-width
Hugo Cui, Florent Krzakala, Lenka Zdeborová
BDL
01 Feb 2023 · 30 / 35 / 0
Neural networks learn to magnify areas near decision boundaries
Jacob A. Zavatone-Veth, Sheng Yang, Julian Rubinfien, Cengiz Pehlevan
MLT, AI4CE
26 Jan 2023 · 30 / 6 / 0
Spectral Evolution and Invariance in Linear-width Neural Networks
Zhichao Wang, A. Engel, Anand D. Sarwate, Ioana Dumitriu, Tony Chiang
11 Nov 2022 · 45 / 14 / 0
Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, Cengiz Pehlevan
MLT
19 May 2022 · 45 / 77 / 0
Unified field theoretical approach to deep and recurrent neuronal networks
Kai Segadlo, Bastian Epping, Alexander van Meegen, David Dahmen, Michael Krämer, M. Helias
AI4CE, BDL
10 Dec 2021 · 43 / 20 / 0
Performance of Bayesian linear regression in a model with mismatch
Jean Barbier, Wei-Kuo Chen, D. Panchenko, Manuel Sáenz
14 Jul 2021 · 40 / 22 / 0
Asymptotics of representation learning in finite Bayesian neural networks
Jacob A. Zavatone-Veth, Abdulkadir Canatar, Benjamin S. Ruben, Cengiz Pehlevan
01 Jun 2021 · 26 / 28 / 0
Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
02 Mar 2020 · 98 / 152 / 0
Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison
17 Oct 2019 · 175 / 51 / 0