Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks

1 July 2021
Etai Littwin, Omid Saremi, Shuangfei Zhai, Vimal Thilak, Hanlin Goh, J. Susskind, Greg Yang

Papers citing "Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks"

2 / 2 papers shown
Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
Hoileong Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
17 May 2022
Why bigger is not always better: on finite and infinite neural networks
Laurence Aitchison
17 Oct 2019