Infinite Width Models That Work: Why Feature Learning Doesn't Matter as Much as You Think

27 June 2024
Luke Sernau

Papers citing "Infinite Width Models That Work: Why Feature Learning Doesn't Matter as Much as You Think"

6 papers shown

Title | Authors | Citations | Date
Feature Learning in Infinite-Width Neural Networks | Greg Yang, Edward J. Hu | 153 | 30 Nov 2020
On the Inductive Bias of Neural Tangent Kernels | Alberto Bietti, Julien Mairal | 257 | 29 May 2019
Neural Tangent Kernel: Convergence and Generalization in Neural Networks | Arthur Jacot, Franck Gabriel, Clément Hongler | 3,191 | 20 Jun 2018
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | Jonathan Frankle, Michael Carbin | 3,457 | 09 Mar 2018
Attention Is All You Need | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin | 130,942 | 12 Jun 2017
Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity | Amit Daniely, Roy Frostig, Yoram Singer | 343 | 18 Feb 2016