Infinite Width Models That Work: Why Feature Learning Doesn't Matter as Much as You Think
Luke Sernau · 27 June 2024 · arXiv:2406.18800
Papers citing "Infinite Width Models That Work: Why Feature Learning Doesn't Matter as Much as You Think" (6 papers)
| Title | Authors | Tags | Likes | Citations | Comments | Date |
|---|---|---|---|---|---|---|
| Feature Learning in Infinite-Width Neural Networks | Greg Yang, J. E. Hu | MLT | 73 | 153 | 0 | 30 Nov 2020 |
| On the Inductive Bias of Neural Tangent Kernels | A. Bietti, Julien Mairal | | 63 | 257 | 0 | 29 May 2019 |
| Neural Tangent Kernel: Convergence and Generalization in Neural Networks | Arthur Jacot, Franck Gabriel, Clément Hongler | | 246 | 3,191 | 0 | 20 Jun 2018 |
| The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | Jonathan Frankle, Michael Carbin | | 212 | 3,457 | 0 | 09 Mar 2018 |
| Attention Is All You Need | Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin | 3DV | 642 | 130,942 | 0 | 12 Jun 2017 |
| Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity | Amit Daniely, Roy Frostig, Y. Singer | | 150 | 343 | 0 | 18 Feb 2016 |