Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
arXiv:1910.01663, 3 October 2019
Sanjeev Arora, S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu (AAML)
Papers citing "Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks" (44 of 44 papers shown):
Unsupervised Replay Strategies for Continual Learning with Limited Data. Anthony Bazhenov, Pahan Dewasurendra, G. Krishnan, Jean Erik Delanois (CLL). 21 Oct 2024.
Neural Lineage. Runpeng Yu, Xinchao Wang. 17 Jun 2024.
Deep Continuous Networks. Nergis Tomen, S. Pintea, Jan van Gemert. 02 Feb 2024.
Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models. Guillermo Ortiz-Jiménez, Alessandro Favero, P. Frossard (MoMe). 22 May 2023.
Cut your Losses with Squentropy. Like Hui, M. Belkin, S. Wright (UQCV). 08 Feb 2023.
Bayes-optimal Learning of Deep Random Networks of Extensive-width. Hugo Cui, Florent Krzakala, Lenka Zdeborová (BDL). 01 Feb 2023.
A Simple Algorithm For Scaling Up Kernel Methods. Tengyu Xu, Bryan Kelly, Semyon Malamud. 26 Jan 2023.
Image Classification with Small Datasets: Overview and Benchmark. Lorenzo Brigato, Björn Barz, Luca Iocchi, Joachim Denzler (VLM). 23 Dec 2022.
Global Convergence of SGD On Two Layer Neural Nets. Pulkit Gopalani, Anirbit Mukherjee. 20 Oct 2022.
Fast Finite Width Neural Tangent Kernel. Roman Novak, Jascha Narain Sohl-Dickstein, S. Schoenholz (AAML). 17 Jun 2022.
Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks. Kaiqi Zhang, Ming Yin, Yu-Xiang Wang (MQ). 13 Jun 2022.
Infinite Recommendation Networks: A Data-Centric Approach. Noveen Sachdeva, Mehak Preet Dhaliwal, Carole-Jean Wu, Julian McAuley (DD). 03 Jun 2022.
Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel. Ryuichi Kanoh, M. Sugiyama. 25 May 2022.
Wide and Deep Neural Networks Achieve Optimality for Classification. Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler. 29 Apr 2022.
Multi-model Ensemble Analysis with Neural Network Gaussian Processes. Trevor Harris, Yangqiu Song, Ryan Sriver. 08 Feb 2022.
Deep Layer-wise Networks Have Closed-Form Weights. Chieh-Tsai Wu, A. Masoomi, Arthur Gretton, Jennifer Dy. 01 Feb 2022.
Forward Operator Estimation in Generative Models with Kernel Transfer Operators. Z. Huang, Rudrasis Chakraborty, Vikas Singh (GAN). 01 Dec 2021.
On the Effectiveness of Neural Ensembles for Image Classification with Small Datasets. Lorenzo Brigato, Luca Iocchi (UQCV). 29 Nov 2021.
On the Equivalence between Neural Network and Support Vector Machine. Yilan Chen, Wei Huang, Lam M. Nguyen, Tsui-Wei Weng (AAML). 11 Nov 2021.
Subquadratic Overparameterization for Shallow Neural Networks. Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, V. Cevher. 02 Nov 2021.
VC dimension of partially quantized neural networks in the overparametrized regime. Yutong Wang, Clayton D. Scott. 06 Oct 2021.
How Powerful is Graph Convolution for Recommendation? Yifei Shen, Yongji Wu, Yao Zhang, Caihua Shan, Jun Zhang, Khaled B. Letaief, Dongsheng Li (GNN). 17 Aug 2021.
Dataset Distillation with Infinitely Wide Convolutional Networks. Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee (DD). 27 Jul 2021.
How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective. Akhilan Boopathy, Ila Fiete. 15 Jun 2021.
What can linearized neural networks actually say about generalization? Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard. 12 Jun 2021.
The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective. Geoff Pleiss, John P. Cunningham. 11 Jun 2021.
A Neural Tangent Kernel Perspective of GANs. Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari. 10 Jun 2021.
The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization. Mufan Li, Mihai Nica, Daniel M. Roy. 07 Jun 2021.
Priors in Bayesian Deep Learning: A Review. Vincent Fortuin (UQCV, BDL). 14 May 2021.
A Neural Pre-Conditioning Active Learning Algorithm to Reduce Label Complexity. Seo Taek Kong, Soomin Jeon, Dongbin Na, Jaewon Lee, Honglak Lee, Kyu-Hwan Jung. 08 Apr 2021.
Dataset Meta-Learning from Kernel Ridge-Regression. Timothy Nguyen, Zhourong Chen, Jaehoon Lee (DD). 30 Oct 2020.
Multiple Descent: Design Your Own Generalization Curve. Lin Chen, Yifei Min, M. Belkin, Amin Karbasi (DRL). 03 Aug 2020.
A Revision of Neural Tangent Kernel-based Approaches for Neural Networks. Kyungsu Kim, A. Lozano, Eunho Yang (AAML). 02 Jul 2020.
Tensor Programs II: Neural Tangent Kernel for Any Architecture. Greg Yang. 25 Jun 2020.
On the Preservation of Spatio-temporal Information in Machine Learning Applications. Yigit Oktar, Mehmet Türkan. 15 Jun 2020.
To Each Optimizer a Norm, To Each Norm its Generalization. Sharan Vaswani, Reza Babanezhad, Jose Gallego, Aaron Mishkin, Simon Lacoste-Julien, Nicolas Le Roux. 11 Jun 2020.
Modularizing Deep Learning via Pairwise Learning With Kernels. Shiyu Duan, Shujian Yu, José C. Príncipe (MoMe). 12 May 2020.
Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond. Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (BDL). 23 Apr 2020.
A Close Look at Deep Learning with Small Data. Lorenzo Brigato, Luca Iocchi. 28 Mar 2020.
Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime. Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala. 02 Mar 2020.
On the infinite width limit of neural networks with a standard parameterization. Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee. 21 Jan 2020.
Neural Tangents: Fast and Easy Infinite Neural Networks in Python. Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Narain Sohl-Dickstein, S. Schoenholz. 05 Dec 2019.
Information in Infinite Ensembles of Infinitely-Wide Neural Networks. Ravid Shwartz-Ziv, Alexander A. Alemi. 20 Nov 2019.
Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning. Charles H. Martin, Michael W. Mahoney (AI4CE). 02 Oct 2018.