Infinite Width Limits of Self Supervised Neural Networks
arXiv:2411.11176 (v3, latest) · 17 November 2024
Maximilian Fleissner, Gautham Govind Anil, Debarghya Ghoshdastidar
Tags: SSL
Links: arXiv (abs) · PDF · HTML
Papers citing "Infinite Width Limits of Self Supervised Neural Networks" (21 of 21 shown)
| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| When can we Approximate Wide Contrastive Models with Neural Tangent Kernels and Principal Component Analysis? | Gautham Govind Anil, Pascal Esser, Debarghya Ghoshdastidar | | 1 | 13 Mar 2024 |
| Non-Parametric Representation Learning with Kernels | Pascal Esser, Maximilian Fleissner, Debarghya Ghoshdastidar | SSL | 5 | 05 Sep 2023 |
| On the Stepwise Nature of Self-Supervised Learning | James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht | SSL | 35 | 27 Mar 2023 |
| The SSL Interplay: Augmentations, Inductive Bias, and Generalization | Vivien A. Cabannes, B. Kiani, Randall Balestriero, Yann LeCun, A. Bietti | SSL | 33 | 06 Feb 2023 |
| Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels | Simone Bombari, Shayan Kiyani, Marco Mondelli | AAML | 10 | 03 Feb 2023 |
| Contrastive Learning Can Find An Optimal Basis For Approximately View-Invariant Functions | Daniel D. Johnson, Ayoub El Hanchi, Chris J. Maddison | SSL | 24 | 04 Oct 2022 |
| What shapes the loss landscape of self-supervised learning? | Liu Ziyin, Ekdeep Singh Lubana, Masakuni Ueda, Hidenori Tanaka | | 21 | 02 Oct 2022 |
| Joint Embedding Self-Supervised Learning in the Kernel Regime | B. Kiani, Randall Balestriero, Yubei Chen, S. Lloyd, Yann LeCun | SSL | 14 | 29 Sep 2022 |
| The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks | James B. Simon, Madeline Dickens, Dhruva Karkada, M. DeWeese | | 28 | 08 Oct 2021 |
| Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss | Jeff Z. HaoChen, Colin Wei, Adrien Gaidon, Tengyu Ma | SSL | 322 | 08 Jun 2021 |
| VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning | Adrien Bardes, Jean Ponce, Yann LeCun | SSL, DML | 945 | 11 May 2021 |
| Barlow Twins: Self-Supervised Learning via Redundancy Reduction | Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny | SSL | 2,368 | 04 Mar 2021 |
| Learning Transferable Visual Models From Natural Language Supervision | Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, …, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever | CLIP, VLM | 29,926 | 26 Feb 2021 |
| Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data | Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma | SSL | 232 | 07 Oct 2020 |
| On the linearity of large non-linear models: when and why the tangent kernel is constant | Chaoyue Liu, Libin Zhu, M. Belkin | | 143 | 02 Oct 2020 |
| Learning Representations by Maximizing Mutual Information Across Views | Philip Bachman, R. Devon Hjelm, William Buchwalter | SSL | 1,481 | 03 Jun 2019 |
| A Theoretical Analysis of Contrastive Unsupervised Representation Learning | Sanjeev Arora, H. Khandeparkar, M. Khodak, Orestis Plevrakis, Nikunj Saunshi | SSL | 784 | 25 Feb 2019 |
| Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent | Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington | | 1,110 | 18 Feb 2019 |
| On Lazy Training in Differentiable Programming | Lénaïc Chizat, Edouard Oyallon, Francis R. Bach | | 840 | 19 Dec 2018 |
| Neural Tangent Kernel: Convergence and Generalization in Neural Networks | Arthur Jacot, Franck Gabriel, Clément Hongler | | 3,225 | 20 Jun 2018 |
| To understand deep learning we need to understand kernel learning | M. Belkin, Siyuan Ma, Soumik Mandal | | 420 | 05 Feb 2018 |