ResearchTrend.AI
arXiv: 2202.14037
Understanding Contrastive Learning Requires Incorporating Inductive Biases
28 February 2022
Nikunj Saunshi, Jordan T. Ash, Surbhi Goel, Dipendra Kumar Misra, Cyril Zhang, Sanjeev Arora, Sham Kakade, A. Krishnamurthy
SSL

Papers citing "Understanding Contrastive Learning Requires Incorporating Inductive Biases"

25 of 75 citing papers shown.
Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data
Ryumei Nakada, Halil Ibrahim Gulluk, Zhun Deng, Wenlong Ji, James Zou, Linjun Zhang
SSL, VLM
13 Feb 2023

Evaluating Self-Supervised Learning via Risk Decomposition
Yann Dubois, Tatsunori Hashimoto, Percy Liang
06 Feb 2023

The SSL Interplay: Augmentations, Inductive Bias, and Generalization
Vivien A. Cabannes, B. Kiani, Randall Balestriero, Yann LeCun, A. Bietti
SSL
06 Feb 2023

Deciphering the Projection Head: Representation Evaluation Self-supervised Learning
Jiajun Ma, Tianyang Hu, Wei Cao
28 Jan 2023

Understanding Self-Supervised Pretraining with Part-Aware Representation Learning
Jie Zhu, Jiyang Qi, Mingyu Ding, Xiaokang Chen, Ping Luo, Xinggang Wang, Wenyu Liu, Leye Wang, Jingdong Wang
SSL
27 Jan 2023

GEDI: GEnerative and DIscriminative Training for Self-Supervised Learning
Emanuele Sansone, Robin Manhaeve
SSL
27 Dec 2022

A Theoretical Study of Inductive Biases in Contrastive Learning
Jeff Z. HaoChen, Tengyu Ma
UQCV, SSL
27 Nov 2022

ProtoX: Explaining a Reinforcement Learning Agent via Prototyping
Ronilo Ragodos, Tong Wang, Qihang Lin, Xun Zhou
06 Nov 2022

Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models
Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma
AI4CE
25 Oct 2022

The Curious Case of Benign Memorization
Sotiris Anagnostidis, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann
AAML
25 Oct 2022

A Kernel-Based View of Language Model Fine-Tuning
Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora
VLM
11 Oct 2022

Contrastive Learning Can Find An Optimal Basis For Approximately View-Invariant Functions
Daniel D. Johnson, Ayoub El Hanchi, Chris J. Maddison
SSL
04 Oct 2022

What shapes the loss landscape of self-supervised learning?
Liu Ziyin, Ekdeep Singh Lubana, Masakuni Ueda, Hidenori Tanaka
02 Oct 2022

Improving Self-Supervised Learning by Characterizing Idealized Representations
Yann Dubois, Tatsunori Hashimoto, Stefano Ermon, Percy Liang
SSL
13 Sep 2022

Analyzing Data-Centric Properties for Graph Contrastive Learning
Puja Trivedi, Ekdeep Singh Lubana, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan
04 Aug 2022

Integrating Prior Knowledge in Contrastive Learning with Kernel
Benoit Dufumier, C. Barbano, Robin Louiset, Edouard Duchesnay, Pietro Gori
SSL
03 Jun 2022

Understanding the Role of Nonlinearity in Training Dynamics of Contrastive Learning
Yuandong Tian
MLT
02 Jun 2022

Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding
Tianyang Hu, Zhili Liu, Fengwei Zhou, Wei Cao, Weiran Huang
SSL
30 May 2022

Orchestra: Unsupervised Federated Learning via Globally Consistent Clustering
Ekdeep Singh Lubana, Chi Ian Tang, F. Kawsar, Robert P. Dick, Akhil Mathur
FedML
23 May 2022

The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning
Zixin Wen, Yuanzhi Li
SSL
12 May 2022

Do More Negative Samples Necessarily Hurt in Contrastive Learning?
Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath
03 May 2022

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
04 May 2021

Contrastive Learning Inverts the Data Generating Process
Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel
SSL
17 Feb 2021

COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song
16 Feb 2021

For self-supervised learning, Rationality implies generalization, provably
Yamini Bansal, Gal Kaplun, Boaz Barak
OOD, SSL
16 Oct 2020