ResearchTrend.AI

Domain-Agnostic Clustering with Self-Distillation

arXiv:2111.12170, 23 November 2021
Mohammed Adnan, Yani Andrew Ioannou, Chuan-Yung Tsai, Graham W. Taylor
Topics: FedML, SSL, OOD
Papers citing "Domain-Agnostic Clustering with Self-Distillation"

27 papers shown (topic tags in brackets).
  1. ScaleNet: An Unsupervised Representation Learning Method for Limited Information. Huili Huang, M. M. Roozbahani. [SSL] 03 Oct 2023.
  2. Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation. Yaling Tao, Kentaro Takagi, Kouta Nakata. 31 May 2021.
  3. Representation Learning for Clustering via Building Consensus. A. Deshmukh, Jayanth Reddy Regatti, Eren Manavoglu, Ürün Dogan. [SSL] 04 May 2021.
  4. Barlow Twins: Self-Supervised Learning via Redundancy Reduction. Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny. [SSL] 04 Mar 2021.
  5. Towards Domain-Agnostic Contrastive Learning. Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu H. Pham, Quoc V. Le. [SSL] 09 Nov 2020.
  6. Viewmaker Networks: Learning Views for Unsupervised Representation Learning. Alex Tamkin, Mike Wu, Noah D. Goodman. [SSL] 14 Oct 2020.
  7. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin. [OCL, SSL] 17 Jun 2020.
  8. Bootstrap your own latent: A new approach to self-supervised Learning. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Harvey Richemond, ..., M. G. Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko. [SSL] 13 Jun 2020.
  9. Knowledge Distillation: A Survey. Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. [VLM] 09 Jun 2020.
  10. Self-Distillation as Instance-Specific Label Smoothing. Zhilu Zhang, M. Sabuncu. 09 Jun 2020.
  11. Self-Distillation Amplifies Regularization in Hilbert Space. H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett. 13 Feb 2020.
  12. A Simple Framework for Contrastive Learning of Visual Representations. Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E. Hinton. [SSL] 13 Feb 2020.
  13. ClusterFit: Improving Generalization of Visual Representations. Xueting Yan, Ishan Misra, Abhinav Gupta, Deepti Ghadiyaram, D. Mahajan. [SSL, VLM] 06 Dec 2019.
  14. Momentum Contrast for Unsupervised Visual Representation Learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross B. Girshick. [SSL] 13 Nov 2019.
  15. Learning Lightweight Lane Detection CNNs by Self Attention Distillation. Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy. 02 Aug 2019.
  16. Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation. Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma. [FedML] 17 May 2019.
  17. Snapshot Distillation: Teacher-Student Optimization in One Generation. Chenglin Yang, Lingxi Xie, Chi Su, Alan Yuille. 01 Dec 2018.
  18. Data Augmentation using Random Image Cropping and Patching for Deep CNNs. Ryo Takahashi, Takashi Matsubara, K. Uehara. 22 Nov 2018.
  19. Self-Referenced Deep Learning. Xu Lan, Xiatian Zhu, S. Gong. 19 Nov 2018.
  20. Data augmentation instead of explicit regularization. Alex Hernández-García, Peter König. 11 Jun 2018.
  21. Unsupervised Representation Learning by Predicting Image Rotations. Spyros Gidaris, Praveer Singh, N. Komodakis. [OOD, SSL, DRL] 21 Mar 2018.
  22. mixup: Beyond Empirical Risk Minimization. Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz. [NoLa] 25 Oct 2017.
  23. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. M. Noroozi, Paolo Favaro. [SSL] 30 Mar 2016.
  24. Colorful Image Colorization. Richard Y. Zhang, Phillip Isola, Alexei A. Efros. 28 Mar 2016.
  25. Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. [MedIm] 10 Dec 2015.
  26. Distilling the Knowledge in a Neural Network. Geoffrey E. Hinton, Oriol Vinyals, J. Dean. [FedML] 09 Mar 2015.
  27. Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba. [ODL] 22 Dec 2014.