arXiv: 2101.04731
SEED: Self-supervised Distillation For Visual Representation
12 January 2021
Zhiyuan Fang
Jianfeng Wang
Lijuan Wang
Lei Zhang
Yezhou Yang
Zicheng Liu
SSL
Papers citing
"SEED: Self-supervised Distillation For Visual Representation"
50 / 126 papers shown
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang
Dong Bok Lee
Hyungjoon Jang
Sung Ju Hwang
VLM
12 May 2025
COMODO: Cross-Modal Video-to-IMU Distillation for Efficient Egocentric Human Activity Recognition
Baiyu Chen
Wilson Wongso
Zechen Li
Yonchanok Khaokaew
Hao Xue
Flora D. Salim
10 Mar 2025
Unsupervised Parameter Efficient Source-free Post-pretraining
Abhishek Jha
Tinne Tuytelaars
Yuki M. Asano
OOD
28 Feb 2025
Keypoint Aware Masked Image Modelling
Madhava Krishna
Convin.AI
03 Jan 2025
Wearable Accelerometer Foundation Models for Health via Knowledge Distillation
Salar Abbaspourazad
Anshuman Mishra
Joseph D. Futoma
Andrew C. Miller
Ian Shapiro
15 Dec 2024
Multi-Token Enhancing for Vision Representation Learning
Zhong-Yu Li
Yu-Song Hu
Bo Yin
Ming-Ming Cheng
24 Nov 2024
Teaching VLMs to Localize Specific Objects from In-context Examples
Sivan Doveh
Nimrod Shabtay
Wei Lin
Eli Schwartz
Hilde Kuehne
...
Leonid Karlinsky
James Glass
Assaf Arbelle
S. Ullman
Muhammad Jehanzeb Mirza
VLM
20 Nov 2024
GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models
Muhammad Jehanzeb Mirza
Mengjie Zhao
Zhuoyuan Mao
Sivan Doveh
Wei Lin
...
Yuki Mitsufuji
Horst Possegger
Rogerio Feris
Leonid Karlinsky
James Glass
VLM
08 Oct 2024
Simple Unsupervised Knowledge Distillation With Space Similarity
Aditya Singh
Haohan Wang
20 Sep 2024
Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning
Amin Karimi Monsefi
Mengxi Zhou
Nastaran Karimi Monsefi
Ser-Nam Lim
Wei-Lun Chao
R. Ramnath
16 Sep 2024
Self-Masking Networks for Unsupervised Adaptation
Alfonso Taboada Warmerdam
Mathilde Caron
Yuki M. Asano
11 Sep 2024
POA: Pre-training Once for Models of All Sizes
Yingying Zhang
Xin Guo
Jiangwei Lao
Lei Yu
Lixiang Ru
Jian Wang
Guo Ye
Huimei He
Jingdong Chen
Ming Yang
02 Aug 2024
Unsqueeze [CLS] Bottleneck to Learn Rich Representations
Qing Su
Shihao Ji
24 Jul 2024
Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
Marco Mistretta
Alberto Baldrati
Marco Bertini
Andrew D. Bagdanov
VPVLM
VLM
03 Jul 2024
Federated Graph Semantic and Structural Learning
Wenke Huang
Guancheng Wan
Mang Ye
Bo Du
FedML
27 Jun 2024
Lightweight Model Pre-training via Language Guided Knowledge Distillation
Mingsheng Li
Lin Zhang
Mingzhen Zhu
Zilong Huang
Gang Yu
Jiayuan Fan
Tao Chen
17 Jun 2024
Relational Self-supervised Distillation with Compact Descriptors for Image Copy Detection
Juntae Kim
Sungwon Woo
Jongho Nang
28 May 2024
Retro: Reusing teacher projection head for efficient embedding distillation on Lightweight Models via Self-supervised Learning
Khanh-Binh Nguyen
Chae Jung Park
24 May 2024
A Review on Discriminative Self-supervised Learning Methods in Computer Vision
Nikolaos Giakoumoglou
Tania Stathaki
Athanasios Gkelias
SSL
08 May 2024
A Generalization Theory of Cross-Modality Distillation with Contrastive Learning
Hangyu Lin
Chen Liu
Chengming Xu
Zhengqi Gao
Yanwei Fu
Yuan Yao
VLM
06 May 2024
On Improving the Algorithm-, Model-, and Data- Efficiency of Self-Supervised Learning
Yunhao Cao
Jianxin Wu
30 Apr 2024
Low-Rank Knowledge Decomposition for Medical Foundation Models
Yuhang Zhou
Haolin Li
Siyuan Du
Jiangchao Yao
Ya-Qin Zhang
Yanfeng Wang
26 Apr 2024
An Experimental Study on Exploring Strong Lightweight Vision Transformers via Masked Image Modeling Pre-Training
Jin Gao
Shubo Lin
Shaoru Wang
Yutong Kou
Zeming Li
Liang Li
Congxuan Zhang
Xiaoqin Zhang
Yizheng Wang
Weiming Hu
18 Apr 2024
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
Sean Farhat
Deming Chen
04 Apr 2024
CORN: Contact-based Object Representation for Nonprehensile Manipulation of General Unseen Objects
Yoonyoung Cho
Junhyek Han
Yoontae Cho
Beomjoon Kim
16 Mar 2024
Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation
Jiyong Li
Dilshod Azizov
Yang Li
Shangsong Liang
CLL
07 Mar 2024
On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models
Juliette Marrie
Michael Arbel
Julien Mairal
Diane Larlus
VLM
MQ
17 Feb 2024
Let All be Whitened: Multi-teacher Distillation for Efficient Visual Retrieval
Zhe Ma
Jianfeng Dong
Shouling Ji
Zhenguang Liu
Xuhong Zhang
Zonghui Wang
Sifeng He
Feng Qian
Xiaobo Zhang
Lei Yang
15 Dec 2023
Augmentation-Free Dense Contrastive Knowledge Distillation for Efficient Semantic Segmentation
Jiawei Fan
Chao Li
Xiaolong Liu
Meina Song
Anbang Yao
07 Dec 2023
An Efficient Self-Supervised Cross-View Training For Sentence Embedding
Peerat Limkonchotiwat
Wuttikorn Ponwitayarat
Lalita Lowphansirikul
Can Udomcharoenchaikit
E. Chuangsuwanich
Sarana Nutanong
SSL
AI4TS
06 Nov 2023
Towards Generalized Multi-stage Clustering: Multi-view Self-distillation
Jiatai Wang
Zhiwei Xu
Xin Wang
Tao Li
29 Oct 2023
I²MD: 3D Action Representation Learning with Inter- and Intra-modal Mutual Distillation
Yunyao Mao
Jiajun Deng
Wen-gang Zhou
Zhenbo Lu
Wanli Ouyang
Houqiang Li
VLM
24 Oct 2023
Neural Language Model Pruning for Automatic Speech Recognition
Leonardo Emili
Thiago Fraga-Silva
Ernest Pusateri
M. Nußbaum-Thom
Youssef Oualil
05 Oct 2023
Unsupervised Pretraining for Fact Verification by Language Model Distillation
A. Bazaga
Pietro Lió
Bo Dai
HILM
28 Sep 2023
Heterogeneous Generative Knowledge Distillation with Masked Image Modeling
Ziming Wang
Shumin Han
Xiaodi Wang
Jing Hao
Xianbin Cao
Baochang Zhang
VLM
18 Sep 2023
COMEDIAN: Self-Supervised Learning and Knowledge Distillation for Action Spotting using Transformers
J. Denize
Mykola Liashuha
Jaonary Rabarisoa
Astrid Orcesi
Romain Hérault
ViT
03 Sep 2023
Improving Small Footprint Few-shot Keyword Spotting with Supervision on Auxiliary Data
Seunghan Yang
Byeonggeun Kim
Kyuhong Shim
Simyoung Chang
31 Aug 2023
A General-Purpose Self-Supervised Model for Computational Pathology
Richard J. Chen
Tong Ding
Ming Y. Lu
Drew F. K. Williamson
Guillaume Jaume
...
Judy J. Wang
Walt Williams
L. Le
Georg Gerber
Faisal Mahmood
MedIm
29 Aug 2023
Online Prototype Learning for Online Continual Learning
Yujie Wei
Jiaxin Ye
Zhizhong Huang
Junping Zhang
Hongming Shan
CLL
OnRL
01 Aug 2023
CLIP-KD: An Empirical Study of CLIP Model Distillation
Chuanguang Yang
Zhulin An
Libo Huang
Junyu Bi
Xinqiang Yu
Hansheng Yang
Boyu Diao
Yongjun Xu
VLM
24 Jul 2023
Distilling Large Vision-Language Model with Out-of-Distribution Generalizability
Xuanlin Li
Yunhao Fang
Minghua Liu
Z. Ling
Z. Tu
Haoran Su
VLM
06 Jul 2023
Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training
Utku Ozbulak
Hyun Jung Lee
Beril Boga
Esla Timothy Anzaku
Ho-min Park
Arnout Van Messem
W. D. Neve
J. Vankerschaver
DiffM
23 May 2023
Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion
Yun Luo
Xiaotian Lin
Zhen Yang
Fandong Meng
Jie Zhou
Yue Zhang
CLL
20 May 2023
MSVQ: Self-Supervised Learning with Multiple Sample Views and Queues
Chengwei Peng
Xianzhong Long
Yun Li
SSL
09 May 2023
Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning
Kaiyou Song
Jin Xie
Shanyi Zhang
Zimeng Luo
13 Apr 2023
SIESTA: Efficient Online Continual Learning with Sleep
Md Yousuf Harun
Jhair Gallardo
Tyler L. Hayes
Ronald Kemker
Christopher Kanan
CLL
19 Mar 2023
Three Guidelines You Should Know for Universally Slimmable Self-Supervised Learning
Yunhao Cao
Peiqin Sun
Shuchang Zhou
13 Mar 2023
Extending global-local view alignment for self-supervised learning with remote sensing imagery
Xinye Wanyan
Sachith Seneviratne
Shuchang Shen
M. Kirley
12 Mar 2023
New Insights for the Stability-Plasticity Dilemma in Online Continual Learning
Dahuin Jung
Dongjin Lee
Sunwon Hong
Hyemi Jang
Ho Bae
Sungroh Yoon
CLL
17 Feb 2023
Unbiased and Efficient Self-Supervised Incremental Contrastive Learning
Cheng Ji
Jianxin Li
Hao Peng
Jia Wu
Xingcheng Fu
Qingyun Sun
Phillip S. Yu
SSL
CLL
28 Jan 2023