ResearchTrend.AI

arXiv:2101.04731 (Cited By)
SEED: Self-supervised Distillation For Visual Representation

12 January 2021
Zhiyuan Fang
Jianfeng Wang
Lijuan Wang
Lei Zhang
Yezhou Yang
Zicheng Liu
    SSL

Papers citing "SEED: Self-supervised Distillation For Visual Representation"

50 / 126 papers shown
A Simple Recipe for Competitive Low-compute Self-supervised Vision Models
Quentin Duval
Ishan Misra
Nicolas Ballas
37
9
0
23 Jan 2023
Unifying Synergies between Self-supervised Learning and Dynamic Computation
Tarun Krishna
Ayush K. Rai
Alexandru Drimbarean
Eric Arazo
Paul Albert
Alan F. Smeaton
Kevin McGuinness
Noel E. O'Connor
24
0
0
22 Jan 2023
Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching
Byoungjip Kim
Sun Choi
Dasol Hwang
Moontae Lee
Honglak Lee
33
10
0
07 Jan 2023
TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models
Sucheng Ren
Fangyun Wei
Zheng-Wei Zhang
Han Hu
40
34
0
03 Jan 2023
Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning
J. Denize
Jaonary Rabarisoa
Astrid Orcesi
Romain Hérault
SSL
19
6
0
21 Dec 2022
Establishing a stronger baseline for lightweight contrastive models
Wenye Lin
Yifeng Ding
Zhixiong Cao
Haitao Zheng
27
2
0
14 Dec 2022
Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning
Rui Wang
Dongdong Chen
Zuxuan Wu
Yinpeng Chen
Xiyang Dai
Mengchen Liu
Lu Yuan
Yu-Gang Jiang
VGen
32
87
0
08 Dec 2022
Self-Supervised Learning based on Heat Equation
Yinpeng Chen
Xiyang Dai
Dongdong Chen
Mengchen Liu
Lu Yuan
Zicheng Liu
Youzuo Lin
29
4
0
23 Nov 2022
Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning
Yifei Zhang
Chang-rui Liu
Yu Zhou
Weiping Wang
QiXiang Ye
Xiangyang Ji
SSL
ISeg
BDL
19
6
0
02 Nov 2022
Pixel-Wise Contrastive Distillation
Junqiang Huang
Zichao Guo
42
4
0
01 Nov 2022
Towards Sustainable Self-supervised Learning
Shanghua Gao
Pan Zhou
Ming-Ming Cheng
Shuicheng Yan
CLL
45
7
0
20 Oct 2022
Effective Self-supervised Pre-training on Low-compute Networks without Distillation
Fuwen Tan
F. Saleh
Brais Martínez
35
4
0
06 Oct 2022
Improving Label-Deficient Keyword Spotting Through Self-Supervised Pretraining
H. S. Bovbjerg
Zheng-Hua Tan
VLM
29
3
0
04 Oct 2022
Attention Distillation: self-supervised vision transformer students need more guidance
Kai Wang
Fei Yang
Joost van de Weijer
ViT
27
16
0
03 Oct 2022
Slimmable Networks for Contrastive Self-supervised Learning
Shuai Zhao
Xiaohan Wang
Linchao Zhu
Yi Yang
35
1
0
30 Sep 2022
Improving Self-Supervised Learning by Characterizing Idealized Representations
Yann Dubois
Tatsunori Hashimoto
Stefano Ermon
Percy Liang
SSL
83
40
0
13 Sep 2022
MimCo: Masked Image Modeling Pre-training with Contrastive Teacher
Qiang-feng Zhou
Chaohui Yu
Haowen Luo
Zhibin Wang
Hao Li
VLM
54
20
0
07 Sep 2022
CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation
Yunyao Mao
Wen-gang Zhou
Zhenbo Lu
Jiajun Deng
Houqiang Li
30
38
0
26 Aug 2022
GCISG: Guided Causal Invariant Learning for Improved Syn-to-real Generalization
Gilhyun Nam
Gyeongjae Choi
Kyungmin Lee
OOD
18
4
0
22 Aug 2022
Contrastive Positive Mining for Unsupervised 3D Action Representation Learning
Haoyuan Zhang
Yonghong Hou
Wenjing Zhang
Wanqing Li
SSL
29
38
0
06 Aug 2022
Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Chuanguang Yang
Zhulin An
Helong Zhou
Fuzhen Zhuang
Yongjun Xu
Qian Zhang
39
50
0
23 Jul 2022
Bi-directional Contrastive Learning for Domain Adaptive Semantic Segmentation
Geon Lee
Chanho Eom
Wonkyung Lee
Hyekang Park
Bumsub Ham
13
22
0
22 Jul 2022
DSPNet: Towards Slimmable Pretrained Networks based on Discriminative Self-supervised Learning
Shaoru Wang
Zeming Li
Jin Gao
Liang Li
Weiming Hu
41
0
0
13 Jul 2022
Modality-Aware Contrastive Instance Learning with Self-Distillation for Weakly-Supervised Audio-Visual Violence Detection
Jiashuo Yu
Jin-Yuan Liu
Ying Cheng
Rui Feng
Yuejie Zhang
19
34
0
12 Jul 2022
Synergistic Self-supervised and Quantization Learning
Yunhao Cao
Peiqin Sun
Yechang Huang
Jianxin Wu
Shuchang Zhou
MQ
11
12
0
12 Jul 2022
Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?
Keshigeyan Chandrasegaran
Ngoc-Trung Tran
Yunqing Zhao
Ngai-man Cheung
86
41
0
29 Jun 2022
A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang
Jin Gao
Zeming Li
Jian Sun
Weiming Hu
ViT
67
41
0
28 May 2022
The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains
Haoran Xu
Philipp Koehn
Kenton W. Murray
MoMe
19
4
0
23 May 2022
Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye
Su Lu
De-Chuan Zhan
FedML
22
20
0
04 May 2022
PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining
Yuting Gao
Jinfeng Liu
Zihan Xu
Jinchao Zhang
Ke Li
Rongrong Ji
Chunhua Shen
VLM
CLIP
29
100
0
29 Apr 2022
Selective Cross-Task Distillation
Su Lu
Han-Jia Ye
De-Chuan Zhan
28
0
0
25 Apr 2022
Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks
Zhecan Wang
Noel Codella
Yen-Chun Chen
Luowei Zhou
Xiyang Dai
...
Jianwei Yang
Haoxuan You
Kai-Wei Chang
Shih-Fu Chang
Lu Yuan
VLM
OffRL
31
22
0
22 Apr 2022
Cross-Image Relational Knowledge Distillation for Semantic Segmentation
Chuanguang Yang
Helong Zhou
Zhulin An
Xue Jiang
Yong Xu
Qian Zhang
34
169
0
14 Apr 2022
CoupleFace: Relation Matters for Face Recognition Distillation
Jiaheng Liu
Haoyu Qin
Yichao Wu
Jinyang Guo
Ding Liang
Ke Xu
CVBM
21
19
0
12 Apr 2022
Online Continual Learning for Embedded Devices
Tyler L. Hayes
Christopher Kanan
CLL
38
54
0
21 Mar 2022
DATA: Domain-Aware and Task-Aware Self-supervised Learning
Qing Chang
Junran Peng
Lingxi Xie
Jiajun Sun
Hao Yin
Qi Tian
Zhaoxiang Zhang
42
8
0
17 Mar 2022
Weak Augmentation Guided Relational Self-Supervised Learning
Mingkai Zheng
Shan You
Fei Wang
Chao Qian
Changshui Zhang
Xiaogang Wang
Chang Xu
32
4
0
16 Mar 2022
LoopITR: Combining Dual and Cross Encoder Architectures for Image-Text Retrieval
Jie Lei
Xinlei Chen
Ning Zhang
Meng-xing Wang
Joey Tianyi Zhou
Tamara L. Berg
Licheng Yu
31
12
0
10 Mar 2022
What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
Hangwei Qian
Tian Tian
C. Miao
SSL
28
50
0
12 Feb 2022
SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation
K. Navaneet
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Hamed Pirsiavash
15
19
0
13 Jan 2022
Learning with Label Noise for Image Retrieval by Selecting Interactions
Sarah Ibrahimi
Arnaud Sors
Rafael Sampaio de Rezende
S. Clinchant
NoLa
VLM
24
16
0
20 Dec 2021
Data Efficient Language-supervised Zero-shot Recognition with Optimal Transport Distillation
Bichen Wu
Rui Cheng
Peizhao Zhang
Tianren Gao
Peter Vajda
Joseph E. Gonzalez
VLM
22
45
0
17 Dec 2021
Boosting Contrastive Learning with Relation Knowledge Distillation
Kai Zheng
Yuanjiang Wang
Ye Yuan
SSL
11
13
0
08 Dec 2021
Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation
Amirhossein Dadashzadeh
Alan Whone
Majid Mirmehdi
SSL
21
4
0
07 Dec 2021
A Fast Knowledge Distillation Framework for Visual Recognition
Zhiqiang Shen
Eric P. Xing
VLM
14
45
0
02 Dec 2021
A Practical Contrastive Learning Framework for Single-Image Super-Resolution
Gang Wu
Junjun Jiang
Xianming Liu
44
50
0
27 Nov 2021
Improving Transferability of Representations via Augmentation-Aware Self-Supervision
Hankook Lee
Kibok Lee
Kimin Lee
Honglak Lee
Jinwoo Shin
SSL
23
51
0
18 Nov 2021
GenURL: A General Framework for Unsupervised Representation Learning
Siyuan Li
Zicheng Liu
Z. Zang
Di Wu
Zhiyuan Chen
Stan Z. Li
OOD
3DGS
OffRL
34
9
0
27 Oct 2021
MTGLS: Multi-Task Gaze Estimation with Limited Supervision
Abdulaziz Shamsah
Munawar Hayat
Seth Hutchinson
Jarrod Knibbe
CVBM
41
21
0
23 Oct 2021
TLDR: Twin Learning for Dimensionality Reduction
Yannis Kalantidis
Carlos Lassance
Jon Almazán
Diane Larlus
SSL
27
10
0
18 Oct 2021