Paraphrasing Complex Network: Network Compression via Factor Transfer
Jangho Kim, Seonguk Park, Nojun Kwak
arXiv:1802.04977 · 14 February 2018
Papers citing "Paraphrasing Complex Network: Network Compression via Factor Transfer" (50 of 112 papers shown):
Knowledge Distillation of Transformer-based Language Models Revisited · Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang · [VLM] · 29 Jun 2022
Revisiting Architecture-aware Knowledge Distillation: Smaller Models and Faster Search · Taehyeon Kim, Heesoo Myeong, Se-Young Yun · 27 Jun 2022
Parameter-Efficient and Student-Friendly Knowledge Distillation · Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao · 28 May 2022
Fast Object Placement Assessment · Li Niu, Qingyang Liu, Zhenchen Liu, Jiangtong Li · 28 May 2022
Knowledge Distillation from A Stronger Teacher · Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu · 21 May 2022
Knowledge Distillation Meets Open-Set Semi-Supervised Learning · Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos · 13 May 2022
Spot-adaptive Knowledge Distillation · Jie Song, Ying Chen, Jingwen Ye, Mingli Song · 05 May 2022
Localization Distillation for Object Detection · Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, Jun Wang, W. Zuo, Ming-Ming Cheng · 12 Apr 2022
Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability · Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi · 10 Mar 2022
Meta Knowledge Distillation · Jihao Liu, Boxiao Liu, Hongsheng Li, Yu Liu · 16 Feb 2022
Enabling Deep Learning on Edge Devices through Filter Pruning and Knowledge Transfer · Kaiqi Zhao, Yitao Chen, Ming Zhao · 22 Jan 2022
It's All in the Head: Representation Knowledge Distillation through Classifier Sharing · Emanuel Ben-Baruch, M. Karklinsky, Yossi Biton, Avi Ben-Cohen, Hussam Lawen, Nadav Zamir · 18 Jan 2022
Information Theoretic Representation Distillation · Roy Miles, Adrian Lopez-Rodriguez, K. Mikolajczyk · [MQ] · 01 Dec 2021
Self-slimmed Vision Transformer · Zhuofan Zong, Kunchang Li, Guanglu Song, Yali Wang, Yu Qiao, B. Leng, Yu Liu · [ViT] · 24 Nov 2021
EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation · Lin Wang, Yujeong Chae, Sung-Hoon Yoon, Tae-Kyun Kim, Kuk-Jin Yoon · 24 Nov 2021
Local-Selective Feature Distillation for Single Image Super-Resolution · Seonguk Park, Nojun Kwak · 22 Nov 2021
Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models · J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim · 05 Nov 2021
Distilling Object Detectors with Feature Richness · Zhixing Du, Rui Zhang, Ming-Fang Chang, Xishan Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen · [ObjD] · 01 Nov 2021
FedHe: Heterogeneous Models and Communication-Efficient Federated Learning · Chan Yun Hin, Edith C.H. Ngai · [FedML] · 19 Oct 2021
Dual Transfer Learning for Event-based End-task Prediction via Pluggable Event to Image Translation · Lin Wang, Yujeong Chae, Kuk-Jin Yoon · 04 Sep 2021
Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision · Bo-wen Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li · 30 Aug 2021
CoCo DistillNet: a Cross-layer Correlation Distillation Network for Pathological Gastric Cancer Segmentation · Wenxuan Zou, Muyi Sun · 27 Aug 2021
LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector · Xiaoyang Guo, Shaoshuai Shi, Xiaogang Wang, Hongsheng Li · [3DPC] · 18 Aug 2021
Online Knowledge Distillation for Efficient Pose Estimation · Zheng Li, Jingwen Ye, Xiuming Zhang, Ying Huang, Zhigeng Pan · 04 Aug 2021
Double Similarity Distillation for Semantic Image Segmentation · Yingchao Feng, Xian Sun, Wenhui Diao, Jihao Li, Xin Gao · 19 Jul 2021
Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation · Bingchen Zhao, Kai Han · 07 Jul 2021
A Light-weight Deep Human Activity Recognition Algorithm Using Multi-knowledge Distillation · Runze Chen, Haiyong Luo, Fang Zhao, Xuechun Meng, Zhiqing Xie, Yida Zhu · [VLM, HAI] · 06 Jul 2021
PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation · Jangho Kim, Simyung Chang, Nojun Kwak · 25 Jun 2021
Initialization and Regularization of Factorized Neural Layers · M. Khodak, Neil A. Tenenholtz, Lester W. Mackey, Nicolò Fusi · 03 May 2021
Distilling Knowledge via Knowledge Review · Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia · 19 Apr 2021
Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression · Xin Ding, Z. J. Wang, Zuheng Xu, Z. Jane Wang, William J. Welch · 07 Apr 2021
Students are the Best Teacher: Exit-Ensemble Distillation with Multi-Exits · Hojung Lee, Jong-Seok Lee · 01 Apr 2021
Adaptive Configuration of In Situ Lossy Compression for Cosmology Simulations via Fine-Grained Rate-Quality Modeling · Sian Jin, Jesus Pulido, Pascal Grosset, Jiannan Tian, Dingwen Tao, J. Ahrens · 01 Apr 2021
Complementary Relation Contrastive Distillation · Jinguo Zhu, Shixiang Tang, Dapeng Chen, Shijie Yu, Yakun Liu, A. Yang, M. Rong, Xiaohua Wang · 29 Mar 2021
Distilling Object Detectors via Decoupled Features · Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu · 26 Mar 2021
Prototype-based Personalized Pruning · Jangho Kim, Simyung Chang, Sungrack Yun, Nojun Kwak · 25 Mar 2021
ReCU: Reviving the Dead Weights in Binary Neural Networks · Zihan Xu, Mingbao Lin, Jianzhuang Liu, Jie Chen, Ling Shao, Yue Gao, Yonghong Tian, Rongrong Ji · [MQ] · 23 Mar 2021
Student Network Learning via Evolutionary Knowledge Distillation · Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge · 23 Mar 2021
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation · Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon · 15 Mar 2021
Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation · Xiaoyang Qu, Jianzong Wang, Jing Xiao · 23 Feb 2021
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup · Guodong Xu, Ziwei Liu, Chen Change Loy · [UQCV] · 17 Dec 2020
Distilling Knowledge by Mimicking Features · G. Wang, Yifan Ge, Jianxin Wu · 03 Nov 2020
Anti-Distillation: Improving reproducibility of deep networks · G. Shamir, Lorenzo Coviello · 19 Oct 2020
Densely Guided Knowledge Distillation using Multiple Teacher Assistants · Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang · 18 Sep 2020
Differentiable Feature Aggregation Search for Knowledge Distillation · Yushuo Guan, Pengyu Zhao, Bingxuan Wang, Yuanxing Zhang, Cong Yao, Kaigui Bian, Jian Tang · [FedML] · 02 Aug 2020
Knowledge Distillation Meets Self-Supervision · Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy · [FedML] · 12 Jun 2020
Knowledge Distillation: A Survey · Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · [VLM] · 09 Jun 2020
Self-Distillation as Instance-Specific Label Smoothing · Zhilu Zhang, M. Sabuncu · 09 Jun 2020
ResKD: Residual-Guided Knowledge Distillation · Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li · 08 Jun 2020
SuperMix: Supervising the Mixing Data Augmentation · Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi · 10 Mar 2020