FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
19 December 2014

Papers citing "FitNets: Hints for Thin Deep Nets" (50 of 725 shown)

Improving Small Footprint Few-shot Keyword Spotting with Supervision on Auxiliary Data
Seunghan Yang, Byeonggeun Kim, Kyuhong Shim, Simyoung Chang · 31 Aug 2023

Machine Unlearning Methodology base on Stochastic Teacher Network
Xulong Zhang, Jianzong Wang, Ning Cheng, Yifu Sun, Chuanyao Zhang, Jing Xiao · MU · 28 Aug 2023

Rethinking Client Drift in Federated Learning: A Logit Perspective
Yu-bao Yan, Chun-Mei Feng, Wangmeng Zuo, Mang Ye, Ping Li, Rick Siow Mong Goh, Lei Zhu, C. L. Philip Chen · FedML · 20 Aug 2023

Learning to Distill Global Representation for Sparse-View CT
Zilong Li, Chenglong Ma, Jie Chen, Junping Zhang, Hongming Shan · 16 Aug 2023

Multi-Label Knowledge Distillation
Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang · 12 Aug 2023

Foreground Object Search by Distilling Composite Image Feature
Bo Zhang, Jiacheng Sui, Li Niu · 09 Aug 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu · 08 Aug 2023

Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models
Seungcheol Park, Ho-Jin Choi, U. Kang · VLM · 07 Aug 2023

Cross-dimensional transfer learning in medical image segmentation with deep learning
Hicham Messaoudi, Ahror Belaid, Douraied Ben Salem, Pierre-Henri Conze · MedIm · 29 Jul 2023

Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data
Qing Xu, Min-man Wu, Xiaoli Li, K. Mao, Zhenghua Chen · 07 Jul 2023

Review helps learn better: Temporal Supervised Knowledge Distillation
Dongwei Wang, Zhi Han, Yanmei Wang, Xi'ai Chen, Baichen Liu, Yandong Tang · 03 Jul 2023

Q-YOLO: Efficient Inference for Real-time Object Detection
Mingze Wang, H. Sun, Jun Shi, Xuhui Liu, Baochang Zhang, Xianbin Cao · ObjD · 01 Jul 2023

Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler
Shaohui Lin, Wenxuan Huang, Jiao Xie, Baochang Zhang, Yunhang Shen, Zhou Yu, Jungong Han, David Doermann · 01 Jul 2023

Reducing the gap between streaming and non-streaming Transducer-based ASR by adaptive two-stage knowledge distillation
Haitao Tang, Yu Fu, Lei Sun, Jiabin Xue, Dan Liu, ..., Zhiqiang Ma, Minghui Wu, Jia Pan, Genshun Wan, Ming'En Zhao · 27 Jun 2023

Cross Architecture Distillation for Face Recognition
Weisong Zhao, Xiangyu Zhu, Zhixiang He, Xiaoyu Zhang, Zhen Lei · CVBM · 26 Jun 2023

Feature Adversarial Distillation for Point Cloud Classification
Yuxing Lee, Wei-Chieh Wu · 3DPC · 25 Jun 2023

CrossKD: Cross-Head Knowledge Distillation for Object Detection
Jiabao Wang, Yuming Chen, Zhaohui Zheng, Xiang Li, Ming-Ming Cheng, Qibin Hou · 20 Jun 2023

Depth and DOF Cues Make A Better Defocus Blur Detector
Yuxin Jin, Ming Qian, Jincheng Xiong, Nan Xue, Guisong Xia · 20 Jun 2023

LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation
Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, Tuo Zhao · 20 Jun 2023

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning
Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, Dacheng Tao · 28 May 2023

Improving Knowledge Distillation via Regularizing Feature Norm and Direction
Yuzhu Wang, Lechao Cheng, Manni Duan, Yongheng Wang, Zunlei Feng, Shu Kong · 26 May 2023

Knowledge Diffusion for Distillation
Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chao Qian, Chang Xu · 25 May 2023

Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui, Zhuotao Tian, Zhisheng Zhong, Xiaojuan Qi, Bei Yu, Hanwang Zhang · 23 May 2023

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li, Yuxuan Li, Penghai Zhao, Renjie Song, Xiang Li, Jian Yang · 22 May 2023

Student-friendly Knowledge Distillation
Mengyang Yuan, Bo Lang, Fengnan Quan · 18 May 2023

Analyzing Compression Techniques for Computer Vision
Maniratnam Mandal, Imran Khan · 14 May 2023

CORSD: Class-Oriented Relational Self Distillation
Muzhou Yu, S. Tan, Kailu Wu, Runpei Dong, Linfeng Zhang, Kaisheng Ma · 28 Apr 2023

Pre-trained Embeddings for Entity Resolution: An Experimental Analysis [Experiment, Analysis & Benchmark]
Alexandros Zeakis, G. Papadakis, Dimitrios Skoutas, Manolis Koubarakis · 24 Apr 2023

Function-Consistent Feature Distillation
Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen · 24 Apr 2023

Knowledge Distillation Under Ideal Joint Classifier Assumption
Huayu Li, Xiwen Chen, G. Ditzler, Janet Roveda, Ao Li · 19 Apr 2023

Constructing Deep Spiking Neural Networks from Artificial Neural Networks with Knowledge Distillation
Qi Xu, Yaxin Li, Jiangrong Shen, Jian K. Liu, Huajin Tang, Gang Pan · 12 Apr 2023

Grouped Knowledge Distillation for Deep Face Recognition
Weisong Zhao, Xiangyu Zhu, Kaiwen Guo, Xiaoyu Zhang, Zhen Lei · CVBM · 10 Apr 2023

Geometric-aware Pretraining for Vision-centric 3D Object Detection
Linyan Huang, Huijie Wang, J. Zeng, Shengchuan Zhang, Liujuan Cao, Junchi Yan, Hongyang Li · 3DPC · 06 Apr 2023

Self-Distillation for Gaussian Process Regression and Classification
Kenneth Borup, L. Andersen · 05 Apr 2023

Q-DETR: An Efficient Low-Bit Quantized Detection Transformer
Sheng Xu, Yanjing Li, Mingbao Lin, Penglei Gao, Guodong Guo, Jinhu Lu, Baochang Zhang · MQ · 01 Apr 2023

DIME-FM: DIstilling Multimodal and Efficient Foundation Models
Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, Xide Xia · VLM · 31 Mar 2023

CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
Ge Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, Guohao Li · SyDa, ALM · 31 Mar 2023

Decomposed Cross-modal Distillation for RGB-based Temporal Action Detection
Pilhyeon Lee, Taeoh Kim, Minho Shim, Dongyoon Wee, H. Byun · 30 Mar 2023

Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
Y. Cheng, Yichao Yan, Wenhan Zhu, Ye Pan, Bowen Pan, Xiaokang Yang · 3DH · 28 Mar 2023

UniDistill: A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird's-Eye View
Shengchao Zhou, Weizhou Liu, Chen Hu, Shuchang Zhou, Chaoxiang Ma · 27 Mar 2023

CAT: Collaborative Adversarial Training
Xingbin Liu, Huafeng Kuang, Xianming Lin, Yongjian Wu, Rongrong Ji · AAML · 27 Mar 2023

Decoupled Multimodal Distilling for Emotion Recognition
Yong Li, Yuan-Zheng Wang, Zhen Cui · 24 Mar 2023

Exploiting Unlabelled Photos for Stronger Fine-Grained SBIR
Aneeshan Sain, A. Bhunia, Subhadeep Koley, Pinaki Nath Chowdhury, Soumitri Chattopadhyay, Tao Xiang, Yi-Zhe Song · 24 Mar 2023

From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li · 23 Mar 2023

MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation
Vitaliy Kinakh, M. Drozdova, Slava Voloshynovskiy · 21 Mar 2023

Performance-aware Approximation of Global Channel Pruning for Multitask CNNs
Hancheng Ye, Bo Zhang, Tao Chen, Jiayuan Fan, Bin Wang · 21 Mar 2023

Knowledge Distillation from Single to Multi Labels: an Empirical Study
Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu · VLM · 15 Mar 2023

SCPNet: Semantic Scene Completion on Point Cloud
Zhaoyang Xia, You-Chen Liu, Xin Li, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao · 13 Mar 2023

DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione · 02 Mar 2023

Distillation from Heterogeneous Models for Top-K Recommendation
SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu · VLM · 02 Mar 2023