FitNets: Hints for Thin Deep Nets

19 December 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
FedML

Papers citing "FitNets: Hints for Thin Deep Nets"

50 / 732 papers shown

ARDIR: Improving Robustness using Knowledge Distillation of Internal Representation
Tomokatsu Takahashi, Masanori Yamada, Yuuki Yamanaka, Tomoya Yamashita
25 · 0 · 0 · 01 Nov 2022

Pixel-Wise Contrastive Distillation
Junqiang Huang, Zichao Guo
44 · 4 · 0 · 01 Nov 2022

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
29 · 35 · 0 · 28 Oct 2022

Exploring Effective Distillation of Self-Supervised Speech Models for Automatic Speech Recognition
Yujin Wang, Changli Tang, Ziyang Ma, Zhisheng Zheng, Xie Chen, Weiqiang Zhang
49 · 1 · 0 · 27 Oct 2022

Multimodal Transformer Distillation for Audio-Visual Synchronization
Xuan-Bo Chen, Haibin Wu, Chung-Che Wang, Hung-yi Lee, J. Jang
26 · 3 · 0 · 27 Oct 2022

Improved Feature Distillation via Projector Ensemble
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Zi Huang
39 · 37 · 0 · 27 Oct 2022

Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
Cuong Pham, Tuan Hoang, Thanh-Toan Do
FedML, MQ
40 · 14 · 0 · 27 Oct 2022

Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision
Jiongyu Guo, Defang Chen, Can Wang
22 · 3 · 0 · 25 Oct 2022

Respecting Transfer Gap in Knowledge Distillation
Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang
26 · 23 · 0 · 23 Oct 2022

Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang
VLM
29 · 7 · 0 · 18 Oct 2022

Approximating Continuous Convolutions for Deep Network Compression
Theo W. Costain, V. Prisacariu
36 · 0 · 0 · 17 Oct 2022

AttTrack: Online Deep Attention Transfer for Multi-object Tracking
Keivan Nalaie, Rong Zheng
VOT
23 · 5 · 0 · 16 Oct 2022

Federated Learning with Privacy-Preserving Ensemble Attention Distillation
Xuan Gong, Liangchen Song, Rishi Vedula, Abhishek Sharma, Meng Zheng, ..., Arun Innanje, Terrence Chen, Junsong Yuan, David Doermann, Ziyan Wu
FedML
30 · 27 · 0 · 16 Oct 2022

Boosting Graph Neural Networks via Adaptive Knowledge Distillation
Zhichun Guo, Chunhui Zhang, Yujie Fan, Yijun Tian, Chuxu Zhang, Nitesh Chawla
26 · 32 · 0 · 12 Oct 2022

Linkless Link Prediction via Relational Distillation
Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh Chawla, Neil Shah, Tong Zhao
32 · 41 · 0 · 11 Oct 2022

Knowledge Distillation Transfer Sets and their Impact on Downstream NLU Tasks
Charith Peris, Lizhen Tan, Thomas Gueudré, Turan Gojayev, Vivi Wei, Gokmen Oz
30 · 4 · 0 · 10 Oct 2022

Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing
Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang
31 · 13 · 0 · 09 Oct 2022

Teaching Where to Look: Attention Similarity Knowledge Distillation for Low Resolution Face Recognition
Sungho Shin, Joosoon Lee, Junseok Lee, Yeonguk Yu, Kyoobin Lee
CVBM
24 · 32 · 0 · 29 Sep 2022

PROD: Progressive Distillation for Dense Retrieval
Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, ..., Jian Jiao, Jing Lu, Daxin Jiang, Rangan Majumder, Nan Duan
51 · 27 · 0 · 27 Sep 2022

MiNL: Micro-images based Neural Representation for Light Fields
Ziru Xu, Henan Wang, Zhibo Chen
33 · 1 · 0 · 17 Sep 2022

On-Device Domain Generalization
Kaiyang Zhou, Yuanhan Zhang, Yuhang Zang, Jingkang Yang, Chen Change Loy, Ziwei Liu
OOD
35 · 6 · 0 · 15 Sep 2022

Layerwise Bregman Representation Learning with Applications to Knowledge Distillation
Ehsan Amid, Rohan Anil, Christopher Fifty, Manfred K. Warmuth
28 · 2 · 0 · 15 Sep 2022

Exploring Target Representations for Masked Autoencoders
Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, Rongrong Ji
100 · 50 · 0 · 08 Sep 2022

Generative Adversarial Super-Resolution at the Edge with Knowledge Distillation
Simone Angarano, Francesco Salvetti, Mauro Martini, Marcello Chiaberge
GAN
49 · 21 · 0 · 07 Sep 2022

Recurrent Bilinear Optimization for Binary Neural Networks
Sheng Xu, Yanjing Li, Tian Wang, Teli Ma, Baochang Zhang, Peng Gao, Yu Qiao, Jinhu Lv, Guodong Guo
MQ
19 · 14 · 0 · 04 Sep 2022

A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition
Duc-Quang Vu, T. Phung, Jia-Ching Wang
27 · 9 · 0 · 03 Sep 2022

Membership Inference Attacks by Exploiting Loss Trajectory
Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang
27 · 98 · 0 · 31 Aug 2022

Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective
Jiangmeng Li, Yanan Zhang, Jingyao Wang, Hui Xiong, Chengbo Jiao, Xiaohui Hu, Changwen Zheng, Gang Hua
CML
42 · 28 · 0 · 26 Aug 2022

CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation
Yunyao Mao, Wen-gang Zhou, Zhenbo Lu, Jiajun Deng, Houqiang Li
30 · 38 · 0 · 26 Aug 2022

Masked Autoencoders Enable Efficient Knowledge Distillers
Yutong Bai, Zeyu Wang, Junfei Xiao, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie
CLL
32 · 40 · 0 · 25 Aug 2022

Multi-Granularity Distillation Scheme Towards Lightweight Semi-Supervised Semantic Segmentation
Jie Qin, Jie Wu, Ming Li, Xu Xiao, Min Zheng, Xingang Wang
38 · 6 · 0 · 22 Aug 2022

Rethinking Knowledge Distillation via Cross-Entropy
Zhendong Yang, Zhe Li, Yuan Gong, Tianke Zhang, Shanshan Lao, Chun Yuan, Yu Li
33 · 14 · 0 · 22 Aug 2022

Effectiveness of Function Matching in Driving Scene Recognition
Shingo Yashima
26 · 1 · 0 · 20 Aug 2022

SKDCGN: Source-free Knowledge Distillation of Counterfactual Generative Networks using cGANs
Sameer Ambekar, Matteo Tafuro, Ankit Ankit, Diego van der Mast, Mark Alence, C. Athanasiadis
GAN
28 · 4 · 0 · 08 Aug 2022

Task-Balanced Distillation for Object Detection
Ruining Tang, Zhen-yu Liu, Yangguang Li, Yiguo Song, Hui Liu, Qide Wang, Jing Shao, Guifang Duan, Jianrong Tan
28 · 20 · 0 · 05 Aug 2022

Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction
Xiaoning Sun, Qiongjie Cui, Huaijiang Sun, Bin Li, Weiqing Li, Jianfeng Lu
34 · 7 · 0 · 02 Aug 2022

Domain-invariant Feature Exploration for Domain Generalization
Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie
OOD
35 · 70 · 0 · 25 Jul 2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang
46 · 50 · 0 · 23 Jul 2022

Locality Guidance for Improving Vision Transformers on Tiny Datasets
Kehan Li, Runyi Yu, Zhennan Wang, Li-ming Yuan, Guoli Song, Jie Chen
ViT
34 · 44 · 0 · 20 Jul 2022

Multi-Level Branched Regularization for Federated Learning
Jinkyu Kim, Geeho Kim, Bohyung Han
FedML
27 · 53 · 0 · 14 Jul 2022

Distilled Non-Semantic Speech Embeddings with Binary Neural Networks for Low-Resource Devices
Harlin Lee, Aaqib Saeed
21 · 2 · 0 · 12 Jul 2022

Knowledge Condensation Distillation
Chenxin Li, Mingbao Lin, Zhiyuan Ding, Nie Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Liujuan Cao
42 · 28 · 0 · 12 Jul 2022

HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors
Luting Wang, Xiaojie Li, Yue Liao, Jiang, Jianlong Wu, Fei Wang, Chao Qian, Si Liu
25 · 20 · 0 · 12 Jul 2022

Normalized Feature Distillation for Semantic Segmentation
Tao Liu, Xi Yang, Chenshu Chen
9 · 5 · 0 · 12 Jul 2022

PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient
Weihan Cao, Yifan Zhang, Jianfei Gao, Anda Cheng, Ke Cheng, Jian Cheng
29 · 64 · 0 · 05 Jul 2022

Factorizing Knowledge in Neural Networks
Xingyi Yang, Jingwen Ye, Xinchao Wang
MoMe
47 · 121 · 0 · 04 Jul 2022

PrUE: Distilling Knowledge from Sparse Teacher Networks
Shaopu Wang, Xiaojun Chen, Mengzhen Kou, Jinqiao Shi
27 · 2 · 0 · 03 Jul 2022

FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
Yeonghyeon Lee, Kangwook Jang, Jahyun Goo, Youngmoon Jung, Hoi-Rim Kim
31 · 29 · 0 · 01 Jul 2022

ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State
Xinshao Wang, Yang Hua, Elyor Kodirov, S. Mukherjee, David Clifton, N. Robertson
30 · 6 · 0 · 30 Jun 2022

Teach me how to Interpolate a Myriad of Embeddings
Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis
48 · 2 · 0 · 29 Jun 2022

Page 1 of 15