Distilling Knowledge by Mimicking Features
arXiv:2011.01424 · 3 November 2020
G. Wang, Yifan Ge, Jianxin Wu
Papers citing "Distilling Knowledge by Mimicking Features" (20 / 20 papers shown)
| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| Quantization without Tears | Minghao Fu, Hao Yu, Jie Shao, Junjie Zhou, Ke Zhu, Jianxin Wu | MQ | 1 | 21 Nov 2024 |
| LLaVA-KD: A Framework of Distilling Multimodal Large Language Models | Y. Cai, Jiangning Zhang, Haoyang He, Xinwei He, Ao Tong, Zhenye Gan, Chengjie Wang, X. Bai | VLM | 2 | 21 Oct 2024 |
| Shape-intensity knowledge distillation for robust medical image segmentation | Wenhui Dong, Bo Du, Yongchao Xu | — | 0 | 26 Sep 2024 |
| Vision-Based Detection of Uncooperative Targets and Components on Small Satellites | Hannah Grauer, E. Lupu, Connor T. Lee, Soon-Jo Chung, Darren Rowen, Benjamen P. Bycroft, Phaedrus Leeds, John Brader | — | 1 | 22 Aug 2024 |
| Overcoming Uncertain Incompleteness for Robust Multimodal Sequential Diagnosis Prediction via Curriculum Data Erasing Guided Knowledge Distillation | Heejoon Koo | — | 0 | 28 Jul 2024 |
| Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism | Chenqi Guo, Shiwei Zhong, Xiaofeng Liu, Qianli Feng, Yinglong Ma | — | 1 | 30 Apr 2024 |
| Towards Real-Time Neural Video Codec for Cross-Platform Application Using Calibration Information | Kuan Tian, Yonghang Guan, Jin-Peng Xiang, Jun Zhang, Xiao Han, Wei Yang | — | 7 | 20 Sep 2023 |
| Long-Tailed Continual Learning For Visual Food Recognition | Jiangpeng He, Luotao Lin, Jack Ma, H. Eicher-Miller, F. Zhu, Fengqing M Zhu | — | 14 | 01 Jul 2023 |
| Towards Efficient Task-Driven Model Reprogramming with Foundation Models | Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, Yaowei Wang | — | 6 | 05 Apr 2023 |
| EVC: Towards Real-Time Neural Image Compression with Mask Decay | G. Wang, Jiahao Li, Bin Li, Yan Lu | — | 63 | 10 Feb 2023 |
| Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching | Jiaqing Zhang, Jie Lei, Weiying Xie, Yunsong Li, Wenxuan Wang | MQ | 18 | 31 Dec 2022 |
| IDa-Det: An Information Discrepancy-aware Distillation for 1-bit Detectors | Sheng Xu, Yanjing Li, Bo-Wen Zeng, Teli Ma, Baochang Zhang, Xianbin Cao, Penglei Gao, Jinhu Lv | — | 15 | 07 Oct 2022 |
| Skeleton-based Action Recognition via Adaptive Cross-Form Learning | Xuanhan Wang, Yan Dai, Lianli Gao, Jingkuan Song | — | 20 | 30 Jun 2022 |
| Localization Distillation for Object Detection | Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, Jun Wang, W. Zuo, Ming-Ming Cheng | — | 64 | 12 Apr 2022 |
| Practical Network Acceleration with Tiny Sets | G. Wang, Jianxin Wu | — | 8 | 16 Feb 2022 |
| Compressing Models with Few Samples: Mimicking then Replacing | Huanyu Wang, Junjie Liu, Xin Ma, Yang Yong, Z. Chai, Jianxin Wu | VLM, OffRL | 11 | 07 Jan 2022 |
| Optimizing for In-memory Deep Learning with Emerging Memory Technology | Zhehui Wang, Tao Luo, Rick Siow Mong Goh, Wei Zhang, Weng-Fai Wong | — | 1 | 01 Dec 2021 |
| Towards Efficient Post-training Quantization of Pre-trained Language Models | Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, M. Lyu | MQ | 47 | 30 Sep 2021 |
| Improved Baselines with Momentum Contrastive Learning | Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He | SSL | 3,371 | 09 Mar 2020 |
| Knowledge Distillation by On-the-Fly Native Ensemble | Xu Lan, Xiatian Zhu, S. Gong | — | 473 | 12 Jun 2018 |