arXiv: 1612.03928
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
12 December 2016
Sergey Zagoruyko
N. Komodakis
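The method this page tracks citations for is activation-based attention transfer: the teacher's spatial attention map, formed by summing the absolute values of feature activations over channels (raised to a power p), is matched by the student through an L2 penalty on the normalized maps. Below is a minimal PyTorch sketch of that loss, assuming student and teacher feature maps with matching spatial dimensions; the function and tensor names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of activation-based attention transfer
# (Zagoruyko & Komodakis, arXiv:1612.03928). Names and the
# squared-L2 reduction are illustrative implementation choices.
import torch
import torch.nn.functional as F

def attention_map(fm: torch.Tensor, p: int = 2) -> torch.Tensor:
    """Collapse a (N, C, H, W) feature map into a per-sample spatial
    attention map: sum |activation|^p over channels, flatten, and
    L2-normalize."""
    am = fm.abs().pow(p).sum(dim=1).flatten(1)  # (N, H*W)
    return F.normalize(am, p=2, dim=1)

def at_loss(fm_s: torch.Tensor, fm_t: torch.Tensor, p: int = 2) -> torch.Tensor:
    """Distance between normalized student and teacher attention maps;
    in practice this is added to the task loss with a weight beta."""
    return (attention_map(fm_s, p) - attention_map(fm_t, p)).pow(2).mean()
```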
Papers citing "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer"
50 / 1,157 papers shown
Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos
Akshay Paruchuri
S. Ehrenstein
Shuxian Wang
Inbar Fried
Stephen M. Pizer
Marc Niethammer
Roni Sengupta
MDE
50
6
0
26 Mar 2024
Learning to Project for Cross-Task Knowledge Distillation
Dylan Auty
Roy Miles
Benedikt Kolbeinsson
K. Mikolajczyk
47
0
0
21 Mar 2024
Ranking Distillation for Open-Ended Video Question Answering with Insufficient Labels
Tianming Liang
Chaolei Tan
Beihao Xia
Wei-Shi Zheng
Jianfang Hu
36
1
0
21 Mar 2024
REAL: Representation Enhanced Analytic Learning for Exemplar-free Class-incremental Learning
Run He
Huiping Zhuang
Di Fang
Yizhu Chen
Kai Tong
Cen Chen
43
1
0
20 Mar 2024
Scale Decoupled Distillation
Shicai Wei
52
4
0
20 Mar 2024
LNPT: Label-free Network Pruning and Training
Jinying Xiao
Ping Li
Zhe Tang
Jie Nie
43
2
0
19 Mar 2024
Self-Supervised Quantization-Aware Knowledge Distillation
Kaiqi Zhao
Ming Zhao
MQ
38
4
0
17 Mar 2024
Histo-Genomic Knowledge Distillation For Cancer Prognosis From Histopathology Whole Slide Images
Zhikang Wang
Yumeng Zhang
Yingxue Xu
S. Imoto
Hao Chen
Jiangning Song
25
6
0
15 Mar 2024
AutoDFP: Automatic Data-Free Pruning via Channel Similarity Reconstruction
Siqi Li
Jun Chen
Jingyang Xiang
Chengrui Zhu
Yong-Jin Liu
44
0
0
13 Mar 2024
LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving
Sicen Guo
Zhiyuan Wu
Qijun Chen
Ioannis Pitas
Rui Fan
45
1
0
13 Mar 2024
Distilling the Knowledge in Data Pruning
Emanuel Ben-Baruch
Adam Botach
Igor Kviatkovsky
Manoj Aggarwal
Gérard Medioni
38
1
0
12 Mar 2024
V_kD: Improving Knowledge Distillation using Orthogonal Projections
Roy Miles
Ismail Elezi
Jiankang Deng
52
10
0
10 Mar 2024
Frequency Attention for Knowledge Distillation
Cuong Pham
Van-Anh Nguyen
Trung Le
Dinh Q. Phung
Gustavo Carneiro
Thanh-Toan Do
35
16
0
09 Mar 2024
Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples
Eda Yilmaz
H. Keles
AAML
24
2
0
08 Mar 2024
Attention-guided Feature Distillation for Semantic Segmentation
Amir M. Mansourian
Arya Jalali
Rozhan Ahmadi
S. Kasaei
36
0
0
08 Mar 2024
On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder
Tingxu Han
Shenghan Huang
Ziqi Ding
Weisong Sun
Yebo Feng
...
Hanwei Qian
Cong Wu
Quanjun Zhang
Yang Liu
Zhenyu Chen
28
8
0
06 Mar 2024
A general approach to enhance the survivability of backdoor attacks by decision path coupling
Yufei Zhao
Dingji Wang
Bihuan Chen
Ziqian Chen
Xin Peng
AAML
32
0
0
05 Mar 2024
Align-to-Distill: Trainable Attention Alignment for Knowledge Distillation in Neural Machine Translation
Heegon Jin
Seonil Son
Jemin Park
Youngseok Kim
Hyungjong Noh
Yeonsoo Lee
41
2
0
03 Mar 2024
Logit Standardization in Knowledge Distillation
Shangquan Sun
Wenqi Ren
Jingzhi Li
Rui Wang
Xiaochun Cao
37
60
0
03 Mar 2024
On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving
Kaituo Feng
Changsheng Li
Dongchun Ren
Ye Yuan
Guoren Wang
38
6
0
02 Mar 2024
A Cognitive-Based Trajectory Prediction Approach for Autonomous Driving
Haicheng Liao
Yongkang Li
Zhenning Li
Chengyue Wang
Zhiyong Cui
Shengbo Eben Li
Chengzhong Xu
45
26
0
29 Feb 2024
Continuous Sign Language Recognition Based on Motor attention mechanism and frame-level Self-distillation
Qidan Zhu
Jing Li
Fei Yuan
Quan Gan
SLR
55
3
0
29 Feb 2024
Source-Guided Similarity Preservation for Online Person Re-Identification
Hamza Rami
Jhony H. Giraldo
Nicolas Winckler
Stéphane Lathuilière
CLL
31
3
0
23 Feb 2024
TIE-KD: Teacher-Independent and Explainable Knowledge Distillation for Monocular Depth Estimation
Sangwon Choi
Daejune Choi
Duksu Kim
37
4
0
22 Feb 2024
Data Distribution Distilled Generative Model for Generalized Zero-Shot Recognition
Yijie Wang
Mingjian Hong
Luwen Huangfu
Shengyue Huang
SyDa
44
6
0
18 Feb 2024
GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation
Ayan Banerjee
Sanket Biswas
Josep Lladós
Umapada Pal
51
2
0
17 Feb 2024
On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models
Juliette Marrie
Michael Arbel
Julien Mairal
Diane Larlus
VLM
MQ
48
1
0
17 Feb 2024
Knowledge Distillation Based on Transformed Teacher Matching
Kaixiang Zheng
En-Hui Yang
37
19
0
17 Feb 2024
Data-efficient Large Vision Models through Sequential Autoregression
Jianyuan Guo
Zhiwei Hao
Chengcheng Wang
Yehui Tang
Han Wu
Han Hu
Kai Han
Chang Xu
VLM
38
10
0
07 Feb 2024
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
Amin Parchami-Araghi
Moritz Böhle
Sukrut Rao
Bernt Schiele
FAtt
20
3
0
05 Feb 2024
Precise Knowledge Transfer via Flow Matching
Shitong Shao
Zhiqiang Shen
Linrui Gong
Huanran Chen
Xu Dai
34
2
0
03 Feb 2024
Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted Inference
Man-Jie Yuan
Zheng Zou
Wei Gao
22
0
0
02 Feb 2024
MoDE: A Mixture-of-Experts Model with Mutual Distillation among the Experts
Zhitian Xie
Yinger Zhang
Chenyi Zhuang
Qitao Shi
Zhining Liu
Jinjie Gu
Guannan Zhang
MoE
43
3
0
31 Jan 2024
Progressive Multi-task Anti-Noise Learning and Distilling Frameworks for Fine-grained Vehicle Recognition
Dichao Liu
21
0
0
25 Jan 2024
A Novel Garment Transfer Method Supervised by Distilled Knowledge of Virtual Try-on Model
N. Fang
Le-miao Qiu
Shuyou Zhang
Zili Wang
Kerui Hu
Jianrong Tan
30
1
0
23 Jan 2024
Rethinking Centered Kernel Alignment in Knowledge Distillation
Zikai Zhou
Yunhang Shen
Shitong Shao
Linrui Gong
Shaohui Lin
24
1
0
22 Jan 2024
Knowledge Distillation on Spatial-Temporal Graph Convolutional Network for Traffic Prediction
Mohammad Izadi
M. Safayani
Abdolreza Mirzaei
11
3
0
22 Jan 2024
Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information
Linfeng Ye
Shayan Mohajer Hamidi
Renhao Tan
En-Hui Yang
VLM
37
14
0
16 Jan 2024
Faster ISNet for Background Bias Mitigation on Deep Neural Networks
P. R. Bassi
S. Decherchi
Andrea Cavalli
25
0
0
16 Jan 2024
Convolutional Neural Network Compression via Dynamic Parameter Rank Pruning
Manish Sharma
Jamison Heard
Eli Saber
Panos P. Markopoulos
31
1
0
15 Jan 2024
A Deep Hierarchical Feature Sparse Framework for Occluded Person Re-Identification
Yihu Song
Shuaishi Liu
33
1
0
15 Jan 2024
Graph Relation Distillation for Efficient Biomedical Instance Segmentation
Xiaoyu Liu
Yueyi Zhang
Zhiwei Xiong
Wei Huang
Bo Hu
Xiaoyan Sun
Feng Wu
49
0
0
12 Jan 2024
Direct Distillation between Different Domains
Jialiang Tang
Shuo Chen
Gang Niu
Hongyuan Zhu
Qiufeng Wang
Chen Gong
Masashi Sugiyama
60
3
0
12 Jan 2024
Attention to detail: inter-resolution knowledge distillation
Rocío del Amor
Julio Silva-Rodríguez
Adrián Colomer
Valery Naranjo
43
0
0
11 Jan 2024
Source-Free Cross-Modal Knowledge Transfer by Unleashing the Potential of Task-Irrelevant Data
Jinjin Zhu
Yucheng Chen
Lin Wang
43
2
0
10 Jan 2024
CATFace: Cross-Attribute-Guided Transformer with Self-Attention Distillation for Low-Quality Face Recognition
Niloufar Alipour Talemi
Hossein Kashiani
Nasser M. Nasrabadi
ViT
CVBM
19
4
0
05 Jan 2024
Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing
Zhe Kong
Wentian Zhang
Tao Wang
Kaihao Zhang
Yuexiang Li
Xiaoying Tang
Wenhan Luo
AAML
CVBM
25
1
0
02 Jan 2024
Exploring Hyperspectral Anomaly Detection with Human Vision: A Small Target Aware Detector
Jitao Ma
Weiying Xie
Yunsong Li
43
1
0
02 Jan 2024
Adaptive Depth Networks with Skippable Sub-Paths
Woochul Kang
46
1
0
27 Dec 2023
Revisiting Knowledge Distillation under Distribution Shift
Songming Zhang
Ziyu Lyu
Xiaofeng Chen
32
1
0
25 Dec 2023