arXiv:1612.03928
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
12 December 2016
Sergey Zagoruyko, N. Komodakis
Papers citing "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer" (50 of 1,157 shown)
Heterogeneous Knowledge Distillation using Information Flow Modeling · Nikolaos Passalis, Maria Tzelepi, Anastasios Tefas · 02 May 2020
Reinforcement Learning with Augmented Data [OffRL] · Michael Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, A. Srinivas · 30 Apr 2020
PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning [CLL] · Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, Eduardo Valle · 28 Apr 2020
KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow · Balamurali Murugesan, S. Vijayarangan, Kaushik Sarveswaran, Keerthi Ram, M. Sivaprakasam · 11 Apr 2020
Inter-Region Affinity Distillation for Road Marking Segmentation · Yuenan Hou, Zheng Ma, Chunxiao Liu, Tak-Wai Hui, Chen Change Loy · 11 Apr 2020
LIAAD: Lightweight Attentive Angular Distillation for Large-scale Age-Invariant Face Recognition [CVBM] · Thanh-Dat Truong, C. Duong, Kha Gia Quach, Ngan Le, Tien D. Bui, Khoa Luu · 09 Apr 2020
S2A: Wasserstein GAN with Spatio-Spectral Laplacian Attention for Multi-Spectral Band Synthesis · Litu Rout, Indranil Misra, Manthira Moorthi Subbiah, D. Dhar · 08 Apr 2020
MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices [MQ] · Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou · 06 Apr 2020
Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing · Hengtong Hu, Lingxi Xie, Richang Hong, Qi Tian · 01 Apr 2020
Knowledge as Priors: Cross-Modal Knowledge Generalization for Datasets without Superior Knowledge · Long Zhao, Xi Peng, Yuxiao Chen, Mubbasir Kapadia, Dimitris N. Metaxas · 01 Apr 2020
Binary Neural Networks: A Survey [MQ] · Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, N. Sebe · 31 Mar 2020
Regularizing Class-wise Predictions via Self-knowledge Distillation · Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin · 31 Mar 2020
Edge Intelligence: Architectures, Challenges, and Applications · Dianlei Xu, Tong Li, Yong Li, Xiang Su, Sasu Tarkoma, Tao Jiang, Jon Crowcroft, Pan Hui · 26 Mar 2020
Training Binary Neural Networks with Real-to-Binary Convolutions [MQ] · Brais Martínez, Jing Yang, Adrian Bulat, Georgios Tzimiropoulos · 25 Mar 2020
Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives · Duo Li, Qifeng Chen · 24 Mar 2020
Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-task Learning · Lie Ju, Xin Wang, Xin Zhao, Huimin Lu, Dwarikanath Mahapatra, Paul Bonnington, Z. Ge · 24 Mar 2020
Distilling Knowledge from Graph Convolutional Networks · Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang · 23 Mar 2020
Efficient Crowd Counting via Structured Knowledge Transfer · Lingbo Liu, Jiaqi Chen, Hefeng Wu, Tianshui Chen, Guanbin Li, Liang Lin · 23 Mar 2020
Fine-grained Species Recognition with Privileged Pooling: Better Sample Efficiency Through Supervised Attention · Andrés C. Rodríguez, Stefano D'Aronco, Konrad Schindler, Jan Dirk Wegner · 20 Mar 2020
GAN Compression: Efficient Architectures for Interactive Conditional GANs [GAN] · Zhekai Zhang, Ji Lin, Yaoyao Ding, Zhijian Liu, Jun-Yan Zhu, Song Han · 19 Mar 2020
Collaborative Distillation for Ultra-Resolution Universal Style Transfer · Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang · 18 Mar 2020
Semi-supervised Contrastive Learning Using Partial Label Information [SSL] · Colin B. Hansen, V. Nath, Diego A. Mesa, Yuankai Huo, Bennett A. Landman, Thomas A. Lasko · 17 Mar 2020
Incremental Object Detection via Meta-Learning [ObjD, CLL, VLM] · K. J. Joseph, Jathushan Rajasegaran, Salman Khan, Fahad Shahbaz Khan, V. Balasubramanian · 17 Mar 2020
DEPARA: Deep Attribution Graph for Deep Knowledge Transferability · Mingli Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, Xiuming Zhang · 17 Mar 2020
SuperMix: Supervising the Mixing Data Augmentation · Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi · 10 Mar 2020
Knowledge distillation via adaptive instance normalization · Jing Yang, Brais Martínez, Adrian Bulat, Georgios Tzimiropoulos · 09 Mar 2020
Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network · Wonchul Son, Youngbin Kim, Wonseok Song, Youngsuk Moon, Wonjun Hwang · 09 Mar 2020
Long Short-Term Sample Distillation · Liang Jiang, Zujie Wen, Zhongping Liang, Yafang Wang, Gerard de Melo, Zhe Li, Liangzhuang Ma, Jiaxing Zhang, Xiaolong Li, Yuan Qi · 02 Mar 2020
Efficient Semantic Video Segmentation with Per-frame Inference [VOS] · Yifan Liu, Chunhua Shen, Changqian Yu, Jingdong Wang · 26 Feb 2020
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers [VLM] · Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou · 25 Feb 2020
Residual Knowledge Distillation · Mengya Gao, Yujun Shen, Quanquan Li, Chen Change Loy · 21 Feb 2020
Key Points Estimation and Point Instance Segmentation Approach for Lane Detection [3DPC] · Yeongmin Ko, Jiwon Jun, Shoaib Azam, Donghwuy Ko, M. Jeon, Witold Pedrycz · 16 Feb 2020
Multi-Task Incremental Learning for Object Detection [ObjD, VLM, CLL] · Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto · 13 Feb 2020
Subclass Distillation · Rafael Müller, Simon Kornblith, Geoffrey E. Hinton · 10 Feb 2020
Switchable Precision Neural Networks [MQ] · Luis Guerra, Bohan Zhuang, Ian Reid, Tom Drummond · 07 Feb 2020
Feature-map-level Online Adversarial Knowledge Distillation [GAN] · Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak · 05 Feb 2020
Widening and Squeezing: Towards Accurate and Efficient QNNs [MQ] · Chuanjian Liu, Kai Han, Yunhe Wang, Hanting Chen, Qi Tian, Chunjing Xu · 03 Feb 2020
Search for Better Students to Learn Distilled Knowledge · Jindong Gu, Volker Tresp · 30 Jan 2020
Evaluating Weakly Supervised Object Localization Methods Right [WSOL] · Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim · 21 Jan 2020
PDANet: Pyramid Density-aware Attention Net for Accurate Crowd Counting · Saeed K. Amirgholipour, Xiangjian He, W. Jia, Dadong Wang, Lei Liu · 16 Jan 2020
Theory In, Theory Out: The uses of social theory in machine learning for social science · J. Radford, K. Joseph · 09 Jan 2020
Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification · Liuyu Xiang, Guiguang Ding, Jungong Han · 06 Jan 2020
Data-Free Adversarial Distillation · Gongfan Fang, Mingli Song, Chengchao Shen, Xinchao Wang, Da Chen, Xiuming Zhang · 23 Dec 2019
DBP: Discrimination Based Block-Level Pruning for Deep Model Acceleration · Wenxiao Wang, Shuai Zhao, Minghao Chen, Jinming Hu, Deng Cai, Haifeng Liu · 21 Dec 2019
The State of Knowledge Distillation for Classification · Fabian Ruffy, K. Chahal · 20 Dec 2019
Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion · Hongxu Yin, Pavlo Molchanov, Zhizhong Li, J. Álvarez, Arun Mallya, Derek Hoiem, N. Jha, Jan Kautz · 18 Dec 2019
Cross-Modality Attention with Semantic Graph Embedding for Multi-Label Classification · Renchun You, Zhiyao Guo, Lei Cui, Xiang Long, Sid Ying-Ze Bao, Shilei Wen · 17 Dec 2019
Joint Architecture and Knowledge Distillation in CNN for Chinese Text Recognition · Zirui Wang, Jun Du · 17 Dec 2019
An Improving Framework of regularization for Network Compression [AI4CE] · E. Zhenqian, Weiguo Gao · 11 Dec 2019
QUEST: Quantized embedding space for transferring knowledge · Himalaya Jain, Spyros Gidaris, N. Komodakis, P. Pérez, Matthieu Cord · 03 Dec 2019