arXiv:1612.03928
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis
12 December 2016
Papers citing "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer"
50 of 1,157 citing papers shown:
Spatio-Temporal Representation Factorization for Video-based Person Re-Identification
Abhishek Aich, Meng Zheng, Srikrishna Karanam, Terrence Chen, Amit K. Roy-Chowdhury, Ziyan Wu · 25 Jul 2021 · 42/70/0

Multi-Label Image Classification with Contrastive Learning
Son D. Dao, Ethan Zhao, D.Q. Phung, Jianfei Cai · SSL · 24 Jul 2021 · 23/25/0

Exploring Set Similarity for Dense Self-supervised Representation Learning
Zhaoqing Wang, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, N. Wang, Biwei Huang, Tongliang Liu · SSL · 19 Jul 2021 · 33/44/0

Double Similarity Distillation for Semantic Image Segmentation
Yingchao Feng, Xian Sun, Wenhui Diao, Jihao Li, Xin Gao · 19 Jul 2021 · 24/62/0

Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression
Souvik Kundu, Gourav Datta, Massoud Pedram, Peter A. Beerel · 16 Jul 2021 · 23/14/0

Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Guosheng Lin · FaML · 16 Jul 2021 · 83/1,893/0

Noise Stability Regularization for Improving BERT Fine-tuning
Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, Jiebo Luo · 10 Jul 2021 · 19/44/0

Weight Reparametrization for Budget-Aware Network Pruning
Robin Dupont, H. Sahbi, Guillaume Michel · 08 Jul 2021 · 26/1/0

Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation
Bingchen Zhao, Kai Han · 07 Jul 2021 · 26/107/0

WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations
Peidong Liu, Zibin He, Xiyu Yan, Yong-jia Jiang, Shutao Xia, Feng Zheng, Maowei Hu · 07 Jul 2021 · 35/10/0

Deep Learning for Micro-expression Recognition: A Survey
Yante Li, Jinsheng Wei, Yang Liu, Janne Kauttonen, Guoying Zhao · 06 Jul 2021 · 43/61/0

Confidence Conditioned Knowledge Distillation
Sourav Mishra, Suresh Sundaram · 06 Jul 2021 · 15/1/0

On The Distribution of Penultimate Activations of Classification Networks
Minkyo Seo, Yoonho Lee, Suha Kwak · UQCV · 05 Jul 2021 · 18/4/0

Bag of Instances Aggregation Boosts Self-supervised Distillation
Haohang Xu, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Xinggang Wang, Wenrui Dai, H. Xiong, Qi Tian · SSL · 04 Jul 2021 · 33/21/0

Pool of Experts: Realtime Querying Specialized Knowledge in Massive Neural Networks
Hakbin Kim, Dong-Wan Choi · 03 Jul 2021 · 25/2/0

Revisiting Knowledge Distillation: An Inheritance and Exploration Framework
Zhen Huang, Xu Shen, Jun Xing, Tongliang Liu, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xiansheng Hua · 01 Jul 2021 · 38/27/0

Tackling Catastrophic Forgetting and Background Shift in Continual Semantic Segmentation
Arthur Douillard, Yifu Chen, Arnaud Dapogny, Matthieu Cord · CLL · 29 Jun 2021 · 30/21/0

Multi-layered Semantic Representation Network for Multi-label Image Classification
Xiwen Qu, Haoyang Che, Jun Huang, Linchuan Xu, Xiao Zheng · 22 Jun 2021 · 24/24/0

Knowledge Distillation via Instance-level Sequence Learning
Haoran Zhao, Xin Sun, Junyu Dong, Zihe Dong, Qiong Li · 21 Jun 2021 · 34/23/0

Cogradient Descent for Dependable Learning
Runqi Wang, Baochang Zhang, Lian Zhuo, QiXiang Ye, David Doermann · 20 Jun 2021 · 23/0/0

CompConv: A Compact Convolution Module for Efficient Feature Learning
Chen Zhang, Yinghao Xu, Yujun Shen · VLM, SSL · 19 Jun 2021 · 16/10/0

Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better
Gaurav Menghani · VLM, MedIm · 16 Jun 2021 · 23/367/0

Topology Distillation for Recommender System
SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu · 16 Jun 2021 · 13/42/0

Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation
Cody Blakeney, Nathaniel Huish, Yan Yan, Ziliang Zong · 15 Jun 2021 · 23/18/0

Dynamic Clone Transformer for Efficient Convolutional Neural Netwoks
Longqing Ye · ViT · 12 Jun 2021 · 25/0/0

Does Knowledge Distillation Really Work?
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, A. Wilson · FedML · 10 Jun 2021 · 32/215/0

Distilling Image Classifiers in Object Detectors
Shuxuan Guo, J. Álvarez, Mathieu Salzmann · VLM · 09 Jun 2021 · 30/8/0

BERT Learns to Teach: Knowledge Distillation with Meta Learning
Wangchunshu Zhou, Canwen Xu, Julian McAuley · 08 Jun 2021 · 36/87/0

Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model
Zehao Wang · 07 Jun 2021 · 19/43/0

Natural Statistics of Network Activations and Implications for Knowledge Distillation
Michael Rotman, Lior Wolf · 01 Jun 2021 · 4/0/0

Privileged Graph Distillation for Cold Start Recommendation
Shuai Wang, Kun Zhang, Le Wu, Haiping Ma, Richang Hong, Meng Wang · 31 May 2021 · 12/28/0

Fair Feature Distillation for Visual Recognition
S. Jung, Donggyu Lee, Taeeon Park, Taesup Moon · 27 May 2021 · 27/75/0

Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation
Lewei Yao, Renjie Pi, Hang Xu, Wei Zhang, Zhenguo Li, Tong Zhang · 27 May 2021 · 89/38/0

Revisiting Knowledge Distillation for Object Detection
Amin Banitalebi-Dehkordi · 22 May 2021 · 24/6/0

Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation
Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun · 19 May 2021 · 20/228/0

Graph-Free Knowledge Distillation for Graph Neural Networks
Xiang Deng, Zhongfei Zhang · 16 May 2021 · 34/65/0

Undistillable: Making A Nasty Teacher That CANNOT teach students
Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang · 16 May 2021 · 27/41/0

KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation
Mengqi Xue, Mingli Song, Xinchao Wang, Ying Chen, Xingen Wang, Xiuming Zhang · 10 May 2021 · 20/10/0

Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji · AAML · 03 May 2021 · 25/34/0

Semantic Relation Preserving Knowledge Distillation for Image-to-Image Translation
Zeqi Li, R. Jiang, P. Aarabi · GAN, VLM · 30 Apr 2021 · 36/28/0

Interpretable Embedding Procedure Knowledge Transfer via Stacked Principal Component Analysis and Graph Neural Network
Seunghyun Lee, B. Song · 28 Apr 2021 · 21/6/0

Self-supervised Spatial Reasoning on Multi-View Line Drawings
Siyuan Xiang, Anbang Yang, Yanfei Xue, Yaoqing Yang, Chen Feng · SSL, 3DPC · 27 Apr 2021 · 34/1/0

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim · MIACV · 26 Apr 2021 · 34/82/0

Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation
Mengyao Zhai, Lei Chen, Jiawei He, Megha Nawhal, Frederick Tung, Greg Mori · CLL · 24 Apr 2021 · 38/28/0

Distilling Audio-Visual Knowledge by Compositional Contrastive Learning
Yanbei Chen, Yongqin Xian, A. Sophia Koepke, Ying Shan, Zeynep Akata · 22 Apr 2021 · 80/82/0

Balanced Knowledge Distillation for Long-tailed Learning
Shaoyu Zhang, Chen Chen, Xiyuan Hu, Silong Peng · 21 Apr 2021 · 56/57/0

Orderly Dual-Teacher Knowledge Distillation for Lightweight Human Pose Estimation
Zhong-Qiu Zhao, Yao Gao, Yuchen Ge, Weidong Tian · 3DH · 21 Apr 2021 · 16/3/0

Voice2Mesh: Cross-Modal 3D Face Model Generation from Voices
Cho-Ying Wu, Ke Xu, Chin-Cheng Hsu, Ulrich Neumann · CVBM, 3DH · 21 Apr 2021 · 50/4/0

DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning
Yuting Gao, Jia-Xin Zhuang, Xiaowei Guo, Hao Cheng, Xing Sun, Ke Li, Feiyue Huang · 19 Apr 2021 · 59/40/0

Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia · 19 Apr 2021 · 155/424/0