ResearchTrend.AI
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis
12 December 2016 (arXiv: 1612.03928)

Papers citing "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer" (50 of 1,157 papers shown)
  • Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval (16 Mar 2023) [VLM]
    Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He
  • Knowledge Distillation from Single to Multi Labels: an Empirical Study (15 Mar 2023) [VLM]
    Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu
  • MetaMixer: A Regularization Strategy for Online Knowledge Distillation (14 Mar 2023) [KELM, MoE]
    Maorong Wang, L. Xiao, T. Yamasaki
  • MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation (14 Mar 2023) [VOS]
    Roy Miles, M. K. Yucel, Bruno Manganelli, Albert Saà-Garriga
  • Automatic Attention Pruning: Improving and Automating Model Pruning using Attentions (14 Mar 2023)
    Kaiqi Zhao, Animesh Jain, Ming Zhao
  • A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning (14 Mar 2023) [VLM]
    Kaiqi Zhao, Yitao Chen, Ming Zhao
  • SCPNet: Semantic Scene Completion on Point Cloud (13 Mar 2023)
    Zhaoyang Xia, You-Chen Liu, Xin Li, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao
  • Digital Twin-Assisted Knowledge Distillation Framework for Heterogeneous Federated Learning (10 Mar 2023) [FedML]
    Xiucheng Wang, Nan Cheng, Longfei Ma, Ruijin Sun, Rong Chai, Ning Lu
  • TAKT: Target-Aware Knowledge Transfer for Whole Slide Image Classification (10 Mar 2023)
    Conghao Xiong, Yi-Mou Lin, Hao Chen, Hao Zheng, Dong Wei, Yefeng Zheng, Joseph J. Y. Sung, Irwin King
  • Enhancing Low-resolution Face Recognition with Feature Similarity Knowledge Distillation (08 Mar 2023) [CVBM]
    Sungho Shin, Yeonguk Yu, Kyoobin Lee
  • Gradient-Guided Knowledge Distillation for Object Detectors (07 Mar 2023)
    Qizhen Lan, Qingze Tian
  • Students Parrot Their Teachers: Membership Inference on Model Distillation (06 Mar 2023) [FedML]
    Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini
  • Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation (28 Feb 2023)
    Gaurav Patel, Konda Reddy Mopuri, Qiang Qiu
  • Leveraging Angular Distributions for Improved Knowledge Distillation (27 Feb 2023)
    Eunyeong Jeon, Hongjun Choi, Ankita Shukla, Pavan Turaga
  • Graph-based Knowledge Distillation: A survey and experimental evaluation (27 Feb 2023)
    Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao
  • Distilling Calibrated Student from an Uncalibrated Teacher (22 Feb 2023) [FedML]
    Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra
  • Two-in-one Knowledge Distillation for Efficient Facial Forgery Detection (21 Feb 2023)
    Chu Zhou, Jiajun Huang, Daochang Liu, Chengbin Du, Siqi Ma, Surya Nepal, Chang Xu
  • Learning From Biased Soft Labels (16 Feb 2023) [FedML]
    Hua Yuan, Ning Xu, Yuge Shi, Xin Geng, Yong Rui
  • A lightweight network for photovoltaic cell defect detection in electroluminescence images based on neural architecture search and knowledge distillation (15 Feb 2023)
    Jinxia Zhang, Xinyi Chen, Haikun Wei, Kanjian Zhang
  • Take a Prior from Other Tasks for Severe Blur Removal (14 Feb 2023)
    Pei Wang, Danna Xue, Yu Zhu, Jinqiu Sun, Qingsen Yan, Sung-eui Yoon, Yanning Zhang
  • Anti-Compression Contrastive Facial Forgery Detection (13 Feb 2023) [CVBM]
    Jiajun Huang, Xinqi Zhu, Chengbin Du, Siqi Ma, Surya Nepal, Chang Xu
  • Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data (10 Feb 2023) [MQ]
    Zhijian Li, Biao Yang, Penghang Yin, Y. Qi, Jack Xin
  • Audio Representation Learning by Distilling Video as Privileged Information (06 Feb 2023)
    Amirhossein Hajavi, Ali Etemad
  • Crucial Semantic Classifier-based Adversarial Learning for Unsupervised Domain Adaptation (03 Feb 2023)
    Yumin Zhang, Yajun Gao, Hongliu Li, Ating Yin, Duzhen Zhang, Xiuyi Chen
  • Rethinking Soft Label in Label Distribution Learning Perspective (31 Jan 2023)
    Seungbum Hong, Jihun Yoon, Bogyu Park, Min-Kook Choi
  • Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy? (30 Jan 2023)
    Md Arafat Sultan
  • Supervision Complexity and its Role in Knowledge Distillation (28 Jan 2023)
    Hrayr Harutyunyan, A. S. Rawat, A. Menon, Seungyeon Kim, Surinder Kumar
  • BiBench: Benchmarking and Analyzing Network Binarization (26 Jan 2023) [MQ, AAML]
    Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Feng Yu, Xianglong Liu
  • Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference (23 Jan 2023)
    Souvik Kundu, Shun Lu, Yuke Zhang, Jacqueline Liu, Peter A. Beerel
  • RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation (19 Jan 2023) [AAML]
    Utkarsh Nath, Yancheng Wang, Yingzhen Yang
  • ACQ: Improving Generative Data-free Quantization Via Attention Correction (18 Jan 2023) [MQ]
    Jixing Li, Xiaozhou Guo, Benzhe Dai, Guoliang Gong, Min Jin, Gang Chen, Wenyu Mao, Huaxiang Lu
  • Dataset Distillation: A Comprehensive Review (17 Jan 2023) [DD]
    Ruonan Yu, Songhua Liu, Xinchao Wang
  • StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection (04 Jan 2023) [3DPC]
    Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai
  • On Deep Recurrent Reinforcement Learning for Active Visual Tracking of Space Noncooperative Objects (29 Dec 2022)
    D. Zhou, Guanghui Sun, Zhao-jie Zhang, Ligang Wu
  • Discriminator-Cooperated Feature Map Distillation for GAN Compression (29 Dec 2022)
    Tie Hu, Mingbao Lin, Lizhou You, Rongrong Ji
  • TiG-BEV: Multi-view BEV 3D Object Detection via Target Inner-Geometry Learning (28 Dec 2022) [3DPC, MDE]
    Pei-Kai Huang, L. Liu, Renrui Zhang, Song Zhang, Xin Xu, Bai-Qi Wang, G. Liu
  • Publishing Efficient On-device Models Increases Adversarial Vulnerability (28 Dec 2022) [AAML]
    Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
  • NeRN -- Learning Neural Representations for Neural Networks (27 Dec 2022) [3DH]
    Maor Ashkenazi, Zohar Rimon, Ron Vainshtein, Shir Levi, Elad Richardson, Pinchas Mintz, Eran Treister
  • Prototype-guided Cross-task Knowledge Distillation for Large-scale Models (26 Dec 2022) [VLM]
    Deng Li, Aming Wu, Yahong Han, Qingwen Tian
  • BD-KD: Balancing the Divergences for Online Knowledge Distillation (25 Dec 2022)
    Ibtihel Amara, N. Sepahvand, B. Meyer, W. Gross, J. Clark
  • T2-GNN: Graph Neural Networks for Graphs with Incomplete Features and Structure via Teacher-Student Distillation (24 Dec 2022)
    Cuiying Huo, Di Jin, Yawen Li, Dongxiao He, Yubin Yang, Lingfei Wu
  • In-Sensor & Neuromorphic Computing are all you need for Energy Efficient Computer Vision (21 Dec 2022)
    Gourav Datta, Zeyu Liu, Md. Abdullah-Al Kaiser, Souvik Kundu, Joe Mathai, Zihan Yin, Ajey P. Jacob, Akhilesh R. Jaiswal, Peter A. Beerel
  • Training Lightweight Graph Convolutional Networks with Phase-field Models (19 Dec 2022)
    H. Sahbi
  • Gait Recognition Using 3-D Human Body Shape Inference (18 Dec 2022) [CVBM, 3DH]
    Haidong Zhu, Zhao-Heng Zheng, Ramkant Nevatia
  • Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning? (16 Dec 2022) [ViT, 3DPC]
    Runpei Dong, Zekun Qi, Linfeng Zhang, Junbo Zhang, Jian‐Yuan Sun, Zheng Ge, Li Yi, Kaisheng Ma
  • Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition (11 Dec 2022) [ViT]
    Fanglei Xue, Qiangchang Wang, Zichang Tan, Zhongsong Ma, G. Guo
  • LEAD: Liberal Feature-based Distillation for Dense Retrieval (10 Dec 2022)
    Hao Sun, Xiao Liu, Yeyun Gong, Anlei Dong, Jing Lu, Yan Zhang, Linjun Yang, Rangan Majumder, Nan Duan
  • Enhancing Low-Density EEG-Based Brain-Computer Interfaces with Similarity-Keeping Knowledge Distillation (06 Dec 2022)
    Xin Huang, Sung-Yu Chen, Chun-Shu Wei
  • Leveraging Different Learning Styles for Improved Knowledge Distillation in Biomedical Imaging (06 Dec 2022)
    Usma Niyaz, A. Sambyal, Deepti R. Bathula
  • SimpleMind adds thinking to deep neural networks (02 Dec 2022) [AI4CE]
    Y. Choi, M. Wahi-Anwar, Matthew S. Brown