Zero-Shot Knowledge Distillation in Deep Networks
Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty
arXiv:1905.08114 · 20 May 2019
Papers citing "Zero-Shot Knowledge Distillation in Deep Networks"
50 of 151 citing papers shown. Each entry lists title, authors, topic tags (where assigned), citation count, and date.
Zero-shot Adversarial Quantization · Yuang Liu, Wei Zhang, Jun Wang · [MQ] · 77 citations · 29 Mar 2021
ZS-IL: Looking Back on Learned Experiences For Zero-Shot Incremental Learning · Mozhgan Pourkeshavarz, Mohammad Sabokrou · [CLL, VLM] · 0 citations · 22 Mar 2021
Efficient Encrypted Inference on Ensembles of Decision Trees · Kanthi Kiran Sarpatwar, Karthik Nandakumar, Nalini Ratha, J. Rayfield, Karthikeyan Shanmugam, Sharath Pankanti, Roman Vaculin · [FedML] · 5 citations · 05 Mar 2021
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation · Kenneth Borup, L. Andersen · 14 citations · 25 Feb 2021
Domain Impression: A Source Data Free Domain Adaptation Method · V. Kurmi, Venkatesh Subramanian, Vinay P. Namboodiri · [TTA] · 150 citations · 17 Feb 2021
Self Regulated Learning Mechanism for Data Efficient Knowledge Distillation · Sourav Mishra, Suresh Sundaram · 1 citation · 14 Feb 2021
FedAUX: Leveraging Unlabeled Auxiliary Data in Federated Learning · Felix Sattler, Tim Korjakow, R. Rischke, Wojciech Samek · [FedML] · 115 citations · 04 Feb 2021
Generative Zero-shot Network Quantization · Xiangyu He, Qinghao Hu, Peisong Wang, Jian Cheng · [GAN, MQ] · 23 citations · 21 Jan 2021
Mining Data Impressions from Deep Models as Substitute for the Unavailable Training Data · Gaurav Kumar Nayak, Konda Reddy Mopuri, Saksham Jain, Anirban Chakraborty · 13 citations · 15 Jan 2021
Towards Zero-Shot Knowledge Distillation for Natural Language Processing · Ahmad Rashid, Vasileios Lioutas, Abbas Ghaddar, Mehdi Rezagholizadeh · 27 citations · 31 Dec 2020
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup · Guodong Xu, Ziwei Liu, Chen Change Loy · [UQCV] · 39 citations · 17 Dec 2020
Large-Scale Generative Data-Free Distillation · Liangchen Luo, Mark Sandler, Zi Lin, A. Zhmoginov, Andrew G. Howard · 43 citations · 10 Dec 2020
Progressive Network Grafting for Few-Shot Knowledge Distillation · Chengchao Shen, Xinchao Wang, Youtan Yin, Mingli Song, Sihui Luo, Xiuming Zhang · 46 citations · 09 Dec 2020
Generative Adversarial Simulator · Jonathan Raiman · [GAN] · 0 citations · 23 Nov 2020
Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation · Gaurav Kumar Nayak, Konda Reddy Mopuri, Anirban Chakraborty · 18 citations · 18 Nov 2020
Real-Time Decentralized knowledge Transfer at the Edge · Orpaz Goldstein, Mohammad Kachuee, Dereck Shiell, Majid Sarrafzadeh · 1 citation · 11 Nov 2020
Robustness and Diversity Seeking Data-Free Knowledge Distillation · Pengchao Han, Jihong Park, Shiqiang Wang, Yejun Liu · 12 citations · 07 Nov 2020
Data-free Knowledge Distillation for Segmentation using Data-Enriching GAN · Kaushal Bhogale · 3 citations · 02 Nov 2020
Black-Box Ripper: Copying black-box models using generative evolutionary algorithms · Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu · [MIACV, MLAU] · 43 citations · 21 Oct 2020
Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer · Chen Zhu, Zheng Xu, Ali Shafahi, Manli Shu, Amin Ghiasi, Tom Goldstein · [MQ] · 3 citations · 14 Oct 2020
Adversarial Self-Supervised Data-Free Distillation for Text Classification · Xinyin Ma, Yongliang Shen, Gongfan Fang, Chen Chen, Chenghao Jia, Weiming Lu · 24 citations · 10 Oct 2020
Neighbourhood Distillation: On the benefits of non end-to-end distillation · Laetitia Shao, Max Moroz, Elad Eban, Yair Movshovitz-Attias · [ODL] · 0 citations · 02 Oct 2020
ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles · Xiaoyong Yuan, Lei Ding, Lan Zhang, Xiaolin Li, D. Wu · 40 citations · 21 Sep 2020
Classification of Diabetic Retinopathy Using Unlabeled Data and Knowledge Distillation · Sajjad Abbasi, M. Hajabdollahi, P. Khadivi, N. Karimi, Roshank Roshandel, S. Shirani, S. Samavi · 18 citations · 01 Sep 2020
Source Free Domain Adaptation with Image Translation · Yunzhong Hou, Liang Zheng · 35 citations · 17 Aug 2020
Class-Incremental Domain Adaptation · Jogendra Nath Kundu, R. Venkatesh, Naveen Venkat, Ambareesh Revanur, R. Venkatesh Babu · [CLL] · 51 citations · 04 Aug 2020
Unsupervised Cross-Modal Alignment for Multi-Person 3D Pose Estimation · Jogendra Nath Kundu, Ambareesh Revanur, Govind V Waghmare, R. Venkatesh, R. Venkatesh Babu · [3DH] · 18 citations · 04 Aug 2020
Dynamic Knowledge Distillation for Black-box Hypothesis Transfer Learning · Yiqin Yu, Xu Min, Shiwan Zhao, Jing Mei, Fei Wang, Dongsheng Li, Kenney Ng, Shaochun Li · 2 citations · 24 Jul 2020
Knowledge Distillation in Deep Learning and its Applications · Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad · [FedML] · 76 citations · 17 Jul 2020
Differential Replication in Machine Learning · Irene Unceta, Jordi Nin, O. Pujol · [SyDa] · 1 citation · 15 Jul 2020
Ensemble Distillation for Robust Model Fusion in Federated Learning · Tao R. Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi · [FedML] · 1,015 citations · 12 Jun 2020
Dataset Condensation with Gradient Matching · Bo Zhao, Konda Reddy Mopuri, Hakan Bilen · [DD] · 477 citations · 10 Jun 2020
Knowledge Distillation: A Survey · Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · [VLM] · 2,851 citations · 09 Jun 2020
Why distillation helps: a statistical perspective · A. Menon, A. S. Rawat, Sashank J. Reddi, Seungyeon Kim, Sanjiv Kumar · [FedML] · 22 citations · 21 May 2020
Data-Free Network Quantization With Adversarial Knowledge Distillation · Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee · [MQ] · 119 citations · 08 May 2020
Adversarial Fooling Beyond "Flipping the Label" · Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu · [AAML] · 12 citations · 27 Apr 2020
Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation · Hao Cheng, Fanxu Meng, Ke Li, Huixiang Luo, Guangming Lu, Xing Sun, Feiyue Huang · 0 citations · 26 Apr 2020
Towards Inheritable Models for Open-Set Domain Adaptation · Jogendra Nath Kundu, Naveen Venkat, R. Ambareesh, V. RahulM., R. Venkatesh Babu · [VLM] · 117 citations · 09 Apr 2020
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model · Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong · 48 citations · 31 Mar 2020
Generative Low-bitwidth Data Free Quantization · Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jingyun Liang, Chuangrun Liang, Mingkui Tan · [MQ] · 126 citations · 07 Mar 2020
Self-Distillation Amplifies Regularization in Hilbert Space · H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett · 227 citations · 13 Feb 2020
Unlabeled Data Deployment for Classification of Diabetic Retinopathy Images Using Knowledge Transfer · Sajjad Abbasi, M. Hajabdollahi, N. Karimi, S. Samavi, S. Shirani · 0 citations · 09 Feb 2020
Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation · Sajjad Abbasi, M. Hajabdollahi, N. Karimi, S. Samavi · 28 citations · 31 Dec 2019
DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier · Sravanti Addepalli, Gaurav Kumar Nayak, Anirban Chakraborty, R. Venkatesh Babu · 36 citations · 27 Dec 2019
Data-Free Adversarial Distillation · Gongfan Fang, Mingli Song, Chengchao Shen, Xinchao Wang, Da Chen, Xiuming Zhang · 146 citations · 23 Dec 2019
The Knowledge Within: Methods for Data-Free Model Compression · Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry · 105 citations · 03 Dec 2019
Few Shot Network Compression via Cross Distillation · Haoli Bai, Jiaxiang Wu, Irwin King, Michael Lyu · [FedML] · 60 citations · 21 Nov 2019
Thieves on Sesame Street! Model Extraction of BERT-based APIs · Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer · [MIACV, MLAU] · 194 citations · 27 Oct 2019
A Meta-Learning Framework for Generalized Zero-Shot Learning · Vinay Kumar Verma, Dhanajit Brahma, Piyush Rai · [BDL, DiffM, VLM] · 19 citations · 10 Sep 2019
Membership Privacy for Machine Learning Models Through Knowledge Transfer · Virat Shejwalkar, Amir Houmansadr · 10 citations · 15 Jun 2019