Cited By: arXiv 2201.03019

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
Kuluhan Binici, Shivam Aggarwal, N. Pham, K. Leman, T. Mitra
9 January 2022 · [TTA]
Papers citing "Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay" (23 of 23 papers shown):
- DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture. Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie. [DiffM] (05 Sep 2024)
- Towards Synchronous Memorizability and Generalizability with Site-Modulated Diffusion Replay for Cross-Site Continual Segmentation. Dunyuan Xu, Xi Wang, Jingyang Zhang, Pheng-Ann Heng. [MedIm, CLL] (26 Jun 2024)
- Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data. Kuluhan Binici, N. Pham, T. Mitra, K. Leman. (11 Aug 2021)
- Contrastive Model Inversion for Data-Free Knowledge Distillation. Gongfan Fang, Mingli Song, Xinchao Wang, Chen Shen, Xingen Wang, Xiuming Zhang. (18 May 2021)
- Online Continual Learning in Image Classification: An Empirical Survey. Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo J. Kim, Scott Sanner. [VLM, CLL] (25 Jan 2021)
- Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation. Gaurav Kumar Nayak, Konda Reddy Mopuri, Anirban Chakraborty. (18 Nov 2020)
- Adversarial Self-Supervised Data-Free Distillation for Text Classification. Xinyin Ma, Yongliang Shen, Gongfan Fang, Chen Chen, Chenghao Jia, Weiming Lu. (10 Oct 2020)
- Membership Leakage in Label-Only Exposures. Zheng Li, Yang Zhang. (30 Jul 2020)
- Decision-Making with Auto-Encoding Variational Bayes. Romain Lopez, Pierre Boyeau, Nir Yosef, Michael I. Jordan, Jeffrey Regier. [BDL] (17 Feb 2020)
- DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier. Sravanti Addepalli, Gaurav Kumar Nayak, Anirban Chakraborty, R. Venkatesh Babu. (27 Dec 2019)
- Data-Free Adversarial Distillation. Gongfan Fang, Mingli Song, Chengchao Shen, Xinchao Wang, Da Chen, Xiuming Zhang. (23 Dec 2019)
- Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion. Hongxu Yin, Pavlo Molchanov, Zhizhong Li, J. Álvarez, Arun Mallya, Derek Hoiem, N. Jha, Jan Kautz. (18 Dec 2019)
- The Knowledge Within: Methods for Data-Free Model Compression. Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry. (03 Dec 2019)
- Zero-shot Knowledge Transfer via Adversarial Belief Matching. P. Micaelli, Amos Storkey. (23 May 2019)
- Zero-Shot Knowledge Distillation in Deep Networks. Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty. (20 May 2019)
- Data-Free Learning of Student Networks. Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian. [FedML] (02 Apr 2019)
- Incremental Classifier Learning with Generative Adversarial Networks. Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, Zhengyou Zhang, Y. Fu. [GAN] (02 Feb 2018)
- Continual Learning with Deep Generative Replay. Hanul Shin, Jung Kwon Lee, Jaehong Kim, Jiwon Kim. [KELM, CLL] (24 May 2017)
- Neural Style Transfer: A Review. Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, Xiuming Zhang. (11 May 2017)
- iCaRL: Incremental Classifier and Representation Learning. Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, G. Sperl, Christoph H. Lampert. [CLL, OOD] (23 Nov 2016)
- Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. [MedIm] (10 Dec 2015)
- Distilling the Knowledge in a Neural Network. Geoffrey E. Hinton, Oriol Vinyals, J. Dean. [FedML] (09 Mar 2015)
- An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks. Ian Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, Yoshua Bengio. (21 Dec 2013)