Large-Scale Generative Data-Free Distillation

10 December 2020 · arXiv:2012.05578
Liangchen Luo, Mark Sandler, Zi Lin, A. Zhmoginov, Andrew G. Howard

Papers citing "Large-Scale Generative Data-Free Distillation"

27 papers

DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning
Kangyang Luo, Shuai Wang, Y. Fu, Renrong Shao, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu
FedML · 12 Sep 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
DiffM · 05 Sep 2024

Small Scale Data-Free Knowledge Distillation
He Liu, Yikai Wang, Huaping Liu, Fuchun Sun, Anbang Yao
12 Jun 2024

De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts
Yuzheng Wang, Dingkang Yang, Zhaoyu Chen, Yang Liu, Siao Liu, Wenqiang Zhang, Lihua Zhang, Lizhe Qi
28 Mar 2024

Distilling the Knowledge in Data Pruning
Emanuel Ben-Baruch, Adam Botach, Igor Kviatkovsky, Manoj Aggarwal, Gérard Medioni
12 Mar 2024

Robustness-Guided Image Synthesis for Data-Free Quantization
Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuo-Qiang Liu, Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, Haoji Hu
DiffM · MQ · 05 Oct 2023

NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation
Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Quan Hung Tran, Dinh Q. Phung
30 Sep 2023

DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning
Kangyang Luo, Shuai Wang, Y. Fu, Xiang Li, Yunshi Lan, Minghui Gao
FedML · 24 Sep 2023

Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang, Zhaoyu Chen, Jie M. Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi
31 Jul 2023

Image Captions are Natural Prompts for Text-to-Image Models
Shiye Lei, Hao Chen, Senyang Zhang, Bo Zhao, Dacheng Tao
VLM · 17 Jul 2023

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li, Yuxuan Li, Penghai Zhao, Renjie Song, Xiang Li, Jian Yang
22 May 2023

Feature-Rich Audio Model Inversion for Data-Free Knowledge Distillation Towards General Sound Classification
Zuheng Kang, Yayun He, Jianzong Wang, Junqing Peng, Xiaoyang Qu, Jing Xiao
14 Mar 2023

A Prototype-Oriented Clustering for Domain Shift with Source Privacy
Korawat Tanwisuth, Shujian Zhang, Pengcheng He, Mingyuan Zhou
08 Feb 2023

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh
21 Sep 2022

Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy
Jingru Li, Sheng Zhou, Liangcheng Li, Haishuai Wang, Zhi Yu, Jiajun Bu
29 Aug 2022

Few-Shot Unlearning by Model Inversion
Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok
MU · 31 May 2022

CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing
Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, J. An
24 May 2022

Synthesizing Informative Training Samples with GAN
Bo Zhao, Hakan Bilen
DD · 15 Apr 2022

GradViT: Gradient Inversion of Vision Transformers
Ali Hatamizadeh, Hongxu Yin, H. Roth, Wenqi Li, Jan Kautz, Daguang Xu, Pavlo Molchanov
ViT · 22 Mar 2022

Conditional Generative Data-free Knowledge Distillation
Xinyi Yu, Ling Yan, Yang Yang, Libo Zhou, Linlin Ou
31 Dec 2021

Data-Free Knowledge Transfer: A Survey
Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang
31 Dec 2021

Up to 100× Faster Data-free Knowledge Distillation
Gongfan Fang, Kanya Mo, Xinchao Wang, Mingli Song, Shitao Bei, Haofei Zhang, Xiuming Zhang
DD · 12 Dec 2021

Towards Data-Free Domain Generalization
A. Frikha, Haokun Chen, Denis Krompass, Thomas Runkler, Volker Tresp
OOD · 09 Oct 2021

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data
Kuluhan Binici, N. Pham, T. Mitra, K. Leman
11 Aug 2021

Representation Consolidation for Training Expert Students
Zhizhong Li, Avinash Ravichandran, Charless C. Fowlkes, M. Polito, Rahul Bhotika, Stefano Soatto
16 Jul 2021

Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning
James Smith, Yen-Chang Hsu, John C. Balloch, Yilin Shen, Hongxia Jin, Z. Kira
CLL · 17 Jun 2021

Contrastive Model Inversion for Data-Free Knowledge Distillation
Gongfan Fang, Mingli Song, Xinchao Wang, Chen Shen, Xingen Wang, Xiuming Zhang
18 May 2021