Data-Free Learning of Student Networks
Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian
FedML · 2 April 2019 · arXiv:1904.01186

Papers citing "Data-Free Learning of Student Networks"

50 / 68 papers shown

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM · 12 May 2025

Forget the Data and Fine-Tuning! Just Fold the Network to Compress
Dong Wang, Haris Šikić, Lothar Thiele, O. Saukh
17 Feb 2025

A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning
Jun Bai, Yiliao Song, Di Wu, Atul Sajjanhar, Yong Xiang, Wei Zhou, Xiaohui Tao, Yan Li, Y. Li
FedML · 28 Oct 2024

Privacy-Preserving Student Learning with Differentially Private Data-Free Distillation
Bochao Liu, Jianghu Lu, Pengju Wang, Junjie Zhang, Dan Zeng, Zhenxing Qian, Shiming Ge
19 Sep 2024

Few-Shot Class-Incremental Learning with Non-IID Decentralized Data
Cuiwei Liu, Siang Xu, Huaijun Qiu, Jing Zhang, Zhi Liu, Liang Zhao
CLL · 18 Sep 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
DiffM · 05 Sep 2024

Learning Privacy-Preserving Student Networks via Discriminative-Generative Distillation
Shiming Ge, Bochao Liu, Pengju Wang, Yong Li, Dan Zeng
FedML · 04 Sep 2024

Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning
Jinglin Liang, Jin Zhong, Hanlin Gu, Zhongqi Lu, Xingxing Tang, Gang Dai, Shuangping Huang, Lixin Fan, Qiang Yang
DiffM · 02 Sep 2024

Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification
Yunyi Xuan, Weijie Chen, Shicai Yang, Di Xie, Luojun Lin, Yueting Zhuang
VLM · 21 Jul 2024

FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning
Yinlin Zhu, Xunkai Li, Zhengyu Wu, Di Wu, Miao Hu, Ronghua Li
FedML · 22 Apr 2024

Data-free Knowledge Distillation for Fine-grained Visual Categorization
Renrong Shao, Wei Zhang, Jianhua Yin, Jun Wang
18 Apr 2024

FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation
Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li
30 Sep 2023

Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang, Zhaoyu Chen, Jie M. Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi
31 Jul 2023

Mitigating Cross-client GANs-based Attack in Federated Learning
Hong Huang, Xinyu Lei, Tao Xiang
AAML · 25 Jul 2023

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency
Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
FedML · 20 Jul 2023

DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
Zhicong Yan, Shenghong Li, Ruijie Zhao, Yuan Tian, Yuanyuan Zhao
AAML · 13 Jun 2023

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li, Yuxuan Li, Penghai Zhao, Renjie Song, Xiang Li, Jian Yang
22 May 2023

A Comprehensive Survey on Source-free Domain Adaptation
Zhiqi Yu, Jingjing Li, Zhekai Du, Lei Zhu, H. Shen
TTA · 23 Feb 2023

Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning
Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
CLL · 16 Dec 2022

Scalable Collaborative Learning via Representation Sharing
Frédéric Berdoz, Abhishek Singh, Martin Jaggi, Ramesh Raskar
FedML · 20 Nov 2022

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh
21 Sep 2022

Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy
Jingru Li, Sheng Zhou, Liangcheng Li, Haishuai Wang, Zhi Yu, Jiajun Bu
29 Aug 2022

Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay
Huan Liu, Li Gu, Zhixiang Chi, Yang Wang, Yuanhao Yu, Jun Chen, Jingshan Tang
22 Jul 2022

Few-Shot Unlearning by Model Inversion
Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok
MU · 31 May 2022

IDEAL: Query-Efficient Data-Free Learning from Black-box Models
Jie M. Zhang, Chen Chen, Lingjuan Lyu
23 May 2022

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu
16 May 2022

Spot-adaptive Knowledge Distillation
Mingli Song, Ying Chen, Jingwen Ye
05 May 2022

DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao
ViT · 27 Apr 2022

Knowledge Distillation with the Reused Teacher Classifier
Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen
26 Mar 2022

Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan
FedML · 17 Mar 2022

DropNAS: Grouped Operation Dropout for Differentiable Architecture Search
Weijun Hong, Guilin Li, Weinan Zhang, Ruiming Tang, Yunhe Wang, Zhenguo Li, Yong Yu
OOD · 27 Jan 2022

GhostNets on Heterogeneous Devices via Cheap Operations
Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chunjing Xu, Enhua Wu, Qi Tian
10 Jan 2022

Data-Free Knowledge Transfer: A Survey
Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang
31 Dec 2021

TAGPerson: A Target-Aware Generation Pipeline for Person Re-identification
Kai Chen, Weihua Chen, Tao He, Rong Du, Fan Wang, Xiuyu Sun, Yuchen Guo, Guiguang Ding
28 Dec 2021

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed
01 Dec 2021

Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation
Jin Hong, Yudong Zhang, Weitian Chen
OOD · MedIm · 24 Nov 2021

Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples
Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee
MQ · 04 Nov 2021

Fine-grained Data Distribution Alignment for Post-Training Quantization
Mingliang Xu, Mingbao Lin, Yonghong Tian, Ke Li, Yunhang Shen, Rongrong Ji, Yongjian Wu
MQ · 09 Sep 2021

Memory-Free Generative Replay For Class-Incremental Learning
Xiaomeng Xin, Yiran Zhong, Yunzhong Hou, Jinjun Wang, Liang Zheng
01 Sep 2021

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data
Kuluhan Binici, N. Pham, T. Mitra, K. Leman
11 Aug 2021

Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning
James Smith, Yen-Chang Hsu, John C. Balloch, Yilin Shen, Hongxia Jin, Z. Kira
CLL · 17 Jun 2021

AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression
Baozhou Zhu, P. Hofstee, J. Peltenburg, Jinho Lee, Zaid Al-Ars
25 May 2021

Graph-Free Knowledge Distillation for Graph Neural Networks
Xiang Deng, Zhongfei Zhang
16 May 2021

Visualizing Adapted Knowledge in Domain Transfer
Yunzhong Hou, Liang Zheng
20 Apr 2021

See through Gradients: Image Batch Recovery via GradInversion
Hongxu Yin, Arun Mallya, Arash Vahdat, J. Álvarez, Jan Kautz, Pavlo Molchanov
FedML · 15 Apr 2021

Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression
Xin Ding, Z. J. Wang, Zuheng Xu, Z. Jane Wang, William J. Welch
07 Apr 2021

Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation
Xiaoyang Qu, Jianzong Wang, Jing Xiao
23 Feb 2021

Domain Impression: A Source Data Free Domain Adaptation Method
V. Kurmi, Venkatesh Subramanian, Vinay P. Namboodiri
TTA · 17 Feb 2021

Towards Zero-Shot Knowledge Distillation for Natural Language Processing
Ahmad Rashid, Vasileios Lioutas, Abbas Ghaddar, Mehdi Rezagholizadeh
31 Dec 2020

Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation
Gaurav Kumar Nayak, Konda Reddy Mopuri, Anirban Chakraborty
18 Nov 2020