Data-Free Knowledge Distillation for Deep Neural Networks
19 October 2017
Raphael Gontijo-Lopes, Stefano Fenu, Thad Starner

Papers citing "Data-Free Knowledge Distillation for Deep Neural Networks"

48 papers

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang · VLM · 12 May 2025

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
S. Joshi, Jiayi Ni, Baharan Mirzasoleiman · DD · 03 Oct 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie · DiffM · 05 Sep 2024

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu · 08 Aug 2023

Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang, Zhaoyu Chen, Jie M. Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi · 31 Jul 2023

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning
Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, Dacheng Tao · 28 May 2023

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li, Yuxuan Li, Penghai Zhao, Renjie Song, Xiang Li, Jian Yang · 22 May 2023

Self-discipline on multiple channels
Jiutian Zhao, Liangchen Luo, Hao Wang · 27 Apr 2023

Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning
Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, Dacheng Tao · 20 Mar 2023

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang · DD · 17 Jan 2023

Topics in Contextualised Attention Embeddings
Mozhgan Talebpour, A. G. S. D. Herrera, Shoaib Jameel · 11 Jan 2023

Dataless Knowledge Fusion by Merging Weights of Language Models
Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, Pengxiang Cheng · FedML, MoMe · 19 Dec 2022

Scalable Collaborative Learning via Representation Sharing
Frédéric Berdoz, Abhishek Singh, Martin Jaggi, Ramesh Raskar · FedML · 20 Nov 2022

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh · 21 Sep 2022

Dense Depth Distillation with Out-of-Distribution Simulated Images
Junjie Hu, Chenyou Fan, Mete Ozay, Hualie Jiang, Tin Lun Lam · 26 Aug 2022

Factorizing Knowledge in Neural Networks
Xingyi Yang, Jingwen Ye, Xinchao Wang · MoMe · 04 Jul 2022

A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning
Da-Wei Zhou, Qiwen Wang, Han-Jia Ye, De-Chuan Zhan · 26 May 2022

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu · 16 May 2022

Data-Free Adversarial Knowledge Distillation for Graph Neural Networks
Yu-Lin Zhuang, Lingjuan Lyu, Chuan Shi, Carl Yang, Lichao Sun · 08 May 2022

DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao · ViT · 27 Apr 2022

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image
Yuki M. Asano, Aaqib Saeed · 01 Dec 2021

Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples
Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee · MQ · 04 Nov 2021

Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning
James Smith, Yen-Chang Hsu, John C. Balloch, Yilin Shen, Hongxia Jin, Z. Kira · CLL · 17 Jun 2021

AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression
Baozhou Zhu, P. Hofstee, J. Peltenburg, Jinho Lee, Zaid Al-Ars · 25 May 2021

Data-Free Knowledge Distillation for Heterogeneous Federated Learning
Zhuangdi Zhu, Junyuan Hong, Jiayu Zhou · FedML · 20 May 2021

Graph-Free Knowledge Distillation for Graph Neural Networks
Xiang Deng, Zhongfei Zhang · 16 May 2021

Visualizing Adapted Knowledge in Domain Transfer
Yunzhong Hou, Liang Zheng · 20 Apr 2021

Knowledge Distillation as Semiparametric Inference
Tri Dao, G. Kamath, Vasilis Syrgkanis, Lester W. Mackey · 20 Apr 2021

Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression
Xin Ding, Z. J. Wang, Zuheng Xu, Z. Jane Wang, William J. Welch · 07 Apr 2021

Efficient Encrypted Inference on Ensembles of Decision Trees
Kanthi Kiran Sarpatwar, Karthik Nandakumar, N. Ratha, J. Rayfield, Karthikeyan Shanmugam, Sharath Pankanti, Roman Vaculin · FedML · 05 Mar 2021

Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation
Xiaoyang Qu, Jianzong Wang, Jing Xiao · 23 Feb 2021

Towards Zero-Shot Knowledge Distillation for Natural Language Processing
Ahmad Rashid, Vasileios Lioutas, Abbas Ghaddar, Mehdi Rezagholizadeh · 31 Dec 2020

Data-Free Model Extraction
Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot · MIACV · 30 Nov 2020

Learnable Boundary Guided Adversarial Training
Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia · OOD, AAML · 23 Nov 2020

Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation
Gaurav Kumar Nayak, Konda Reddy Mopuri, Anirban Chakraborty · 18 Nov 2020

Robustness and Diversity Seeking Data-Free Knowledge Distillation
Pengchao Han, Jihong Park, Shiqiang Wang, Yejun Liu · 07 Nov 2020

Dataset Condensation with Gradient Matching
Bo Zhao, Konda Reddy Mopuri, Hakan Bilen · DD · 10 Jun 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · VLM · 09 Jun 2020

An Overview of Neural Network Compression
James O'Neill · AI4CE · 05 Jun 2020

Data-Free Network Quantization With Adversarial Knowledge Distillation
Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee · MQ · 08 May 2020

Towards Inheritable Models for Open-Set Domain Adaptation
Jogendra Nath Kundu, Naveen Venkat, Ambareesh Revanur, Rahul M. V., R. Venkatesh Babu · VLM · 09 Apr 2020

Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong · 31 Mar 2020

Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN
Jingwen Ye, Yixin Ji, Xinchao Wang, Xin Gao, Xiuming Zhang · 20 Mar 2020

And the Bit Goes Down: Revisiting the Quantization of Neural Networks
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou · MQ · 12 Jul 2019

Zero-shot Knowledge Transfer via Adversarial Belief Matching
P. Micaelli, Amos Storkey · 23 May 2019

Zero-Shot Knowledge Distillation in Deep Networks
Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty · 20 May 2019

SlimNets: An Exploration of Deep Model Compression and Acceleration
Ini Oguntola, Subby Olubeko, Chris Sweeney · 01 Aug 2018

Few-shot learning of neural networks from scratch by pseudo example optimization
Akisato Kimura, Zoubin Ghahramani, Koh Takeuchi, Tomoharu Iwata, N. Ueda · 08 Feb 2018