Data-Free Network Quantization With Adversarial Knowledge Distillation
8 May 2020
Yoojin Choi
Jihwan P. Choi
Mostafa El-Khamy
Jungwon Lee
    MQ

Papers citing "Data-Free Network Quantization With Adversarial Knowledge Distillation"

50 / 69 papers shown
CAE-DFKD: Bridging the Transferability Gap in Data-Free Knowledge Distillation
Zherui Zhang
Changwei Wang
Rongtao Xu
Wenyuan Xu
Shibiao Xu
Yu Zhang
Li Guo
41
1
0
30 Apr 2025
Knowledge Distillation: Enhancing Neural Network Compression with Integrated Gradients
David E. Hernandez
J. Chang
Torbjörn E. M. Nordling
58
0
0
17 Mar 2025
Defense Against Model Stealing Based on Account-Aware Distribution Discrepancy
Jian-Ping Mei
Weibin Zhang
Jie Chen
X. Zhang
Tiantian Zhu
AAML
50
0
0
16 Mar 2025
Toward Efficient Data-Free Unlearning
Chenhao Zhang
Shaofei Shen
Weitong Chen
Miao Xu
MU
69
0
0
18 Dec 2024
Relation-Guided Adversarial Learning for Data-free Knowledge Transfer
Yingping Liang
Ying Fu
72
1
0
16 Dec 2024
Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation
Minh-Tuan Tran
Trung Le
Xuan-May Le
Jianfei Cai
Mehrtash Harandi
Dinh Q. Phung
76
2
0
26 Nov 2024
Data Generation for Hardware-Friendly Post-Training Quantization
Lior Dikstein
Ariel Lapid
Arnon Netzer
H. Habi
MQ
154
0
0
29 Oct 2024
A method of using RSVD in residual calculation of LowBit GEMM
Hongyaoxing Gu
MQ
35
0
0
27 Sep 2024
DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang
Miao Zhang
Yuzhang Shang
Jianlong Wu
Yan Yan
Liqiang Nie
DiffM
63
10
0
05 Sep 2024
Small Scale Data-Free Knowledge Distillation
He Liu
Yikai Wang
Huaping Liu
Fuchun Sun
Anbang Yao
21
8
0
12 Jun 2024
Data-free Knowledge Distillation for Fine-grained Visual Categorization
Renrong Shao
Wei Zhang
Jianhua Yin
Jun Wang
31
2
0
18 Apr 2024
Efficient Data-Free Model Stealing with Label Diversity
Yiyong Liu
Rui Wen
Michael Backes
Yang Zhang
AAML
41
2
0
29 Mar 2024
De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts
Yuzheng Wang
Dingkang Yang
Zhaoyu Chen
Yang Liu
Siao Liu
Wenqiang Zhang
Lihua Zhang
Lizhe Qi
32
6
0
28 Mar 2024
AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation
Zihao Tang
Zheqi Lv
Shengyu Zhang
Yifan Zhou
Xinyu Duan
Fei Wu
Kun Kuang
29
1
0
11 Mar 2024
Model Compression Techniques in Biometrics Applications: A Survey
Eduarda Caldeira
Pedro C. Neto
Marco Huber
Naser Damer
Ana F. Sequeira
40
11
0
18 Jan 2024
Direct Distillation between Different Domains
Jialiang Tang
Shuo Chen
Gang Niu
Hongyuan Zhu
Qiufeng Wang
Chen Gong
Masashi Sugiyama
55
3
0
12 Jan 2024
Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images
Logan Frank
Jim Davis
33
1
0
20 Oct 2023
Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models
Miaoxi Zhu
Qihuang Zhong
Li Shen
Liang Ding
Juhua Liu
Bo Du
Dacheng Tao
MQ
VLM
29
1
0
20 Oct 2023
Robustness-Guided Image Synthesis for Data-Free Quantization
Jianhong Bai
Yuchen Yang
Huanpeng Chu
Hualiang Wang
Zuo-Qiang Liu
Ruizhe Chen
Xiaoxuan He
Lianrui Mu
Chengfei Cai
Haoji Hu
DiffM
MQ
28
5
0
05 Oct 2023
NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation
Minh-Tuan Tran
Trung Le
Xuan-May Le
Mehrtash Harandi
Quan Hung Tran
Dinh Q. Phung
15
11
0
30 Sep 2023
Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers
N. Frumkin
Dibakar Gope
Diana Marculescu
MQ
41
16
0
21 Aug 2023
Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang
Zhaoyu Chen
Jie M. Zhang
Dingkang Yang
Zuhao Ge
Yang Liu
Siao Liu
Yunquan Sun
Wenqiang Zhang
Lizhe Qi
28
9
0
31 Jul 2023
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images
Jialiang Tang
Shuo Chen
Gang Niu
Masashi Sugiyama
Chenggui Gong
21
13
0
21 Jul 2023
Customizing Synthetic Data for Data-Free Student Learning
Shiya Luo
Defang Chen
Can Wang
14
2
0
10 Jul 2023
Data-Free Backbone Fine-Tuning for Pruned Neural Networks
Adrian Holzbock
Achyut Hegde
Klaus C. J. Dietmayer
Vasileios Belagiannis
17
0
0
22 Jun 2023
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li
Yuxuan Li
Penghai Zhao
Renjie Song
Xiang Li
Jian Yang
34
19
0
22 May 2023
Model Conversion via Differentially Private Data-Free Distillation
Bochao Liu
Pengju Wang
Shikun Li
Dan Zeng
Shiming Ge
FedML
18
3
0
25 Apr 2023
A Survey on Approximate Edge AI for Energy Efficient Autonomous Driving Services
Dewant Katare
Diego Perino
J. Nurmi
M. Warnier
Marijn Janssen
Aaron Yi Ding
34
36
0
13 Apr 2023
Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation
Yuzheng Wang
Zhaoyu Chen
Dingkang Yang
Pinxue Guo
Kaixun Jiang
Wenqiang Zhang
Lizhe Qi
AAML
27
6
0
21 Mar 2023
Data-Free Sketch-Based Image Retrieval
Abhra Chaudhuri
A. Bhunia
Yi-Zhe Song
Anjan Dutta
45
7
0
14 Mar 2023
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
Gaurav Patel
Konda Reddy Mopuri
Qiang Qiu
23
28
0
28 Feb 2023
Explicit and Implicit Knowledge Distillation via Unlabeled Data
Yuzheng Wang
Zuhao Ge
Zhaoyu Chen
Xiangjian Liu
Chuang Ma
Yunquan Sun
Lizhe Qi
44
10
0
17 Feb 2023
BOMP-NAS: Bayesian Optimization Mixed Precision NAS
David van Son
F. D. Putter
Sebastian Vogel
Henk Corporaal
MQ
27
3
0
27 Jan 2023
AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
Hyungmin Kim
Sungho Suh
Sunghyun Baek
Daehwan Kim
Daun Jeong
Hansang Cho
Junmo Kim
22
5
0
20 Nov 2022
CPT-V: A Contrastive Approach to Post-Training Quantization of Vision Transformers
N. Frumkin
Dibakar Gope
Diana Marculescu
ViT
MQ
26
1
0
17 Nov 2022
Long-Range Zero-Shot Generative Deep Network Quantization
Yan Luo
Yangcheng Gao
Zhao Zhang
Haijun Zhang
Mingliang Xu
Meng Wang
MQ
31
9
0
13 Nov 2022
Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization
Yoojin Choi
Mostafa El-Khamy
Jungwon Lee
GAN
24
1
0
26 Oct 2022
Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Kien Do
Hung Le
D. Nguyen
Dang Nguyen
Haripriya Harikumar
T. Tran
Santu Rana
Svetha Venkatesh
18
32
0
21 Sep 2022
Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy
Jingru Li
Sheng Zhou
Liangcheng Li
Haishuai Wang
Zhi Yu
Jiajun Bu
34
14
0
29 Aug 2022
QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization
Fadi Boutros
Naser Damer
Arjan Kuijper
CVBM
MQ
22
37
0
21 Jun 2022
Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training
Charbel Sakr
Steve Dai
Rangharajan Venkatesan
B. Zimmer
W. Dally
Brucek Khailany
MQ
13
41
0
13 Jun 2022
Few-Shot Unlearning by Model Inversion
Youngsik Yoon
Jinhwan Nam
Hyojeong Yun
Jaeho Lee
Dongwoo Kim
Jungseul Ok
MU
22
17
0
31 May 2022
CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing
Zhiwei Hao
Yong Luo
Zhi Wang
Han Hu
J. An
39
27
0
24 May 2022
Self-distilled Knowledge Delegator for Exemplar-free Class Incremental Learning
Fanfan Ye
Liang Ma
Qiaoyong Zhong
Di Xie
Shiliang Pu
BDL
CLL
21
2
0
23 May 2022
It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher
Kanghyun Choi
Hye Yoon Lee
Deokki Hong
Joonsang Yu
Noseong Park
Youngsok Kim
Jinho Lee
MQ
35
31
0
31 Mar 2022
SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation
Cong Guo
Yuxian Qiu
Jingwen Leng
Xiaotian Gao
Chen Zhang
Yunxin Liu
Fan Yang
Yuhao Zhu
Minyi Guo
MQ
72
70
0
14 Feb 2022
Distillation from heterogeneous unlabeled collections
Jean-Michel Begon
Pierre Geurts
22
0
0
17 Jan 2022
Data-Free Knowledge Transfer: A Survey
Yuang Liu
Wei Zhang
Jun Wang
Jianyong Wang
32
48
0
31 Dec 2021
Few-shot Backdoor Defense Using Shapley Estimation
Jiyang Guan
Zhuozhuo Tu
Ran He
Dacheng Tao
AAML
31
53
0
30 Dec 2021
Up to 100× Faster Data-free Knowledge Distillation
Gongfan Fang
Kanya Mo
Xinchao Wang
Jie Song
Shitao Bei
Haofei Zhang
Xiuming Zhang
DD
36
4
0
12 Dec 2021