ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Fast Knowledge Distillation Framework for Visual Recognition

2 December 2021
Zhiqiang Shen, Eric P. Xing
VLM
ArXiv (abs) · PDF · HTML · GitHub (188★)

Papers citing "A Fast Knowledge Distillation Framework for Visual Recognition"

50 of 53 papers shown
CONCORD: Concept-Informed Diffusion for Dataset Distillation
Jianyang Gu, Haonan Wang, Ruoxi Jia, Saeed Vahidian, Vyacheslav Kungurtsev, Wei Jiang, Yiran Chen
DiffM, DD · 922 · 0 · 0 · 23 May 2025

FDBPL: Faster Distillation-Based Prompt Learning for Region-Aware Vision-Language Models Adaptation
Zherui Zhang, Jiaxin Wu, Changwei Wang, Rongtao Xu, Longzhao Huang, Wenhao Xu, Wenbo Xu, Li Guo, Shibiao Xu
VLM, VP · 128 · 0 · 0 · 23 May 2025

A Label is Worth a Thousand Images in Dataset Distillation
Tian Qin, Zhiwei Deng, David Alvarez-Melis
DD · 167 · 13 · 0 · 15 Jun 2024

CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu
VLM · 112 · 1 · 0 · 22 Apr 2024

ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement
Chaojian Li, Wenwan Chen, Jiayi Yuan, Yingyan Lin, Ashutosh Sabharwal
74 · 0 · 0 · 19 Mar 2023

Sliced Recursive Transformer
Zhiqiang Shen, Zechun Liu, Eric P. Xing
ViT · 49 · 27 · 0 · 09 Nov 2021

ResNet strikes back: An improved training procedure in timm
Ross Wightman, Hugo Touvron, Hervé Jégou
AI4TS · 260 · 497 · 0 · 01 Oct 2021

Does Knowledge Distillation Really Work?
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, A. Wilson
FedML · 69 · 221 · 0 · 10 Jun 2021

Knowledge distillation: A good teacher is patient and consistent
Lucas Beyer, Xiaohua Zhai, Amelie Royer, L. Markeeva, Rohan Anil, Alexander Kolesnikov
VLM · 107 · 296 · 0 · 09 Jun 2021

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
700 · 6,121 · 0 · 29 Apr 2021

Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
Zhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, Marios Savvides
51 · 76 · 0 · 01 Apr 2021

EfficientNetV2: Smaller Models and Faster Training
Mingxing Tan, Quoc V. Le
EgoV · 122 · 2,714 · 0 · 01 Apr 2021

Rethinking Spatial Dimensions of Vision Transformers
Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh
ViT · 506 · 582 · 0 · 30 Mar 2021

S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration
Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides
UQCV, SSL, MQ · 295 · 22 · 0 · 17 Feb 2021

Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels
Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun
482 · 147 · 0 · 13 Jan 2021

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
SSL · 301 · 194 · 0 · 12 Jan 2021

Training data-efficient image transformers & distillation through attention
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
ViT · 389 · 6,793 · 0 · 23 Dec 2020

MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Zhiqiang Shen, Marios Savvides
59 · 63 · 0 · 17 Sep 2020

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin
OCL, SSL · 249 · 4,097 · 0 · 17 Jun 2020

Are we done with ImageNet?
Lucas Beyer, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, Aaron van den Oord
VLM · 126 · 405 · 0 · 12 Jun 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM · 110 · 2,976 · 0 · 09 Jun 2020

Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
SSL · 486 · 3,442 · 0 · 09 Mar 2020

ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions
Zechun Liu, Zhiqiang Shen, Marios Savvides, Kwang-Ting Cheng
MQ · 143 · 354 · 0 · 07 Mar 2020

Feature-map-level Online Adversarial Knowledge Distillation
Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak
GAN · 81 · 131 · 0 · 05 Feb 2020

Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
Hongxu Yin, Pavlo Molchanov, Zhizhong Li, J. Álvarez, Arun Mallya, Derek Hoiem, N. Jha, Jan Kautz
72 · 569 · 0 · 18 Dec 2019

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, …, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
ODL · 523 · 42,559 · 0 · 03 Dec 2019

Self-training with Noisy Student improves ImageNet classification
Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, Quoc V. Le
NoLa · 312 · 2,392 · 0 · 11 Nov 2019

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola
151 · 1,052 · 0 · 23 Oct 2019

RandAugment: Practical automated data augmentation with a reduced search space
E. D. Cubuk, Barret Zoph, Jonathon Shlens, Quoc V. Le
MQ · 241 · 3,503 · 0 · 30 Sep 2019

GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing
Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, …, Hang Zhang, Zhi-Li Zhang, Zhongyue Zhang, Shuai Zheng, Yi Zhu
VLM, BDL · 74 · 197 · 0 · 09 Jul 2019

When Does Label Smoothing Help?
Rafael Müller, Simon Kornblith, Geoffrey E. Hinton
UQCV · 207 · 1,953 · 0 · 06 Jun 2019

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Mingxing Tan, Quoc V. Le
3DV, MedIm · 142 · 18,168 · 0 · 28 May 2019

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
FedML · 76 · 863 · 0 · 17 May 2019

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Y. Yoo
OOD · 620 · 4,798 · 0 · 13 May 2019

Relational Knowledge Distillation
Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho
68 · 1,420 · 0 · 10 Apr 2019

Do ImageNet Classifiers Generalize to ImageNet?
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar
OOD, SSeg, VLM · 118 · 1,726 · 0 · 13 Feb 2019

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search
Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, Kurt Keutzer
MQ · 100 · 1,305 · 0 · 09 Dec 2018

MEAL: Multi-Model Ensemble via Adversarial Learning
Zhiqiang Shen, Zhankui He, Xiangyang Xue
AAML, FedML · 65 · 146 · 0 · 06 Dec 2018

Dataset Distillation
Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros
DD · 78 · 297 · 0 · 27 Nov 2018

mixup: Beyond Empirical Risk Minimization
Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz
NoLa · 282 · 9,797 · 0 · 25 Oct 2017

Large Batch Training of Convolutional Networks
Yang You, Igor Gitman, Boris Ginsburg
ODL · 139 · 852 · 0 · 13 Aug 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
3DV · 728 · 132,199 · 0 · 12 Jun 2017

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH · 1.2K · 20,880 · 0 · 17 Apr 2017

Mask R-CNN
Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross B. Girshick
ObjD · 352 · 27,230 · 0 · 20 Mar 2017

Feature Pyramid Networks for Object Detection
Tsung-Yi Lin, Piotr Dollár, Ross B. Girshick, Kaiming He, Bharath Hariharan, Serge J. Belongie
ObjD · 483 · 22,134 · 0 · 09 Dec 2016

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 321 · 20,070 · 0 · 07 Oct 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm · 2.2K · 194,322 · 0 · 10 Dec 2015

Rethinking the Inception Architecture for Computer Vision
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna
3DV, BDL · 886 · 27,412 · 0 · 02 Dec 2015

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami
AAML · 113 · 3,077 · 0 · 14 Nov 2015

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
Shaoqing Ren, Kaiming He, Ross B. Girshick, Jian Sun
AIMat, ObjD · 520 · 62,360 · 0 · 04 Jun 2015
