ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Zhiqiang Shen, Marios Savvides
arXiv:2009.08453, 17 September 2020

Papers citing "MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks"

All 41 citing papers are shown.
Bezier Distillation
Ling Feng, SK Yang. 20 Mar 2025.

Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout, U. Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger. 19 Sep 2024.

CLIP-CID: Efficient CLIP Distillation via Cluster-Instance Discrimination
Kaicheng Yang, Tiancheng Gu, Xiang An, Haiqiang Jiang, Xiangzi Dai, Ziyong Feng, Weidong Cai, Jiankang Deng. 18 Aug 2024. [VLM]

FerKD: Surgical Label Adaptation for Efficient Distillation
Zhiqiang Shen. 29 Dec 2023.

Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang. 26 Oct 2023.

Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images
Logan Frank, Jim Davis. 20 Oct 2023.

FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines
Matthew Barker, Emma Kallina, D. Ashok, Katherine M. Collins, Ashley Casovan, Adrian Weller, Ameet Talwalkar, Valerie Chen, Umang Bhatt. 28 Jul 2023.

Multimodal Distillation for Egocentric Action Recognition
Gorjan Radevski, Dusan Grujicic, Marie-Francine Moens, Matthew Blaschko, Tinne Tuytelaars. 14 Jul 2023. [EgoV]

ERM++: An Improved Baseline for Domain Generalization
Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis, Kate Saenko, Bryan A. Plummer. 04 Apr 2023. [OOD]

Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement
Fartash Faghri, Hadi Pouransari, Sachin Mehta, Mehrdad Farajtabar, Ali Farhadi, Mohammad Rastegari, Oncel Tuzel. 15 Mar 2023.

Rethinking Soft Label in Label Distribution Learning Perspective
Seungbum Hong, Jihun Yoon, Bogyu Park, Min-Kook Choi. 31 Jan 2023.

Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning
Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Lu Yuan, Yu-Gang Jiang. 08 Dec 2022. [VGen]

Rethinking Out-of-Distribution Detection From a Human-Centric Perspective
Yao Zhu, YueFeng Chen, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Rongxin Jiang, Bo Zheng, Yao-wu Chen. 30 Nov 2022. [OODD]

Progressive Learning without Forgetting
Tao Feng, Hangjie Yuan, Mang Wang, Ziyuan Huang, Ang Bian, Jianzhou Zhang. 28 Nov 2022. [CLL, KELM]

28 Nov 2022
Join the High Accuracy Club on ImageNet with A Binary Neural Network
  Ticket
Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
Nianhui Guo
Joseph Bethge
Christoph Meinel
Haojin Yang
MQ
34
19
0
23 Nov 2022
Towards Understanding and Boosting Adversarial Transferability from a
  Distribution Perspective
Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective
Yao Zhu
YueFeng Chen
Xiaodan Li
Kejiang Chen
Yuan He
Xiang Tian
Bo Zheng
Yao-wu Chen
Qingming Huang
AAML
33
58
0
09 Oct 2022
KD-SCFNet: Towards More Accurate and Efficient Salient Object Detection via Knowledge Distillation
Jin Zhang, Qiuwei Liang, Yanjiao Shi. 03 Aug 2022.

PEA: Improving the Performance of ReLU Networks for Free by Using Progressive Ensemble Activations
Á. Utasi. 28 Jul 2022.

Knowledge Distillation of Transformer-based Language Models Revisited
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang. 29 Jun 2022. [VLM]

MobileOne: An Improved One Millisecond Mobile Backbone
Pavan Kumar Anasosalu Vasu, J. Gabriel, Jeff J. Zhu, Oncel Tuzel, Anurag Ranjan. 08 Jun 2022.

TransBoost: Improving the Best ImageNet Performance using Deep Transduction
Omer Belhasin, Guy Bar-Shalom, Ran El-Yaniv. 26 May 2022. [ViT]

Enhanced Performance of Pre-Trained Networks by Matched Augmentation Distributions
T. Ahmad, Mohsen Jafarzadeh, A. Dhamija, Ryan Rabinowitz, Steve Cruz, Chunchun Li, Terrance E. Boult. 19 Jan 2022.

Cross-modal Contrastive Distillation for Instructional Activity Anticipation
Zhengyuan Yang, Jingen Liu, Jing-ling Huang, Xiaodong He, Tao Mei, Chenliang Xu, Jiebo Luo. 18 Jan 2022.

Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space
Arnav Chavan, Zhiqiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric P. Xing. 03 Jan 2022. [ViT]

A Fast Knowledge Distillation Framework for Visual Recognition
Zhiqiang Shen, Eric P. Xing. 02 Dec 2021. [VLM]

Sliced Recursive Transformer
Zhiqiang Shen, Zechun Liu, Eric P. Xing. 09 Nov 2021. [ViT]

Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou. 01 Nov 2021. [MQ]

DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers
Changlin Li, Guangrun Wang, Bing Wang, Xiaodan Liang, Zhihui Li, Xiaojun Chang. 21 Sep 2021.

Knowledge Distillation: A Good Teacher is Patient and Consistent
Lucas Beyer, Xiaohua Zhai, Amelie Royer, L. Markeeva, Rohan Anil, Alexander Kolesnikov. 09 Jun 2021. [VLM]

Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
Zhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, Marios Savvides. 01 Apr 2021.

Dynamic Slimmable Network
Changlin Li, Guangrun Wang, Bing Wang, Xiaodan Liang, Zhihui Li, Xiaojun Chang. 24 Mar 2021.

BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, Xiaojun Chang. 23 Mar 2021. [ViT]

Student Network Learning via Evolutionary Knowledge Distillation
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge. 23 Mar 2021.

Densely Nested Top-Down Flows for Salient Object Detection
Chaowei Fang, Haibin Tian, Dingwen Zhang, Qiang Zhang, Jungong Han, Junwei Han. 18 Feb 2021.

S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration
Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides. 17 Feb 2021. [UQCV, SSL, MQ]

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu. 12 Jan 2021. [SSL]

Concept Generalization in Visual Representation Learning
Mert Bulent Sariyildiz, Yannis Kalantidis, Diane Larlus, Alahari Karteek. 10 Dec 2020. [SSL]

NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration
Zhengang Li, Geng Yuan, Wei Niu, Pu Zhao, Yanyu Li, ..., Sijia Liu, Kaiyuan Yang, Bin Ren, Yanzhi Wang, Xue Lin. 01 Dec 2020. [MQ]

Joint Multi-Dimension Pruning via Numerical Gradient Update
Zechun Liu, Xinming Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun. 18 May 2020.

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong. 12 Jun 2018.

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le. 05 Nov 2016.