MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Zhiqiang Shen, Marios Savvides
17 September 2020 · arXiv:2009.08453

Papers citing "MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks"

22 papers shown
Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout, U. Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
19 Sep 2024

Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang
26 Oct 2023

Multimodal Distillation for Egocentric Action Recognition
Gorjan Radevski, Dusan Grujicic, Marie-Francine Moens, Matthew Blaschko, Tinne Tuytelaars
Tags: EgoV
14 Jul 2023

ERM++: An Improved Baseline for Domain Generalization
Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis, Kate Saenko, Bryan A. Plummer
Tags: OOD
04 Apr 2023

Rethinking Soft Label in Label Distribution Learning Perspective
Seungbum Hong, Jihun Yoon, Bogyu Park, Min-Kook Choi
31 Jan 2023

Rethinking Out-of-Distribution Detection From a Human-Centric Perspective
Yao Zhu, YueFeng Chen, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Rongxin Jiang, Bo Zheng, Yao-wu Chen
Tags: OODD
30 Nov 2022

Progressive Learning without Forgetting
Tao Feng, Hangjie Yuan, Mang Wang, Ziyuan Huang, Ang Bian, Jianzhou Zhang
Tags: CLL, KELM
28 Nov 2022

Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
Nianhui Guo, Joseph Bethge, Christoph Meinel, Haojin Yang
Tags: MQ
23 Nov 2022

Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective
Yao Zhu, YueFeng Chen, Xiaodan Li, Kejiang Chen, Yuan He, Xiang Tian, Bo Zheng, Yao-wu Chen, Qingming Huang
Tags: AAML
09 Oct 2022

Knowledge Distillation of Transformer-based Language Models Revisited
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang
Tags: VLM
29 Jun 2022

TransBoost: Improving the Best ImageNet Performance using Deep Transduction
Omer Belhasin, Guy Bar-Shalom, Ran El-Yaniv
Tags: ViT
26 May 2022

Cross-modal Contrastive Distillation for Instructional Activity Anticipation
Zhengyuan Yang, Jingen Liu, Jing-ling Huang, Xiaodong He, Tao Mei, Chenliang Xu, Jiebo Luo
18 Jan 2022

Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
Tags: MQ
01 Nov 2021

Knowledge distillation: A good teacher is patient and consistent
Lucas Beyer, Xiaohua Zhai, Amelie Royer, L. Markeeva, Rohan Anil, Alexander Kolesnikov
Tags: VLM
09 Jun 2021

BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, Xiaojun Chang
Tags: ViT
23 Mar 2021

Student Network Learning via Evolutionary Knowledge Distillation
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
23 Mar 2021

S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration
Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides
Tags: UQCV, SSL, MQ
17 Feb 2021

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu
Tags: SSL
12 Jan 2021

Concept Generalization in Visual Representation Learning
Mert Bulent Sariyildiz, Yannis Kalantidis, Diane Larlus, Alahari Karteek
Tags: SSL
10 Dec 2020

Joint Multi-Dimension Pruning via Numerical Gradient Update
Zechun Liu, Xinming Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun
18 May 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016