ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification

27 April 2021
Yixiao Ge, Xiao Zhang, Ching Lam Choi, Ka Chun Cheung, Peipei Zhao, Feng Zhu, Xiaogang Wang, Rui Zhao, Hongsheng Li

Papers citing "Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification"

9 / 9 papers shown
SnapCap: Efficient Snapshot Compressive Video Captioning (10 Jan 2024)
Jianqiao Sun, Yudi Su, Hao Zhang, Ziheng Cheng, Zequn Zeng, Zhengjue Wang, Bo Chen, Xin Yuan

Improving Generalization of Metric Learning via Listwise Self-distillation (17 Jun 2022)
Zelong Zeng, Fan Yang, Z. Wang, Shin'ichi Satoh

Privacy-Preserving Model Upgrades with Bidirectional Compatible Training in Image Retrieval (29 Apr 2022)
Shupeng Su, Binjie Zhang, Yixiao Ge, Xuyuan Xu, Yexin Wang, Chun Yuan, Ying Shan

Vision Pair Learning: An Efficient Training Framework for Image Classification (02 Dec 2021)
Bei Tong, Xiaoyuan Yu

Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision (03 Sep 2021)
Zhong Ji, Jin Li, Qiang Wang, Zhongfei Zhang

Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels (13 Jan 2021)
Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun

Knowledge Distillation by On-the-Fly Native Ensemble (12 Jun 2018)
Xu Lan, Xiatian Zhu, S. Gong

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results (06 Mar 2017)
Antti Tarvainen, Harri Valpola

Aggregated Residual Transformations for Deep Neural Networks (16 Nov 2016)
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He