Knowledge Distillation by On-the-Fly Native Ensemble

12 June 2018 · arXiv:1806.04606
Xu Lan, Xiatian Zhu, S. Gong

Papers citing "Knowledge Distillation by On-the-Fly Native Ensemble"
Showing 50 of 87 citing papers.

CR-CTC: Consistency regularization on CTC for improved speech recognition
Zengwei Yao, Wei Kang, Xiaoyu Yang, Fangjun Kuang, Liyong Guo, Han Zhu, Zengrui Jin, Zhaoqing Li, Long Lin, Daniel Povey
17 Feb 2025

Knowledge Distillation with Adapted Weight
Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng
06 Jan 2025

PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation
Mike Ranzinger, Jon Barker, Greg Heinrich, Pavlo Molchanov, Bryan Catanzaro, Andrew Tao
02 Oct 2024

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal
30 Sep 2024

Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition
Zhili Lai, Chunmei Qing, Junpeng Tan, Wanxiang Luo, Xiangmin Xu
24 Sep 2024

GMM-ResNet2: Ensemble of Group ResNet Networks for Synthetic Speech Detection
Zhenchun Lei, Hui Yan, Changhong Liu, Yong Zhou, Minglei Ma
02 Jul 2024

MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution [SupR]
Yuxuan Jiang, Chen Feng, Fan Zhang, David Bull
15 Apr 2024

Shifting Focus: From Global Semantics to Local Prominent Features in Swin-Transformer for Knee Osteoarthritis Severity Assessment
Aymen Sekhri, Marouane Tliba, M. A. Kerkouri, Yassine Nasser, Aladine Chetouani, Alessandro Bruno, Rachid Jennane
15 Mar 2024

Predictive Churn with the Set of Good Models
J. Watson-Daniels, Flavio du Pin Calmon, Alexander D'Amour, Carol Xuan Long, David C. Parkes, Berk Ustun
12 Feb 2024

Decoupled Knowledge with Ensemble Learning for Online Distillation
Baitan Shao, Ying Chen
18 Dec 2023

AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One [VLM]
Michael Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov
10 Dec 2023

Gramian Attention Heads are Strong yet Efficient Vision Learners
Jongbin Ryu, Dongyoon Han, J. Lim
25 Oct 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023

Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation
Yuxin Ren, Zi-Qi Zhong, Xingjian Shi, Yi Zhu, Chun Yuan, Mu Li
16 May 2023

Self-discipline on multiple channels
Jiutian Zhao, Liangchen Luo, Hao Wang
27 Apr 2023

Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation
Tianli Zhang, Mengqi Xue, Jiangtao Zhang, Haofei Zhang, Yu Wang, Lechao Cheng, Jie Song, Mingli Song
26 Mar 2023

Distillation from Heterogeneous Models for Top-K Recommendation [VLM]
SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
02 Mar 2023

Pathologies of Predictive Diversity in Deep Ensembles [UQCV]
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, John P. Cunningham
01 Feb 2023

Deep Negative Correlation Classification
Le Zhang, Qibin Hou, Yun-Hai Liu, Jiawang Bian, Xun Xu, Qiufeng Wang, Ce Zhu
14 Dec 2022

Co-training $2^L$ Submodels for Visual Recognition [VLM]
Hugo Touvron, Matthieu Cord, Maxime Oquab, Piotr Bojanowski, Jakob Verbeek, Hervé Jégou
09 Dec 2022

DGEKT: A Dual Graph Ensemble Learning Method for Knowledge Tracing [AI4Ed]
C. Cui, Yumo Yao, Chunyun Zhang, Hebo Ma, Yuling Ma, Z. Ren, Chen Zhang, James Ko
23 Nov 2022

Self-Supervised Visual Representation Learning via Residual Momentum [SSL]
T. Pham, Axi Niu, Zhang Kang, Sultan Rizky Hikmawan Madjid, Jiajing Hong, Daehyeok Kim, Joshua Tian Jin Tee, Chang D. Yoo
17 Nov 2022

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
28 Oct 2022

Federated Learning with Privacy-Preserving Ensemble Attention Distillation [FedML]
Xuan Gong, Liangchen Song, Rishi Vedula, Abhishek Sharma, Meng Zheng, ..., Arun Innanje, Terrence Chen, Junsong Yuan, David Doermann, Ziyan Wu
16 Oct 2022

Label driven Knowledge Distillation for Federated Learning with non-IID Data
Minh-Duong Nguyen, Viet Quoc Pham, D. Hoang, Long Tran-Thanh, Diep N. Nguyen, W. Hwang
29 Sep 2022

Integrating Object-aware and Interaction-aware Knowledge for Weakly Supervised Scene Graph Generation
Xingchen Li, Long Chen, Wenbo Ma, Yi Yang, Jun Xiao
03 Aug 2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang
23 Jul 2022

Multi-scale Feature Extraction and Fusion for Online Knowledge Distillation
Panpan Zou, Yinglei Teng, Tao Niu
16 Jun 2022

Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos
13 May 2022

A Closer Look at Branch Classifiers of Multi-exit Architectures
Shaohui Lin, Bo Ji, Rongrong Ji, Angela Yao
28 Apr 2022

HFT: Lifting Perspective Representations via Hybrid Feature Transformation
Jiayu Zou, Jun Xiao, Zheng Hua Zhu, Junjie Huang, Guan Huang, Dalong Du, Xingang Wang
11 Apr 2022

Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
30 Mar 2022

Channel Self-Supervision for Online Knowledge Distillation
Shixi Fan, Xuan Cheng, Xiaomin Wang, Chun Yang, Pan Deng, Minghui Liu, Jiali Deng, Meilin Liu
22 Mar 2022

Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering
SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
26 Feb 2022

Learn From the Past: Experience Ensemble Knowledge Distillation
Chaofei Wang, Shaowei Zhang, S. Song, Gao Huang
25 Feb 2022

Handwritten Mathematical Expression Recognition via Attention Aggregation based Bi-directional Mutual Learning
Xiaohang Bian, Bo Qin, Xiaozhe Xin, Jianwu Li, Xuefeng Su, Yanfeng Wang
07 Dec 2021

Improved Knowledge Distillation via Adversarial Collaboration
Zhiqiang Liu, Chengkai Huang, Yanxia Liu
29 Nov 2021

MUSE: Feature Self-Distillation with Mutual Information and Self-Information [SSL]
Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen
25 Oct 2021

Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation
Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen
19 Oct 2021

Boost Neural Networks by Checkpoints [FedML, UQCV]
Feng Wang, Gu-Yeon Wei, Qiao Liu, Jinxiang Ou, Xian Wei, Hairong Lv
03 Oct 2021

Deep Structured Instance Graph for Distilling Object Detectors [ObjD, ISeg]
Yixin Chen, Pengguang Chen, Shu-Lin Liu, Liwei Wang, Jiaya Jia
27 Sep 2021

Personalized Federated Learning for Heterogeneous Clients with Clustered Knowledge Transfer [FedML]
Yae Jee Cho, Jianyu Wang, Tarun Chiruvolu, Gauri Joshi
16 Sep 2021

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
07 Sep 2021

Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation
Maria Tzelepi, Anastasios Tefas
26 Aug 2021

Communication Optimization in Large Scale Federated Learning using Autoencoder Compressed Weight Updates [AI4CE]
Srikanth Chandar, Pravin Chandran, Raghavendra Bhat, Avinash Chakravarthi
12 Aug 2021

Online Knowledge Distillation for Efficient Pose Estimation
Zheng Li, Jingwen Ye, Xiuming Zhang, Ying Huang, Zhigeng Pan
04 Aug 2021

Evaluating Deep Graph Neural Networks [GNN, AI4CE]
Wentao Zhang, Zeang Sheng, Yuezihan Jiang, Yikuan Xia, Jun Gao, Zhi-Xin Yang, Bin Cui
02 Aug 2021

Mutual Contrastive Learning for Visual Representation Learning [VLM, SSL]
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
26 Apr 2021

Distill on the Go: Online knowledge distillation in self-supervised learning [SSL]
Prashant Bhat, Elahe Arani, Bahram Zonooz
20 Apr 2021

Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition
Jiahui She, Yibo Hu, Hailin Shi, Jun Wang, Qiu Shen, Tao Mei
01 Apr 2021