Deep Mutual Learning (arXiv 1706.00384)
Ying Zhang, Tao Xiang, Timothy M. Hospedales, Huchuan Lu (1 June 2017) [FedML]
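For context, Deep Mutual Learning trains a cohort of student networks together rather than distilling from a fixed teacher: each student minimizes its own supervised cross-entropy plus a KL-divergence term that pulls its predictions toward each peer's. A minimal PyTorch sketch of the two-student objective follows; the function and variable names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def dml_losses(logits1, logits2, targets):
    """Deep-mutual-learning losses for a two-student cohort (illustrative).

    Student k minimizes CE(targets, p_k) + KL(p_peer || p_k),
    where p_k = softmax(logits_k).
    """
    ce1 = F.cross_entropy(logits1, targets)
    ce2 = F.cross_entropy(logits2, targets)
    # Detach the peer's distribution so each KL term only
    # backpropagates into the student being updated.
    kl_into_1 = F.kl_div(F.log_softmax(logits1, dim=1),
                         F.softmax(logits2, dim=1).detach(),
                         reduction="batchmean")
    kl_into_2 = F.kl_div(F.log_softmax(logits2, dim=1),
                         F.softmax(logits1, dim=1).detach(),
                         reduction="batchmean")
    return ce1 + kl_into_1, ce2 + kl_into_2

# Toy usage with random logits (assumed shapes: batch of 8, 10 classes).
logits1, logits2 = torch.randn(8, 10), torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss1, loss2 = dml_losses(logits1, logits2, targets)
```

In the paper the students are updated in turn, each with its own optimizer; detaching the peer above mirrors that alternating scheme by treating the peer as a fixed target within each loss.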
Papers citing "Deep Mutual Learning" (showing 50 of 710)
CORSD: Class-Oriented Relational Self Distillation
Muzhou Yu, S. Tan, Kailu Wu, Runpei Dong, Linfeng Zhang, Kaisheng Ma (28 Apr 2023)

Self-discipline on multiple channels
Jiutian Zhao, Liangchen Luo, Hao Wang (27 Apr 2023)

Deeply-Coupled Convolution-Transformer with Spatial-temporal Complementary Learning for Video-based Person Re-identification
Xuehu Liu, Chenyang Yu, Pingping Zhang, Huchuan Lu (27 Apr 2023) [ViT]
Improving Knowledge Distillation via Transferring Learning Ability
Long Liu, Tong Li, Hui Cheng (24 Apr 2023)

Function-Consistent Feature Distillation
Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen (24 Apr 2023)

Deep Collective Knowledge Distillation
Jihyeon Seo, Kyusam Oh, Chanho Min, Yongkeun Yun, Sungwoo Cho (18 Apr 2023)

Teacher Network Calibration Improves Cross-Quality Knowledge Distillation
Pia Cuk, Robin Senge, M. Lauri, Simone Frintrop (15 Apr 2023)

Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning
Kaiyou Song, Jin Xie, Shanyi Zhang, Zimeng Luo (13 Apr 2023)

Grouped Knowledge Distillation for Deep Face Recognition
Weisong Zhao, Xiangyu Zhu, Kaiwen Guo, Xiaoyu Zhang, Zhen Lei (10 Apr 2023) [CVBM]
Label-guided Attention Distillation for Lane Segmentation
Zhikang Liu, Lanyun Zhu (04 Apr 2023)

Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation
Yang Jin, Mengke Li, Yang Lu, Y. Cheung, Hanzi Wang (03 Apr 2023)

DisWOT: Student Architecture Search for Distillation WithOut Training
Peijie Dong, Lujun Li, Zimian Wei (28 Mar 2023)
Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers
Zhou Huang, Hang Dai, Tian-Zhu Xiang, Shuo Wang, Huaixin Chen, Jie Qin, Huan Xiong (26 Mar 2023) [ViT]

Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation
Tianli Zhang, Mengqi Xue, Jiangtao Zhang, Haofei Zhang, Yu Wang, Lechao Cheng, Mingli Song (26 Mar 2023)
Heterogeneous-Branch Collaborative Learning for Dialogue Generation
Yiwei Li, Shaoxiong Feng, Bin Sun, Kan Li (21 Mar 2023)

Channel-Aware Distillation Transformer for Depth Estimation on Nano Drones
Ning Zhang, F. Nex, G. Vosselman, N. Kerle (18 Mar 2023)

ELFIS: Expert Learning for Fine-grained Image Recognition Using Subsets
Pablo J. Villacorta, Jesús M. Rodríguez-de-Vera, Marc Bolaños, Ignacio Sarasúa, Bhalaji Nagarajan, Petia Radeva (16 Mar 2023)

Focus on Your Target: A Dual Teacher-Student Framework for Domain-adaptive Semantic Segmentation
Xinyue Huo, Lingxi Xie, Wen-gang Zhou, Houqiang Li, Qi Tian (16 Mar 2023)

MetaMixer: A Regularization Strategy for Online Knowledge Distillation
Maorong Wang, L. Xiao, T. Yamasaki (14 Mar 2023) [KELM, MoE]
CoT-MISR: Marrying Convolution and Transformer for Multi-Image Super-Resolution
Mingming Xiu, Yang Nie, Qing-Huang Song, Chun Liu (12 Mar 2023) [SupR, ViT]
Learn More for Food Recognition via Progressive Self-Distillation
Yaohui Zhu, Linhu Liu, Jiang Tian (09 Mar 2023)

Smooth and Stepwise Self-Distillation for Object Detection
Jieren Deng, Xiaoxia Zhou, Hao Tian, Zhihong Pan, Derek Aguiar (09 Mar 2023) [ObjD]

Memory-adaptive Depth-wise Heterogenous Federated Learning
Kai Zhang, Yutong Dai, Hongyi Wang, Eric P. Xing, Xun Chen, Lichao Sun (08 Mar 2023) [FedML]

Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT
Jiaqi Wang, Shenglai Zeng, Zewei Long, Yaqing Wang, Houping Xiao, Fenglong Ma (05 Mar 2023)
Graph-based Knowledge Distillation: A survey and experimental evaluation
Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao (27 Feb 2023)

Improving Sentence Similarity Estimation for Unsupervised Extractive Summarization
Shichao Sun, Ruifeng Yuan, Wenjie Li, Sujian Li (24 Feb 2023)

Distilling Calibrated Student from an Uncalibrated Teacher
Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra (22 Feb 2023) [FedML]

A Survey on Semi-Supervised Semantic Segmentation
Adrian Peláez-Vegas, Pablo Mesejo, Julián Luengo (20 Feb 2023)

Learning From Biased Soft Labels
Hua Yuan, Ning Xu, Yuge Shi, Xin Geng, Yong Rui (16 Feb 2023) [FedML]

URCDC-Depth: Uncertainty Rectified Cross-Distillation with CutFlip for Monocular Depth Estimation
Shuwei Shao, Z. Pei, Weihai Chen, Ran Li, Zhong Liu, Zhengguo Li (16 Feb 2023) [ViT, UQCV]

Knowledge Distillation-based Information Sharing for Online Process Monitoring in Decentralized Manufacturing System
Zhangyue Shi, Yuxuan Li, Chenang Liu (08 Feb 2023)

Flat Seeking Bayesian Neural Networks
Van-Anh Nguyen, L. Vuong, Hoang Phan, Thanh-Toan Do, Dinh Q. Phung, Trung Le (06 Feb 2023) [BDL]
Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?
Md Arafat Sultan (30 Jan 2023)
Supervision Complexity and its Role in Knowledge Distillation
Hrayr Harutyunyan, A. S. Rawat, A. Menon, Seungyeon Kim, Sanjiv Kumar (28 Jan 2023)
Improving Text-based Early Prediction by Distillation from Privileged Time-Series Text
Jinghui Liu, Daniel Capurro, Anthony N. Nguyen, Karin Verspoor (26 Jan 2023) [AI4TS]

Semi-Supervised Learning with Pseudo-Negative Labels for Image Classification
Hao Xu, Hui Xiao, Huazheng Hao, Li Dong, Xiaojie Qiu, Chengbin Peng (10 Jan 2023) [VLM, SSL]

NeRN -- Learning Neural Representations for Neural Networks
Maor Ashkenazi, Zohar Rimon, Ron Vainshtein, Shir Levi, Elad Richardson, Pinchas Mintz, Eran Treister (27 Dec 2022) [3DH]

BD-KD: Balancing the Divergences for Online Knowledge Distillation
Ibtihel Amara, N. Sepahvand, B. Meyer, W. Gross, J. Clark (25 Dec 2022)

Training Lightweight Graph Convolutional Networks with Phase-field Models
H. Sahbi (19 Dec 2022)
Co-training 2^L Submodels for Visual Recognition
Hugo Touvron, Matthieu Cord, Maxime Oquab, Piotr Bojanowski, Jakob Verbeek, Hervé Jégou (09 Dec 2022) [VLM]
Leveraging Different Learning Styles for Improved Knowledge Distillation in Biomedical Imaging
Usma Niyaz, A. Sambyal, Deepti R. Bathula (06 Dec 2022)

Model and Data Agreement for Learning with Noisy Labels
Yuhang Zhang, Weihong Deng, Xingchen Cui, Yunfeng Yin, Hongzhi Shi, Dongchao Wen (02 Dec 2022) [NoLa]

Multilingual Communication System with Deaf Individuals Utilizing Natural and Visual Languages
Tuan-Luc Huynh, Khoi-Nguyen Nguyen-Ngoc, Chi-Bien Chu, Minh-Triet Tran, Trung-Nghia Le (01 Dec 2022) [SLR]

SteppingNet: A Stepping Neural Network with Incremental Accuracy Enhancement
Wenhao Sun, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Huaxi Gu, Bing Li, Ulf Schlichtmann (27 Nov 2022)

Cross-Domain Ensemble Distillation for Domain Generalization
Kyung-Jin Lee, Sungyeon Kim, Suha Kwak (25 Nov 2022) [FedML, OOD]

Improving Multi-task Learning via Seeking Task-based Flat Regions
Hoang Phan, Lam C. Tran, Ngoc N. Tran, Nhat Ho, Dinh Q. Phung, Trung Le (24 Nov 2022)

Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment
Yuchen Ma, Yanbei Chen, Zeynep Akata (23 Nov 2022)

DGEKT: A Dual Graph Ensemble Learning Method for Knowledge Tracing
C. Cui, Yumo Yao, Chunyun Zhang, Hebo Ma, Yuling Ma, Zhaochun Ren, Chen Zhang, James Ko (23 Nov 2022) [AI4Ed]

FedDCT: Federated Learning of Large Convolutional Neural Networks on Resource Constrained Devices using Divide and Collaborative Training
Quan Nguyen, Hieu H. Pham, Kok-Seng Wong, Phi Le Nguyen, Truong Thao Nguyen, Minh N. Do (20 Nov 2022) [FedML]

Scalable Collaborative Learning via Representation Sharing
Frédéric Berdoz, Abhishek Singh, Martin Jaggi, Ramesh Raskar (20 Nov 2022) [FedML]