arXiv: 1706.00384
Deep Mutual Learning
1 June 2017
Ying Zhang
Tao Xiang
Timothy M. Hospedales
Huchuan Lu
FedML
Papers citing "Deep Mutual Learning"
50 / 710 papers shown
Personalized Federated Learning with Hidden Information on Personalized Prior
Mingjia Shi
Yuhao Zhou
Qing Ye
Jiancheng Lv
FedML
34
3
0
19 Nov 2022
D³ETR: Decoder Distillation for Detection Transformer
Xiaokang Chen
Jiahui Chen
Yong-Jin Liu
Gang Zeng
47
16
0
17 Nov 2022
Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization
Yiyang Chen
Zhedong Zheng
Wei Ji
Leigang Qu
Tat-Seng Chua
39
37
0
14 Nov 2022
Structured Knowledge Distillation Towards Efficient and Compact Multi-View 3D Detection
Linfeng Zhang
Yukang Shi
Hung-Shuo Tai
Zhipeng Zhang
Yuan He
Ke Wang
Kaisheng Ma
28
2
0
14 Nov 2022
TIER-A: Denoising Learning Framework for Information Extraction
Yongkang Li
M. Zhang
23
0
0
13 Nov 2022
Robust Training of Graph Neural Networks via Noise Governance
Siyi Qian
Haochao Ying
Renjun Hu
Jingbo Zhou
Jintai Chen
Danny Chen
Jian Wu
NoLa
43
34
0
12 Nov 2022
MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation
Lingtong Kong
J. Yang
35
26
0
11 Nov 2022
Resource-Aware Heterogeneous Federated Learning using Neural Architecture Search
Sixing Yu
J. P. Muñoz
Ali Jannesari
FedML
41
0
0
09 Nov 2022
Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu
Xuan Li
Dan Liu
Xi Chen
Ju Wang
Xue Liu
29
35
0
28 Oct 2022
Self-consistent Reasoning For Solving Math Word Problems
Jing Xiong
Zhongwei Wan
Xiping Hu
Min Yang
Chengming Li
ReLM
LRM
54
11
0
27 Oct 2022
Improved Feature Distillation via Projector Ensemble
Yudong Chen
Sen Wang
Jiajun Liu
Xuwei Xu
Frank de Hoog
Zi Huang
39
38
0
27 Oct 2022
Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
Cuong Pham
Tuan Hoang
Thanh-Toan Do
FedML
MQ
40
14
0
27 Oct 2022
Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision
Jiongyu Guo
Defang Chen
Can Wang
22
3
0
25 Oct 2022
Efficient Knowledge Distillation from Model Checkpoints
Chaofei Wang
Qisen Yang
Rui Huang
S. Song
Gao Huang
FedML
14
35
0
12 Oct 2022
Deep Combinatorial Aggregation
Yuesong Shen
Daniel Cremers
OOD
UQCV
19
4
0
12 Oct 2022
The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes
Peter Kocsis
Peter Súkeník
Guillem Brasó
Matthias Nießner
Laura Leal-Taixé
Ismail Elezi
16
7
0
11 Oct 2022
Improving Long-tailed Object Detection with Image-Level Supervision by Multi-Task Collaborative Learning
Bo Li
Yongqiang Yao
Jingru Tan
Xin Lu
F. Yu
Ye Luo
Jianwei Lu
VLM
45
0
0
11 Oct 2022
A Survey on Heterogeneous Federated Learning
Dashan Gao
Xin Yao
Qian Yang
FedML
37
58
0
10 Oct 2022
Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
Xin-Chun Li
Wenxuan Fan
Shaoming Song
Yinchuan Li
Bingshuai Li
Yunfeng Shao
De-Chuan Zhan
62
30
0
10 Oct 2022
Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing
Peng Ye
Shengji Tang
Baopu Li
Tao Chen
Wanli Ouyang
31
13
0
09 Oct 2022
Slimmable Networks for Contrastive Self-supervised Learning
Shuai Zhao
Xiaohan Wang
Linchao Zhu
Yi Yang
40
1
0
30 Sep 2022
MLink: Linking Black-Box Models from Multiple Domains for Collaborative Inference
Mu Yuan
Lan Zhang
Zimu Zheng
Yi-Nan Zhang
Xiang-Yang Li
32
2
0
28 Sep 2022
Clustering-Induced Generative Incomplete Image-Text Clustering (CIGIT-C)
Dongjin Guo
Xiaoming Su
Jiatai Wang
Rui Su
Zhiyong Pei
Zhiwei Xu
VLM
21
0
0
28 Sep 2022
Toward Understanding Privileged Features Distillation in Learning-to-Rank
Shuo Yang
Sujay Sanghavi
Holakou Rahmanian
J. Bakus
S.V.N. Vishwanathan
126
15
0
19 Sep 2022
CAIBC: Capturing All-round Information Beyond Color for Text-based Person Retrieval
Zijie Wang
Aichun Zhu
Jingyi Xue
Xili Wan
Chao Liu
Tiang-Cong Wang
Yifeng Li
99
79
0
13 Sep 2022
Switchable Online Knowledge Distillation
Biao Qian
Yang Wang
Hongzhi Yin
Richang Hong
Meng Wang
66
39
0
12 Sep 2022
Knowledge-enhanced Iterative Instruction Generation and Reasoning for Knowledge Base Question Answering
Haowei Du
Quzhe Huang
Chen Zhang
Dongyan Zhao
31
3
0
07 Sep 2022
Masked Autoencoders Enable Efficient Knowledge Distillers
Yutong Bai
Zeyu Wang
Junfei Xiao
Chen Wei
Huiyu Wang
Alan Yuille
Yuyin Zhou
Cihang Xie
CLL
32
40
0
25 Aug 2022
Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation
Lihe Yang
Lei Qi
Xue Jiang
Wayne Zhang
Yinghuan Shi
35
237
0
21 Aug 2022
Enhancing Heterogeneous Federated Learning with Knowledge Extraction and Multi-Model Fusion
Duy Phuong Nguyen
Sixing Yu
J. P. Muñoz
Ali Jannesari
FedML
21
12
0
16 Aug 2022
Learning Semantic Correspondence with Sparse Annotations
Shuaiyi Huang
Luyu Yang
Bo He
Songyang Zhang
Xuming He
Abhinav Shrivastava
17
25
0
15 Aug 2022
PA-Seg: Learning from Point Annotations for 3D Medical Image Segmentation using Contextual Regularization and Cross Knowledge Distillation
Shuwei Zhai
Guotai Wang
Xiangde Luo
Qian Yue
Kang Li
Shaoting Zhang
3DPC
32
25
0
11 Aug 2022
Self-Knowledge Distillation via Dropout
Hyoje Lee
Yeachan Park
Hyun Seo
Myung-joo Kang
FedML
24
15
0
11 Aug 2022
Learning with Limited Annotations: A Survey on Deep Semi-Supervised Learning for Medical Image Segmentation
Rushi Jiao
Yichi Zhang
Leiting Ding
Rong Cai
Jicong Zhang
34
152
0
28 Jul 2022
HIRE: Distilling High-order Relational Knowledge From Heterogeneous Graph Neural Networks
Jing Liu
Tongya Zheng
Qinfen Hao
29
7
0
25 Jul 2022
Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
Chuanguang Yang
Zhulin An
Helong Zhou
Fuzhen Zhuang
Yongjun Xu
Qian Zhang
48
50
0
23 Jul 2022
Dual Adaptive Transformations for Weakly Supervised Point Cloud Segmentation
Zhonghua Wu
Yicheng Wu
Guosheng Lin
Jianfei Cai
Chen Qian
3DPC
36
23
0
19 Jul 2022
Towards Lightweight Super-Resolution with Dual Regression Learning
Yong Guo
Mingkui Tan
Zeshuai Deng
Jingdong Wang
Qi Chen
Jingyun Liang
Yanwu Xu
Jian Chen
SupR
21
11
0
16 Jul 2022
Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources
Ji Liu
Daxiang Dong
Xi Wang
An Qin
Xingjian Li
P. Valduriez
Dejing Dou
Dianhai Yu
36
6
0
14 Jul 2022
Task Agnostic Representation Consolidation: a Self-supervised based Continual Learning Approach
Prashant Shivaram Bhat
Bahram Zonooz
Elahe Arani
SSL
CLL
27
12
0
13 Jul 2022
Utilizing Excess Resources in Training Neural Networks
Amit Henig
Raja Giryes
53
0
0
12 Jul 2022
HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors
Luting Wang
Xiaojie Li
Yue Liao
Jiang
Jianlong Wu
Fei Wang
Chao Qian
Si Liu
30
20
0
12 Jul 2022
VEM²L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion
Tao He
Ming Liu
Haichao Zhu
Tianwen Jiang
Zihao Zheng
Jingrun Zhang
Sendong Zhao
Bing Qin
22
1
0
04 Jul 2022
PrUE: Distilling Knowledge from Sparse Teacher Networks
Shaopu Wang
Xiaojun Chen
Mengzhen Kou
Jinqiao Shi
30
2
0
03 Jul 2022
ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State
Xinshao Wang
Yang Hua
Elyor Kodirov
S. Mukherjee
David Clifton
N. Robertson
35
6
0
30 Jun 2022
An Empirical Study of Personalized Federated Learning
Koji Matsuda
Yuya Sasaki
Chuan Xiao
Makoto Onizuka
OOD
FedML
27
6
0
27 Jun 2022
Mixed Sample Augmentation for Online Distillation
Yiqing Shen
Liwu Xu
Yuzhe Yang
Yaqian Li
Yandong Guo
31
3
0
24 Jun 2022
Variational Distillation for Multi-View Learning
Xudong Tian
Zhizhong Zhang
Cong Wang
Wensheng Zhang
Yanyun Qu
Lizhuang Ma
Zongze Wu
Yuan Xie
Dacheng Tao
26
5
0
20 Jun 2022
Deep Compatible Learning for Partially-Supervised Medical Image Segmentation
Kecheng Zhang
Xiahai Zhuang
13
3
0
18 Jun 2022
Improving Generalization of Metric Learning via Listwise Self-distillation
Zelong Zeng
Fan Yang
Ziyi Wang
Shin'ichi Satoh
FedML
46
1
0
17 Jun 2022