Deep Mutual Learning (arXiv 1706.00384)

1 June 2017
Ying Zhang
Tao Xiang
Timothy M. Hospedales
Huchuan Lu
FedML

Papers citing "Deep Mutual Learning"

50 / 710 papers shown
Learning to Learn Parameterized Classification Networks for Scalable Input Images
Duo Li
Anbang Yao
Qifeng Chen
22
11
0
13 Jul 2020
Temporal Self-Ensembling Teacher for Semi-Supervised Object Detection
Cong Chen
Shouyang Dong
Ye Tian
K. Cao
Li Liu
Yuanhao Guo
33
28
0
13 Jul 2020
Knowledge Distillation Beyond Model Compression
F. Sarfraz
Elahe Arani
Bahram Zonooz
38
40
0
03 Jul 2020
Multiple Expert Brainstorming for Domain Adaptive Person Re-identification
Yunpeng Zhai
QiXiang Ye
Shijian Lu
Mengxi Jia
Rongrong Ji
Yonghong Tian
18
163
0
03 Jul 2020
Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization
Rie Johnson
Tong Zhang
6
8
0
30 Jun 2020
On the Demystification of Knowledge Distillation: A Residual Network Perspective
N. Jha
Rajat Saini
Sparsh Mittal
18
4
0
30 Jun 2020
Federated Mutual Learning
T. Shen
Jie Zhang
Xinkang Jia
Fengda Zhang
Gang Huang
Pan Zhou
Kun Kuang
Fei Wu
Chao-Xiang Wu
FedML
25
120
0
27 Jun 2020
ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation
Xinyue Huo
Lingxi Xie
Jianzhong He
Zijie Yang
Qi Tian
28
0
0
24 Jun 2020
Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training
Xuxi Chen
Wuyang Chen
Tianlong Chen
Ye Yuan
Chen Gong
Kewei Chen
Zhangyang Wang
28
77
0
22 Jun 2020
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation
Duong H. Le
Vo Trung Nhan
N. Thoai
VLM
33
7
0
20 Jun 2020
CPR: Classifier-Projection Regularization for Continual Learning
Sungmin Cha
Hsiang Hsu
Taebaek Hwang
Flavio du Pin Calmon
Taesup Moon
CLL
34
76
0
12 Jun 2020
Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond
Wenhao Wang
Fang Zhao
Tianran Ouyang
Ling Shao
24
49
0
11 Jun 2020
Adjoined Networks: A Training Paradigm with Applications to Network Compression
Utkarsh Nath
Shrinu Kushagra
Yingzhen Yang
32
2
0
10 Jun 2020
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
31
2,857
0
09 Jun 2020
ResKD: Residual-Guided Knowledge Distillation
Xuewei Li
Songyuan Li
Bourahla Omar
Fei Wu
Xi Li
31
47
0
08 Jun 2020
Peer Collaborative Learning for Online Knowledge Distillation
Guile Wu
S. Gong
FedML
22
128
0
07 Jun 2020
Multi-view Contrastive Learning for Online Knowledge Distillation
Chuanguang Yang
Zhulin An
Yongjun Xu
27
23
0
07 Jun 2020
Image Classification in the Dark using Quanta Image Sensors
Abhiram Gnanasambandam
Stanley H. Chan
VLM
34
36
0
03 Jun 2020
Grafted network for person re-identification
Jiabao Wang
Yang Li
Shanshan Jiao
Zhuang Miao
Rui Zhang
21
6
0
02 Jun 2020
On Vocabulary Reliance in Scene Text Recognition
Zhaoyi Wan
Jielei Zhang
Liang Zhang
Jiebo Luo
Cong Yao
27
56
0
08 May 2020
ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks
Xinshao Wang
Yang Hua
Elyor Kodirov
David Clifton
N. Robertson
NoLa
24
60
0
07 May 2020
Heterogeneous Knowledge Distillation using Information Flow Modeling
Nikolaos Passalis
Maria Tzelepi
Anastasios Tefas
32
138
0
02 May 2020
Role-Wise Data Augmentation for Knowledge Distillation
Jie Fu
Xue Geng
Zhijian Duan
Bohan Zhuang
Xingdi Yuan
Adam Trischler
Jie Lin
C. Pal
Hao Dong
27
15
0
19 Apr 2020
Dark Experience for General Continual Learning: a Strong, Simple Baseline
Pietro Buzzega
Matteo Boschini
Angelo Porrello
Davide Abati
Simone Calderara
BDL
CLL
37
884
0
15 Apr 2020
LIAAD: Lightweight Attentive Angular Distillation for Large-scale Age-Invariant Face Recognition
Thanh-Dat Truong
C. Duong
Kha Gia Quach
Ngan Le
Tien D. Bui
Khoa Luu
CVBM
22
8
0
09 Apr 2020
Knowing What, Where and When to Look: Efficient Video Action Modeling with Attention
Juan-Manuel Perez-Rua
Brais Martínez
Xiatian Zhu
Antoine Toisoul
Victor Escorcia
Tao Xiang
53
19
0
02 Apr 2020
Self-Augmentation: Generalizing Deep Networks to Unseen Classes for Few-Shot Learning
Jinhwan Seo
Hong G Jung
Seong-Whan Lee
SSL
21
39
0
01 Apr 2020
Mutual Learning Network for Multi-Source Domain Adaptation
Zhenpeng Li
Z. Zhao
Yuhong Guo
Haifeng Shen
Jieping Ye
OOD
25
12
0
29 Mar 2020
Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning
Andreas Kirsch
Clare Lyle
Y. Gal
27
16
0
27 Mar 2020
Circumventing Outliers of AutoAugment with Knowledge Distillation
Longhui Wei
Anxiang Xiao
Lingxi Xie
Xin Chen
Xiaopeng Zhang
Qi Tian
29
62
0
25 Mar 2020
Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives
Duo Li
Qifeng Chen
153
19
0
24 Mar 2020
Efficient Crowd Counting via Structured Knowledge Transfer
Lingbo Liu
Jiaqi Chen
Hefeng Wu
Tianshui Chen
Guanbin Li
Liang Lin
29
64
0
23 Mar 2020
Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution
Yong Guo
Jian Chen
Jingdong Wang
Qi Chen
Jingyun Liang
Zeshuai Deng
Yanwu Xu
Mingkui Tan
SupR
26
281
0
16 Mar 2020
Knowledge distillation via adaptive instance normalization
Jing Yang
Brais Martínez
Adrian Bulat
Georgios Tzimiropoulos
21
23
0
09 Mar 2020
Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation
Dongliang Chang
Aneeshan Sain
Zhanyu Ma
Yi-Zhe Song
Jun Guo
12
5
0
08 Mar 2020
Combating noisy labels by agreement: A joint training method with co-regularization
Hongxin Wei
Lei Feng
Xiangyu Chen
Bo An
NoLa
319
501
0
05 Mar 2020
Unity Style Transfer for Person Re-Identification
C. Liu
Xiaojun Chang
Yi-Dong Shen
49
72
0
04 Mar 2020
Anytime Inference with Distilled Hierarchical Neural Ensembles
Adria Ruiz
Jakob Verbeek
UQCV
BDL
FedML
52
6
0
03 Mar 2020
Cross-Spectrum Dual-Subspace Pairing for RGB-infrared Cross-Modality Person Re-Identification
Xing Fan
Hao Luo
Chi Zhang
Wei Jiang
25
17
0
29 Feb 2020
Is the Meta-Learning Idea Able to Improve the Generalization of Deep Neural Networks on the Standard Supervised Learning?
Xiang Deng
Zhongfei Zhang
AI4CE
22
5
0
27 Feb 2020
Multi-Representation Knowledge Distillation For Audio Classification
Liang Gao
Kele Xu
Huaimin Wang
Yuxing Peng
67
25
0
22 Feb 2020
Multilinear Compressive Learning with Prior Knowledge
D. Tran
Moncef Gabbouj
Alexandros Iosifidis
9
7
0
17 Feb 2020
Improving Face Recognition from Hard Samples via Distribution Distillation Loss
Yanhua Huang
Pengcheng Shen
Ying Tai
Shaoxin Li
Xiaoming Liu
Jilin Li
Feiyue Huang
Rongrong Ji
CVBM
42
1
0
10 Feb 2020
MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data
Quande Liu
Qi Dou
Lequan Yu
Pheng Ann Heng
OOD
84
275
0
09 Feb 2020
DeepSIC: Deep Soft Interference Cancellation for Multiuser MIMO Detection
Nir Shlezinger
Rong Fu
Yonina C. Eldar
50
102
0
08 Feb 2020
Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach
Zeyue Xue
Shuang Luo
Chao-Xiang Wu
Pan Zhou
Kaigui Bian
Wei Du
16
4
0
06 Feb 2020
Feature-map-level Online Adversarial Knowledge Distillation
Inseop Chung
Seonguk Park
Jangho Kim
Nojun Kwak
GAN
33
128
0
05 Feb 2020
Cooperative Learning via Federated Distillation over Fading Channels
Jinhyun Ahn
Osvaldo Simeone
Joonhyuk Kang
FedML
27
29
0
03 Feb 2020
Periodic Intra-Ensemble Knowledge Distillation for Reinforcement Learning
Zhang-Wei Hong
P. Nagarajan
Guilherme J. Maeda
OffRL
23
4
0
01 Feb 2020
Improving Domain-Adapted Sentiment Classification by Deep Adversarial Mutual Learning
Qian Xue
Wei Zhang
H. Zha
32
39
0
01 Feb 2020