ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Instance-aware Model Ensemble With Distillation For Unsupervised Domain Adaptation

15 November 2022
Weimin Wu
Jiayuan Fan
Tao Chen
Hancheng Ye
Bo Zhang
Baopu Li
arXiv: 2211.08106

Papers citing "Instance-aware Model Ensemble With Distillation For Unsupervised Domain Adaptation"

18 / 18 papers shown
MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation
Lingtong Kong, J. Yang
11 Nov 2022

A Closer Look at Smoothness in Domain Adversarial Training
Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, R. Venkatesh Babu
16 Jun 2022

TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation
Jinyu Yang, Jingjing Liu, N. Xu, Junzhou Huang
12 Aug 2021

Student Network Learning via Evolutionary Knowledge Distillation
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
23 Mar 2021

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li
Topics: FedML
17 Dec 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
Topics: ViT
22 Oct 2020

Sharpness-Aware Minimization for Efficiently Improving Generalization
Pierre Foret, Ariel Kleiner, H. Mobahi, Behnam Neyshabur
Topics: AAML
03 Oct 2020

Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation
Le Thanh Nguyen-Meidine, Atif Bela, M. Kiran, Jose Dolz, Louis-Antoine Blais-Morin, Eric Granger
14 Jul 2020

Domain Adaptive Ensemble Learning
Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang
Topics: OOD
16 Mar 2020

Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
Umar Asif, Jianbin Tang, S. Harrer
Topics: FedML
17 Sep 2019

Highlight Every Step: Knowledge Distillation via Collaborative Teaching
Haoran Zhao, Xin Sun, Junyu Dong, Changrui Chen, Zihe Dong
23 Jul 2019

Contrastive Adaptation Network for Unsupervised Domain Adaptation
Guoliang Kang, Lu Jiang, Yi Yang, Alexander G. Hauptmann
04 Jan 2019

Deep Transfer Learning with Joint Adaptation Networks
Mingsheng Long, Hanhua Zhu, Jianmin Wang, Michael I. Jordan
Topics: TTA
21 May 2016

Unsupervised Domain Adaptation with Residual Transfer Networks
Mingsheng Long, Hanjing Zhu, Jianmin Wang, Michael I. Jordan
Topics: OOD
14 Feb 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
Topics: MedIm
10 Dec 2015

Domain-Adversarial Training of Neural Networks
Yaroslav Ganin, E. Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, M. Marchand, Victor Lempitsky
Topics: GAN, OOD
28 May 2015

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
Topics: FedML
09 Mar 2015

Learning Transferable Features with Deep Adaptation Networks
Mingsheng Long, Yue Cao, Jianmin Wang, Michael I. Jordan
Topics: OOD
10 Feb 2015