Cycle Self-Training for Domain Adaptation (arXiv:2103.03571)
5 March 2021
Hong Liu, Jianmin Wang, Mingsheng Long

Papers citing "Cycle Self-Training for Domain Adaptation"

9 / 109 papers shown
RADU: Ray-Aligned Depth Update Convolutions for ToF Data Denoising
Michael Schelling, Pedro Hermosilla, Timo Ropinski (3DPC)
30 Nov 2021

Probabilistic Contrastive Learning for Domain Adaptation
Junjie Li, Yixin Zhang, Zilei Wang, Saihui Hou, Keyu Tu, Man Zhang
11 Nov 2021

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
Tu Vu, Minh-Thang Luong, Quoc V. Le, Grady Simon, Mohit Iyyer
13 Sep 2021

Partial Domain Adaptation without Domain Alignment
Weikai Li, Songcan Chen
29 Aug 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini (AAML)
04 May 2021

In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, Percy Liang (OOD)
08 Dec 2020

Confidence Regularized Self-Training
Yang Zou, Zhiding Yu, Xiaofeng Liu, B. Kumar, Jinsong Wang
26 Aug 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine (OOD)
09 Mar 2017

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen, Harri Valpola (OOD, MoMe)
06 Mar 2017