Re-Evaluating the Impact of Unseen-Class Unlabeled Data on Semi-Supervised Learning Model

2 March 2025
Rundong He
Yicong Dong
Lanzhe Guo
Yilong Yin
Tailin Wu
Abstract

Semi-supervised learning (SSL) effectively leverages unlabeled data and has proven successful across various fields. Current safe SSL methods assume that unseen classes in unlabeled data harm the performance of SSL models. However, previous methods for assessing the impact of unseen classes on SSL model performance are flawed: they fix the size of the unlabeled dataset and adjust the proportion of unseen classes within it, which contravenes the principle of controlling variables. Adjusting the proportion of unseen classes in unlabeled data simultaneously alters the proportion of seen classes, so a drop in classification performance on seen classes may be due not to an increase in unseen-class samples but to a decrease in seen-class samples. Thus, the prior conclusion that "unseen classes in unlabeled data can damage SSL model performance" may not always hold. This paper strictly adheres to the principle of controlling variables, maintaining the proportion of seen classes in unlabeled data while changing only the unseen classes across five critical dimensions, to investigate their impact on SSL models in terms of global and local robustness. Experiments demonstrate that unseen classes in unlabeled data do not necessarily impair the performance of SSL models; in fact, under certain conditions, unseen classes may even enhance it.
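The contrast between the two evaluation protocols can be made concrete with a small sampling sketch. This is an illustrative reconstruction, not the paper's code: the function name, pool construction, and dataset sizes are assumptions chosen to show why fixing the total size confounds the two variables, while fixing the seen-class count isolates the effect of unseen-class samples.

```python
import random

def build_unlabeled_set(seen_pool, unseen_pool, n_seen, n_unseen, seed=0):
    """Sample an unlabeled set with a fixed number of seen-class samples.

    Under the controlled-variable protocol, n_seen is held constant and
    only n_unseen varies, so any performance change can be attributed to
    the unseen-class samples alone. (Illustrative helper, not from the paper.)
    """
    rng = random.Random(seed)
    return rng.sample(seen_pool, n_seen) + rng.sample(unseen_pool, n_unseen)

# Hypothetical pools of sample indices for seen and unseen classes.
seen_pool = list(range(2000))
unseen_pool = list(range(2000, 4000))

# Flawed protocol: total size fixed at 1000, so raising the unseen-class
# count simultaneously shrinks the seen-class portion (two variables change).
total = 1000
flawed = [(total - u, u) for u in (0, 250, 500)]      # (n_seen, n_unseen)

# Controlled protocol: n_seen fixed at 500, only n_unseen varies.
controlled = [(500, u) for u in (0, 250, 500)]

for n_seen, n_unseen in controlled:
    batch = build_unlabeled_set(seen_pool, unseen_pool, n_seen, n_unseen)
    # Every batch contains exactly 500 seen-class samples.
    assert sum(1 for x in batch if x < 2000) == 500
```

Under the flawed protocol, the seen-class count drops from 1000 to 500 as the unseen ratio rises, which by itself can degrade accuracy; the controlled protocol removes that confound.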

View on arXiv
@article{he2025_2503.00884,
  title={Re-Evaluating the Impact of Unseen-Class Unlabeled Data on Semi-Supervised Learning Model},
  author={Rundong He and Yicong Dong and Lanzhe Guo and Yilong Yin and Tailin Wu},
  journal={arXiv preprint arXiv:2503.00884},
  year={2025}
}