Avoid Wasted Annotation Costs in Open-set Active Learning with Pre-trained Vision-Language Model

9 August 2024
Jaehyuk Heo
Pilsung Kang
VLM
Abstract

Active learning (AL) aims to enhance model performance by selectively collecting highly informative data, thereby minimizing annotation costs. However, in practical scenarios, unlabeled data may contain out-of-distribution (OOD) samples, which are not used for training, leading to wasted annotation costs if such data is incorrectly selected. Therefore, to make active learning feasible in real-world applications, it is crucial to consider not only the informativeness of unlabeled samples but also their purity, i.e., whether they belong to the in-distribution (ID). Recent studies have applied AL under these assumptions, but challenges remain due to the trade-off between informativeness and purity, as well as a heavy dependence on OOD samples. These issues lead to the collection of OOD samples, resulting in a significant waste of annotation costs. To address these challenges, we propose a novel query strategy, VLPure-AL, which minimizes cost losses while reducing dependence on OOD samples. VLPure-AL sequentially evaluates the purity and informativeness of data. First, it utilizes a pre-trained vision-language model to detect and exclude OOD data with high accuracy by leveraging the linguistic and visual information of ID data. Second, it selects highly informative data from the remaining ID data, and the selected samples are then annotated by human experts. Experimental results on datasets with various open-set conditions demonstrate that VLPure-AL achieves the lowest cost loss and highest performance across all scenarios. Code is available at this https URL.
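The abstract describes a two-stage query: a purity filter driven by a pre-trained vision-language model, followed by an informativeness ranking over the samples judged in-distribution. The sketch below illustrates that pipeline in Python with CLIP-style zero-shot scores standing in for the VLM. The embedding functions, the max-probability purity threshold, and the entropy criterion here are illustrative assumptions, not the authors' implementation; see the linked code for VLPure-AL itself.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a pre-trained vision-language model (e.g. CLIP):
# in practice these would call the model's image and text encoders.
def embed_images(images):
    # -> (n, d) L2-normalized image features (random placeholders here)
    feats = rng.normal(size=(len(images), 512))
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def embed_texts(prompts):
    # -> (k, d) L2-normalized text features (random placeholders here)
    feats = rng.normal(size=(len(prompts), 512))
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def query(unlabeled, id_class_names, budget, purity_threshold=0.5, temperature=100.0):
    """Two-stage query: (1) purity filtering with VLM zero-shot scores,
    (2) informativeness ranking (entropy) over the retained samples."""
    img = embed_images(unlabeled)
    txt = embed_texts([f"a photo of a {c}" for c in id_class_names])

    # Stage 1: zero-shot class probabilities over the ID label set;
    # a low maximum probability is treated as evidence the sample is OOD.
    logits = temperature * img @ txt.T
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    keep = probs.max(axis=1) >= purity_threshold

    # Stage 2: among retained (presumed-ID) samples, pick the most
    # uncertain ones by predictive entropy.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    entropy[~keep] = -np.inf  # filtered samples sort last
    order = np.argsort(-entropy)
    return order[:budget]  # indices of samples to send for annotation

# Example call with dummy data and two assumed ID classes:
# selected = query(list(range(1000)), ["cat", "dog"], budget=10)

Sequencing purity before informativeness is the key design point: an uncertainty score computed over the full unlabeled pool would favor exactly the OOD samples the strategy is trying to avoid paying annotators for.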

View on arXiv
@article{heo2025_2408.04917,
  title={Avoid Wasted Annotation Costs in Open-set Active Learning with Pre-trained Vision-Language Model},
  author={Jaehyuk Heo and Pilsung Kang},
  journal={arXiv preprint arXiv:2408.04917},
  year={2025}
}