Collaborative Unlabeled Data Optimization

Abstract

This paper pioneers a novel data-centric paradigm to maximize the utility of unlabeled data, tackling a critical question: How can we enhance the efficiency and sustainability of deep learning training by optimizing the data itself? We begin by identifying three key limitations in existing model-centric approaches, all rooted in a shared bottleneck: knowledge extracted from data is locked into model parameters, hindering its reusability and scalability. To this end, we propose CoOpt, a highly efficient, parallelized framework for collaborative unlabeled data optimization that effectively encodes knowledge into the data itself. By distributing unlabeled data and leveraging publicly available task-agnostic models, CoOpt facilitates scalable, reusable, and sustainable training pipelines. Extensive experiments across diverse datasets and architectures demonstrate its efficacy and efficiency, achieving 13.6% and 6.8% improvements on Tiny-ImageNet and ImageNet-1K, respectively, with training speedups of 1.94× and 1.2×.

@article{shang2025_2505.14117,
  title={Collaborative Unlabeled Data Optimization},
  author={Xinyi Shang and Peng Sun and Fengyuan Liu and Tao Lin},
  journal={arXiv preprint arXiv:2505.14117},
  year={2025}
}