
Info-Coevolution: An Efficient Framework for Data Model Coevolution

9 June 2025
Ziheng Qin
Hailun Xu
Wei Chee Yew
Qi Jia
Yang Luo
Kanchan Sarkar
Danhui Guan
Kai Wang
Yang You
Abstract

Machine learning relies heavily on data, yet the continuous growth of real-world data poses challenges for efficient dataset construction and training. A fundamental yet unsolved question is: given our current model and data, does a new data sample (or batch) need to be annotated or learned? Conventional approaches retain all available data, leading to suboptimal data and training efficiency. Active learning aims to reduce data redundancy by selecting a subset of samples to annotate, but it increases pipeline complexity and introduces bias. In this work, we propose Info-Coevolution, a novel framework that efficiently enables models and data to coevolve through online selective annotation without introducing bias. Leveraging task-specific models (and open-source models), it selectively annotates and integrates online and web data to improve datasets efficiently. For real-world datasets like ImageNet-1K, Info-Coevolution reduces annotation and training costs by 32% without performance loss, and it determines this saving ratio automatically rather than requiring it to be tuned. With semi-supervised learning, it can further reduce the annotation ratio to 50%. We also explore retrieval-based dataset enhancement using unlabeled open-source data. Code is available at this https URL.
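The abstract describes an online decision rule: for each incoming sample, the current model decides whether annotating and training on it is worthwhile. The abstract does not spell out the selection criterion, so the Python sketch below is only illustrative: it uses predictive entropy with a fixed threshold as a stand-in informativeness score, and `model`, `annotate`, and `train_step` are hypothetical callables, not part of the paper's released code.

import math

def predictive_entropy(probs):
    """Entropy of a predicted class distribution (higher = more informative)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def online_selective_annotation(stream, model, annotate, train_step, threshold=1.0):
    """Annotate and learn a streamed sample only if the current model finds it
    sufficiently informative; otherwise skip it. Returns the annotated fraction
    (the saving ratio is 1 minus this value)."""
    kept, seen = 0, 0
    for x in stream:
        seen += 1
        probs = model.predict_proba(x)   # current model's belief about x
        if predictive_entropy(probs) >= threshold:
            y = annotate(x)              # request a label only when needed
            train_step(model, x, y)      # model and dataset coevolve
            kept += 1
    return kept / max(seen, 1)

The key property the paper claims, which this sketch does not capture, is that the saving ratio falls out automatically rather than being set by a hand-tuned threshold.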

@article{qin2025_2506.08070,
  title={Info-Coevolution: An Efficient Framework for Data Model Coevolution},
  author={Ziheng Qin and Hailun Xu and Wei Chee Yew and Qi Jia and Yang Luo and Kanchan Sarkar and Danhui Guan and Kai Wang and Yang You},
  journal={arXiv preprint arXiv:2506.08070},
  year={2025}
}
Main: 8 pages · 8 figures · 4 tables · Bibliography: 3 pages · Appendix: 1 page