Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents

Abstract

Recent advancements in Large Language Models (LLMs) and Vision-Language Models (VLMs) have sparked significant interest in developing GUI visual agents. We introduce MONDAY (Mobile OS Navigation Task Dataset for Agents from YouTube), a large-scale dataset of 313K annotated frames from 20K instructional videos capturing diverse real-world mobile OS navigation across multiple platforms. Models that include MONDAY in their pre-training phases demonstrate robust cross-platform generalization capabilities, consistently outperforming models trained on existing single-OS datasets while achieving an average performance gain of 18.11 percentage points on an unseen mobile OS platform. To enable continuous dataset expansion as mobile platforms evolve, we present an automated framework that leverages publicly available video content to create comprehensive task datasets without manual annotation. Our framework comprises robust OCR-based scene detection (95.04% F1 score), near-perfect UI element detection (99.87% hit ratio), and novel multi-step action identification to extract reliable action sequences across diverse interface configurations. We contribute both the MONDAY dataset and our automated collection framework to facilitate future research in mobile OS navigation.

@article{jang2025_2505.12632,
  title={Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents},
  author={Yunseok Jang and Yeda Song and Sungryull Sohn and Lajanugen Logeswaran and Tiange Luo and Dong-Ki Kim and Kyunghoon Bae and Honglak Lee},
  journal={arXiv preprint arXiv:2505.12632},
  year={2025}
}