
ZeroGUI: Automating Online GUI Learning at Zero Human Cost

Abstract

The rapid advancement of large Vision-Language Models (VLMs) has propelled the development of pure-vision-based GUI Agents, capable of perceiving and operating Graphical User Interfaces (GUIs) to autonomously fulfill user instructions. However, existing approaches usually adopt an offline learning framework, which faces two core limitations: (1) heavy reliance on high-quality manual annotations for element grounding and action supervision, and (2) limited adaptability to dynamic and interactive environments. To address these limitations, we propose ZeroGUI, a scalable online learning framework for automating GUI Agent training at Zero human cost. Specifically, ZeroGUI integrates (i) VLM-based automatic task generation to produce diverse training goals from the current environment state, (ii) VLM-based automatic reward estimation to assess task success without hand-crafted evaluation functions, and (iii) two-stage online reinforcement learning to continuously interact with and learn from GUI environments. Experiments on two advanced GUI Agents (UI-TARS and Aguvis) demonstrate that ZeroGUI significantly boosts performance across the OSWorld and AndroidLab environments. The code is available at this https URL.
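The three components named in the abstract compose a simple closed loop: generate tasks from the current GUI state, roll the agent out, score each rollout with a VLM, and update the policy. Below is a minimal Python sketch of one round of such a loop, written from the abstract alone. All interfaces here (generate_tasks, estimate_reward, env.reset/step/screenshot, agent.act/update, and the single vlm callable standing in for both the task-generation and reward models) are hypothetical placeholders, not the authors' released API, and only one stage of the paper's two-stage RL procedure is shown.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Trajectory:
    task: str
    steps: list = field(default_factory=list)  # (observation, action) pairs
    reward: float = 0.0

def generate_tasks(vlm: Callable[[str], str], screenshot, n: int) -> list[str]:
    # Ask a VLM to propose n training goals from the current environment state.
    # In practice the screenshot image would be attached to the prompt.
    prompt = f"Given this GUI state, propose a realistic user task."
    return [vlm(prompt) for _ in range(n)]

def estimate_reward(vlm: Callable[[str], str], traj: Trajectory) -> float:
    # Ask a VLM to judge task success instead of a hand-crafted evaluator.
    verdict = vlm(f"Did this trajectory complete the task '{traj.task}'? Answer yes or no.")
    return 1.0 if verdict.strip().lower().startswith("yes") else 0.0

def online_round(agent, env, vlm, tasks_per_round: int = 4):
    # One round of online learning: generate tasks, collect rollouts,
    # score them with the VLM, and update the agent on the scored batch.
    tasks = generate_tasks(vlm, env.screenshot(), tasks_per_round)
    batch = []
    for task in tasks:
        traj = Trajectory(task=task)
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs, task)
            obs, done = env.step(action)
            traj.steps.append((obs, action))
        traj.reward = estimate_reward(vlm, traj)
        batch.append(traj)
    agent.update(batch)  # e.g. a policy-gradient step on the VLM-scored rollouts

Repeating online_round while the environment keeps evolving is what makes the training online rather than offline: the tasks and rewards are regenerated from whatever state the environment is actually in, with no manual annotation in the loop.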

@article{yang2025_2505.23762,
  title={ZeroGUI: Automating Online GUI Learning at Zero Human Cost},
  author={Chenyu Yang and Shiqian Su and Shi Liu and Xuan Dong and Yue Yu and Weijie Su and Xuehui Wang and Zhaoyang Liu and Jinguo Zhu and Hao Li and Wenhai Wang and Yu Qiao and Xizhou Zhu and Jifeng Dai},
  journal={arXiv preprint arXiv:2505.23762},
  year={2025}
}