Enhancing Visual Grounding for GUI Agents via Self-Evolutionary Reinforcement Learning

Graphical User Interface (GUI) agents have made substantial strides in understanding and executing user instructions across diverse platforms. Yet, grounding these instructions to precise interface elements remains challenging, especially in complex, high-resolution professional environments. Traditional supervised fine-tuning (SFT) methods often require large volumes of diverse data and generalize poorly. To overcome these limitations, we introduce a reinforcement learning (RL)-based framework that incorporates three core strategies: (1) seed data curation to ensure high-quality training samples, (2) a dense policy gradient that provides continuous feedback based on prediction accuracy, and (3) a self-evolutionary reinforcement fine-tuning mechanism that iteratively refines the model using attention maps. With only 3k training samples, our 7B-parameter model achieves state-of-the-art results among similarly sized models on three grounding benchmarks. Notably, it attains 47.3% accuracy on the ScreenSpot-Pro dataset, outperforming much larger models, such as UI-TARS-72B, by a margin of 24.2%. These findings underscore the effectiveness of RL-based approaches in enhancing GUI agent performance, particularly in high-resolution, complex environments.
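The abstract does not spell out the reward design, so as a rough illustration of what a dense (rather than binary) grounding reward could look like, the sketch below scores a predicted click point against a target bounding box: hits receive full reward, while near misses receive smoothly decaying partial credit, giving the policy gradient a continuous signal. The function name, decay shape, and sharpness constant are our own assumptions for illustration, not the authors' formulation.

```python
import math

def dense_grounding_reward(pred_x, pred_y, box, image_w, image_h):
    """Dense reward for a predicted click point against a target box.

    Returns 1.0 for a hit inside the box and a smoothly decaying value
    in (0, 1) based on normalized distance otherwise, so the policy
    gradient receives signal even from near misses.
    """
    x1, y1, x2, y2 = box
    # Hit: full reward.
    if x1 <= pred_x <= x2 and y1 <= pred_y <= y2:
        return 1.0
    # Miss: distance from the point to the nearest edge of the box,
    # normalized by the image diagonal so resolution does not matter.
    dx = max(x1 - pred_x, 0.0, pred_x - x2)
    dy = max(y1 - pred_y, 0.0, pred_y - y2)
    dist = math.hypot(dx, dy)
    diag = math.hypot(image_w, image_h)
    # Exponential decay; the sharpness factor 10.0 is an arbitrary choice.
    return math.exp(-10.0 * dist / diag)

# Example: a near miss on a 1920x1080 screen still earns partial credit.
r = dense_grounding_reward(505.0, 300.0, (100, 280, 500, 320), 1920, 1080)
print(f"{r:.3f}")  # ~0.978 -- far more informative than a binary 0
```

Compared with a hit/miss reward, this kind of shaping matters most in high-resolution professional UIs, where targets are tiny and a strictly binary signal would be zero almost everywhere.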
@article{yuan2025_2505.12370,
  title={Enhancing Visual Grounding for GUI Agents via Self-Evolutionary Reinforcement Learning},
  author={Xinbin Yuan and Jian Zhang and Kaixin Li and Zhuoxuan Cai and Lujian Yao and Jie Chen and Enguang Wang and Qibin Hou and Jinwei Chen and Peng-Tao Jiang and Bo Li},
  journal={arXiv preprint arXiv:2505.12370},
  year={2025}
}