A Summary on GUI Agents with Foundation Models Enhanced by Reinforcement Learning

Jiahao Li
Kaer Huang
Abstract

Graphical User Interface (GUI) agents, driven by Multi-modal Large Language Models (MLLMs), have emerged as a promising paradigm for enabling intelligent interaction with digital systems. This paper provides a structured summary of recent advances in GUI agents, focusing on architectures enhanced by Reinforcement Learning (RL). We first formalize GUI agent tasks as Markov Decision Processes and discuss typical execution environments and evaluation metrics. We then review the modular architecture of (M)LLM-based GUI agents, covering Perception, Planning, and Acting modules, and trace their evolution through representative works. Furthermore, we categorize GUI agent training methodologies into Prompt-based, Supervised Fine-Tuning (SFT)-based, and RL-based approaches, highlighting the progression from simple prompt engineering to dynamic policy learning via RL. Our summary illustrates how recent innovations in multimodal perception, decision reasoning, and adaptive action generation have significantly improved the generalization and robustness of GUI agents in complex real-world environments. We conclude by identifying key challenges and future directions for building more capable and reliable GUI agents.
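To make the MDP formalization mentioned above concrete, the following is a minimal, hypothetical sketch (not from the paper): states are screen observations, actions are UI operations such as clicks, the transition function advances the interface, and a reward is granted when the task goal is reached. All names (`GUIState`, `GUIAction`, `step`, `rollout`) are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical minimal MDP formalization of a GUI task:
# states are screen observations, actions are UI operations,
# and an episode terminates when the task goal is met.

@dataclass
class GUIState:
    screen_text: str          # textual proxy for the perceived screen
    done: bool = False

@dataclass
class GUIAction:
    kind: str                 # e.g. "click", "type"
    target: str               # UI element identifier

def step(state: GUIState, action: GUIAction) -> tuple[GUIState, float]:
    """Toy transition: clicking 'submit' completes the task (reward 1)."""
    if action.kind == "click" and action.target == "submit":
        return GUIState(screen_text="done", done=True), 1.0
    return GUIState(screen_text=state.screen_text), 0.0

def policy(state: GUIState) -> GUIAction:
    """Placeholder policy; an RL-trained agent would map state -> action here."""
    return GUIAction(kind="click", target="submit")

def rollout(max_steps: int = 5) -> float:
    """Run one episode and return the cumulative reward."""
    state, total = GUIState(screen_text="form"), 0.0
    for _ in range(max_steps):
        state, reward = step(state, policy(state))
        total += reward
        if state.done:
            break
    return total
```

An RL-based approach, as surveyed in the paper, would replace the fixed `policy` with one optimized to maximize the expected cumulative reward over such episodes.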

@article{li2025_2504.20464,
  title={A Summary on GUI Agents with Foundation Models Enhanced by Reinforcement Learning},
  author={Jiahao Li and Kaer Huang},
  journal={arXiv preprint arXiv:2504.20464},
  year={2025}
}