
Understanding GUI Agent Localization Biases through Logit Sharpness

Main: 8 pages · 8 figures · 6 tables · Bibliography: 2 pages · Appendix: 3 pages
Abstract

Multimodal large language models (MLLMs) have enabled GUI agents to interact with operating systems by grounding language into spatial actions. Despite their promising performance, these models frequently exhibit hallucinations: systematic localization errors that compromise reliability. We propose a fine-grained evaluation framework that categorizes model predictions into four distinct types, revealing nuanced failure modes beyond traditional accuracy metrics. To better quantify model uncertainty, we introduce the Peak Sharpness Score (PSS), a metric that evaluates the alignment between semantic continuity and the logit distribution in coordinate prediction. Building on this insight, we further propose Context-Aware Cropping, a training-free technique that improves model performance by adaptively refining the input context. Extensive experiments demonstrate that our framework and methods provide actionable insights and enhance the interpretability and robustness of GUI agent behavior.
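The abstract does not give the exact definition of the Peak Sharpness Score, but the underlying idea, scoring how peaked the model's logit distribution over coordinate tokens is, can be illustrated with a simple stand-in. The sketch below uses one minus the normalized entropy of the softmax distribution, purely as an assumed proxy (the function name `sharpness_score` and the entropy-based formulation are illustrative, not the paper's method):

```python
import numpy as np

def sharpness_score(logits):
    """Illustrative 'sharpness' of a logit vector over coordinate bins.

    Hypothetical proxy for a PSS-style metric: 1 minus the normalized
    entropy of the softmax distribution, so a near-one-hot (sharp)
    distribution scores close to 1 and a flat one scores close to 0.
    """
    logits = np.asarray(logits, dtype=float)
    # Numerically stable softmax over the coordinate bins.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Shannon entropy, normalized by its maximum log(n).
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return 1.0 - entropy / np.log(len(probs))

print(sharpness_score([0.0, 0.0, 12.0, 0.0]))  # sharp peak: close to 1
print(sharpness_score([1.0, 1.0, 1.0, 1.0]))   # flat: close to 0
```

A flat distribution (maximal uncertainty about the click coordinate) scores near zero, while a confident single-peak prediction scores near one, matching the intuition that well-localized predictions should have sharp logits.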

@article{tao2025_2506.15425,
  title={Understanding GUI Agent Localization Biases through Logit Sharpness},
  author={Xingjian Tao and Yiwei Wang and Yujun Cai and Zhicheng Yang and Jing Tang},
  journal={arXiv preprint arXiv:2506.15425},
  year={2025}
}