OSWorld-Human: Benchmarking the Efficiency of Computer-Use Agents

Generative AI is being leveraged to solve a variety of computer-use tasks involving desktop applications. State-of-the-art systems have focused solely on improving accuracy on leading benchmarks. However, these systems are practically unusable due to extremely high end-to-end latency (e.g., tens of minutes) for tasks that typically take humans just a few minutes to complete. To understand the causes behind this and to guide the future development of computer-use agents, we conduct the first study of the temporal performance of computer-use agents on OSWorld, the flagship benchmark in computer-use AI. We find that large-model calls for planning and reflection account for the majority of overall latency, and that as an agent takes more steps to complete a task, each successive step can take 3x longer than steps at the beginning of the task. We then construct OSWorld-Human, a manually annotated version of the original OSWorld dataset that contains a human-determined trajectory for each task. Evaluating 16 agents on their efficiency with OSWorld-Human, we find that even the highest-scoring agents on OSWorld take 1.4-2.7x more steps than necessary.
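To make the reported efficiency figure concrete, here is a minimal sketch of how a step-overhead ratio against human-annotated trajectories could be computed. The function and variable names (`step_overhead`, `human_trajectories`, `agent_trajectories`) are illustrative assumptions, not the paper's actual API; the abstract only states that agents take 1.4-2.7x more steps than the human-determined trajectories.

```python
# Hypothetical sketch: comparing an agent's step count per task against the
# human-annotated trajectory length, as described in the abstract.
# All names here are illustrative, not from the OSWorld-Human release.

from statistics import mean


def step_overhead(agent_steps: int, human_steps: int) -> float:
    """Ratio of agent steps to the human-annotated steps for one task."""
    return agent_steps / human_steps


# Example data: annotated (human) vs. observed (agent) step counts per task.
human_trajectories = {"task_a": 6, "task_b": 9, "task_c": 4}
agent_trajectories = {"task_a": 11, "task_b": 14, "task_c": 9}

overheads = [
    step_overhead(agent_trajectories[t], human_trajectories[t])
    for t in human_trajectories
]
print(f"mean step overhead: {mean(overheads):.2f}x")
```

A mean overhead of 1.0x would mean the agent matches the human trajectory exactly; the paper reports that even top OSWorld agents land well above that.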
@article{abhyankar2025_2506.16042,
  title   = {OSWorld-Human: Benchmarking the Efficiency of Computer-Use Agents},
  author  = {Reyna Abhyankar and Qi Qi and Yiying Zhang},
  journal = {arXiv preprint arXiv:2506.16042},
  year    = {2025}
}