WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code

9 June 2025
Zhiyu Lin
Zhengda Zhou
Zhiyuan Zhao
Tianrui Wan
Yilun Ma
Junyu Gao
Xuelong Li
Main: 9 pages · 17 figures · 6 tables · Bibliography: 2 pages · Appendix: 7 pages
Abstract

With the rapid advancement of generative AI technology, Multimodal Large Language Models (MLLMs) have the potential to act as AI software engineers capable of executing complex web application development. Since a model requires a confluence of multidimensional sub-capabilities to address the challenges of the various development phases, constructing a multi-view evaluation framework is crucial for accurately guiding improvements in development efficiency. However, existing benchmarks usually fail to assess these sub-capabilities and focus solely on webpage generation outcomes. In this work, we draw inspiration from the principles of software engineering and propose WebUIBench, a benchmark systematically designed to evaluate MLLMs in four key areas: WebUI Perception, HTML Programming, WebUI-HTML Understanding, and WebUI-to-Code. WebUIBench comprises 21K high-quality question-answer pairs derived from over 0.7K real-world websites. An extensive evaluation of 29 mainstream MLLMs uncovers the skill characteristics and the various weaknesses that models exhibit during the development process.
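As a rough illustration of how a per-sub-capability evaluation over such question-answer pairs might be organized, the following sketch groups records by the four areas named above and reports accuracy for each. The field names, file name, `model_answer` stub, and exact-match metric are all assumptions for illustration, not the authors' released tooling.

```python
# Minimal sketch: per-sub-capability scoring over hypothetical QA records.
# Assumed record fields: "task", "question", "reference" (illustrative only).
import json
from collections import defaultdict

TASKS = [
    "WebUI Perception",
    "HTML Programming",
    "WebUI-HTML Understanding",
    "WebUI-to-Code",
]


def model_answer(question: str) -> str:
    """Placeholder for a call to the MLLM under evaluation."""
    return ""


def evaluate(records):
    """Return accuracy per sub-capability over (task, question, reference) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        task = rec["task"]
        total[task] += 1
        # Exact string match is a stand-in for whatever metric a given task uses
        # (e.g., rendering-based comparison for WebUI-to-Code).
        if model_answer(rec["question"]).strip() == rec["reference"].strip():
            correct[task] += 1
    return {t: correct[t] / total[t] for t in TASKS if total[t]}


if __name__ == "__main__":
    with open("webuibench_qa.json") as f:  # hypothetical file name
        records = json.load(f)
    for task, acc in evaluate(records).items():
        print(f"{task}: {acc:.3f}")
```

In practice, each of the four areas would likely call for its own metric (e.g., element-level matching for perception tasks versus rendered-page similarity for code generation), which the single exact-match check above deliberately glosses over.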

@article{lin2025_2506.07818,
  title={WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code},
  author={Zhiyu Lin and Zhengda Zhou and Zhiyuan Zhao and Tianrui Wan and Yilun Ma and Junyu Gao and Xuelong Li},
  journal={arXiv preprint arXiv:2506.07818},
  year={2025}
}