ResearchTrend.AI



WAFFLE: Multi-Modal Model for Automated Front-End Development

24 October 2024
Shanchao Liang, Nan Jiang, Shangshu Qian, Lin Tan
Main: 7 pages · Bibliography: 4 pages · Appendix: 6 pages · 15 figures · 8 tables
Abstract

Web development involves turning UI designs into functional webpages, which can be difficult for both beginners and experienced developers due to the complexity of HTML's hierarchical structures and styles. While Large Language Models (LLMs) have shown promise in generating source code, two major challenges persist in UI-to-HTML code generation: (1) effectively representing HTML's hierarchical structure for LLMs, and (2) bridging the gap between the visual nature of UI designs and the text-based format of HTML code. To tackle these challenges, we introduce Waffle, a new fine-tuning strategy that uses a structure-aware attention mechanism to improve LLMs' understanding of HTML's structure and a contrastive fine-tuning approach to align LLMs' understanding of UI images and HTML code. Models fine-tuned with Waffle show up to 9.00 pp (percentage point) higher HTML match, 0.0982 higher CW-SSIM, 32.99 higher CLIP, and 27.12 pp higher LLEM on our new benchmark WebSight-Test and an existing benchmark Design2Code, outperforming current fine-tuning methods.
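The contrastive fine-tuning idea described above — aligning embeddings of UI images with embeddings of their HTML code — can be illustrated with a CLIP-style symmetric InfoNCE loss. The sketch below is a minimal NumPy illustration, not Waffle's actual objective: the function name, temperature value, and batch shapes are assumptions for the example.

```python
import numpy as np

def contrastive_loss(img_emb: np.ndarray, code_emb: np.ndarray,
                     temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss over paired (UI image, HTML code) embeddings.

    img_emb, code_emb: (batch, dim) arrays; row i of each is a matching pair.
    """
    # L2-normalize so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    code = code_emb / np.linalg.norm(code_emb, axis=1, keepdims=True)

    logits = img @ code.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))       # matching pairs lie on the diagonal

    def cross_entropy(l: np.ndarray, y: np.ndarray) -> float:
        l = l - l.max(axis=1, keepdims=True)  # subtract max for stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return float(-log_probs[np.arange(len(y)), y].mean())

    # Average the image->code and code->image directions.
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))
```

Training with such a loss pulls each UI screenshot's embedding toward the embedding of its own HTML while pushing it away from other pages in the batch, which is one standard way to bridge the visual/textual gap the abstract describes.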

@article{liang2025_2410.18362,
  title={WAFFLE: Finetuning Multi-Modal Model for Automated Front-End Development},
  author={Shanchao Liang and Nan Jiang and Shangshu Qian and Lin Tan},
  journal={arXiv preprint arXiv:2410.18362},
  year={2025}
}