
Frontier-Eng: Benchmarking Self-Evolving Agents on Real-World Engineering Tasks with Generative Optimization

Yizhe Chi
Deyao Hong
Dapeng Jiang
Tianwei Luo
Kaisen Yang
Boshi Zhang
Zhe Cao
Xiaoyan Fan
Bingxiang He
Han Hao
Weiyang Jin
Dianqiao Lei
Qingle Liu
Houde Qian
Bowen Wang
Situ Wang
Youjie Zheng
Yifan Zhou
Calvin Xiao
Eren Cai
Qinhuai Na
Main: 18 pages · 12 figures · 8 tables · Bibliography: 3 pages · Appendix: 11 pages
Abstract

Current LLM agent benchmarks predominantly focus on binary pass/fail tasks such as code generation or search-based question answering, and thus neglect a core element of real-world engineering: the iterative optimization of feasible designs. To this end, we introduce Frontier-Eng, a human-verified benchmark for generative optimization -- an iterative propose-execute-evaluate loop in which an agent generates candidate artifacts, receives executable verifier feedback, and revises them under a fixed interaction budget -- spanning 47 tasks across five broad engineering categories. Unlike previous suites, Frontier-Eng tasks are grounded in industrial-grade simulators and verifiers that provide continuous reward signals and enforce hard feasibility constraints under constrained budgets. We evaluate eight frontier language models using representative search frameworks and find that while Claude 4.6 Opus achieves the most robust performance, the benchmark remains challenging for all models. Our analysis suggests a dual power-law decay in both the frequency of improvements ($\sim 1/\text{iteration}$) and their magnitude ($\sim 1/\text{improvement count}$). We further show that although search width improves parallelism and diversity, search depth remains crucial for hard-won improvements under a fixed budget. Frontier-Eng establishes a new standard for assessing the capacity of AI agents to integrate domain knowledge with executable feedback to solve complex, open-ended engineering problems.
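
As a rough illustration of the propose-execute-evaluate loop described in the abstract, the sketch below shows a minimal generative-optimization driver in Python. It is a hypothetical outline, not Frontier-Eng's actual API: the names `generative_optimization`, `agent.propose`, `verifier.evaluate`, and the `budget` parameter are assumptions introduced for clarity.

```python
# Minimal sketch of a propose-execute-evaluate loop (hypothetical API, not Frontier-Eng's).
# An agent proposes candidate artifacts, an executable verifier returns a continuous reward
# and a feasibility flag, and the agent revises under a fixed interaction budget.

from dataclasses import dataclass


@dataclass
class VerifierResult:
    reward: float    # continuous reward signal from the simulator/verifier
    feasible: bool   # whether hard feasibility constraints are satisfied
    feedback: str    # textual feedback returned to the agent for revision


def generative_optimization(agent, verifier, task, budget: int):
    """Run the loop for `budget` iterations; return the best feasible artifact and its reward."""
    best_artifact, best_reward = None, float("-inf")
    feedback = None
    for _ in range(budget):
        # Propose: draft or revise a candidate artifact from the task and prior feedback.
        artifact = agent.propose(task, feedback)
        # Execute + evaluate: the verifier scores the candidate.
        result: VerifierResult = verifier.evaluate(artifact)
        # Keep only feasible candidates; track the best continuous reward seen so far.
        if result.feasible and result.reward > best_reward:
            best_artifact, best_reward = artifact, result.reward
        feedback = result.feedback
    return best_artifact, best_reward
```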
