Evaluating Large Language Models for Real-World Engineering Tasks

Large Language Models (LLMs) are transformative not only for daily activities but also for engineering tasks. However, current evaluations of LLMs in engineering exhibit two critical shortcomings: (i) the reliance on simplified use cases, often adapted from examination materials where correctness is easily verifiable, and (ii) the use of ad hoc scenarios that insufficiently capture critical engineering competencies. Consequently, the assessment of LLMs on complex, real-world engineering problems remains largely unexplored. This paper addresses this gap by introducing a curated database comprising over 100 questions derived from authentic, production-oriented engineering scenarios, systematically designed to cover core competencies such as product design, prognosis, and diagnosis. Using this dataset, we evaluate four state-of-the-art LLMs, including both cloud-based and locally hosted instances, to systematically investigate their performance on complex engineering tasks. Our results show that LLMs demonstrate strengths in basic temporal and structural reasoning but struggle significantly with abstract reasoning, formal modeling, and context-sensitive engineering logic.
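To give a concrete sense of how an evaluation like the one described above might be wired up, the sketch below runs a curated set of engineering questions against several chat models and records their answers. Everything in it is an illustrative assumption rather than the authors' actual harness: the file name engineering_questions.json, the field names, the model identifiers, and the use of an OpenAI-compatible client (locally hosted models could be reached by pointing the same client at a local endpoint) are all placeholders.

```python
"""Minimal sketch of an LLM evaluation loop over curated engineering questions.

Hypothetical setup: file layout, model names, and prompts are illustrative and
not taken from the paper; assumes an OpenAI-compatible chat API is available.
"""
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_model(model: str, question: str) -> str:
    """Send one engineering question to a chat model and return its answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are an engineering assistant."},
            {"role": "user", "content": question},
        ],
        temperature=0.0,  # deterministic answers make cross-model comparison easier
    )
    return response.choices[0].message.content


def main() -> None:
    # Hypothetical file: a JSON list of objects with fields
    # "id", "competency" (e.g. design / prognosis / diagnosis), and "question".
    with open("engineering_questions.json") as fh:
        questions = json.load(fh)

    models = ["cloud-model-placeholder", "local-model-placeholder"]  # illustrative names

    answers = []
    for item in questions:
        for model in models:
            answers.append({
                "question_id": item["id"],
                "competency": item["competency"],
                "model": model,
                "answer": ask_model(model, item["question"]),
            })

    # Collected answers would then be scored, e.g. by expert review per competency.
    with open("answers.json", "w") as fh:
        json.dump(answers, fh, indent=2)


if __name__ == "__main__":
    main()
```

The open question such a harness leaves unanswered, and which the paper's manual, competency-based assessment addresses, is how to score free-form answers to production-oriented engineering problems whose correctness is not easily verifiable.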
@article{heesch2025_2505.13484,
  title   = {Evaluating Large Language Models for Real-World Engineering Tasks},
  author  = {Rene Heesch and Sebastian Eilermann and Alexander Windmann and Alexander Diedrich and Philipp Rosenthal and Oliver Niggemann},
  journal = {arXiv preprint arXiv:2505.13484},
  year    = {2025}
}