From Understanding to Generation: An Efficient Shortcut for Evaluating Language Models

Main: 7 pages · 2 figures · 9 tables · Bibliography: 3 pages · Appendix: 7 pages
Abstract

Iterative evaluation of LLMs during training is essential to ensure expected capability development, but can be time- and compute-intensive. While NLU tasks, where the model selects from fixed answer choices, are cheap to evaluate, essential capabilities like reasoning and code generation rely on the more time-consuming NLG (token-by-token generation) format. In this work, we aim to decrease the computational burden of NLG benchmarks in order to enable monitoring of crucial LLM capabilities during model training. We reformulate generative tasks into computationally cheaper NLU alternatives. We test the performance correlation between the original and reformulated tasks using 8 LMs of various sizes and 4 capabilities: mathematical reasoning, code generation, factual knowledge, and reading comprehension. Our results show a strong correlation between task formats, supporting capability assessment via cheaper alternatives and achieving over a 35x average reduction in evaluation time. We plan to publish our benchmark adaptations.
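To make the NLU-vs-NLG cost difference concrete, the following is a minimal sketch of multiple-choice (NLU-style) evaluation: each fixed answer option is scored by its log-likelihood under the model and the argmax is taken, so no token-by-token decoding loop is needed. The toy log-probability table and function names here are hypothetical stand-ins for a real model's forward pass; the paper's actual reformulation procedure may differ in its details.

```python
import math

# Toy "LM": a fixed table of conditional log-probabilities for each
# (prompt, answer option) pair. In a real evaluation these scores would
# come from a single forward pass of the model over prompt + option.
TOY_LOGPROBS = {
    ("Q: 2+2=? A:", "4"): math.log(0.7),
    ("Q: 2+2=? A:", "5"): math.log(0.1),
    ("Q: 2+2=? A:", "22"): math.log(0.2),
}

def option_logprob(prompt: str, option: str) -> float:
    """Log-likelihood of an answer option given the prompt."""
    return TOY_LOGPROBS[(prompt, option)]

def pick_answer(prompt: str, options: list[str]) -> str:
    """NLU-style evaluation: score every fixed choice once and return
    the argmax. Avoiding autoregressive decoding is what makes this
    format much cheaper than free-form (NLG) generation."""
    return max(options, key=lambda o: option_logprob(prompt, o))

print(pick_answer("Q: 2+2=? A:", ["4", "5", "22"]))  # -> 4
```

By contrast, NLG-style evaluation would require generating an answer token by token and then matching it against the reference, which is the cost the paper's reformulation avoids.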

@article{hangya2025_2506.03592,
  title={From Understanding to Generation: An Efficient Shortcut for Evaluating Language Models},
  author={Viktor Hangya and Fabian Küch and Darina Gold},
  journal={arXiv preprint arXiv:2506.03592},
  year={2025}
}