
A Practical Guide for Evaluating LLMs and LLM-Reliant Systems

Main: 9 pages, 1 figure, 2 tables; Bibliography: 4 pages
Abstract

Recent advances in generative AI have led to remarkable interest in using systems that rely on large language models (LLMs) for practical applications. However, meaningful evaluation of these systems in real-world scenarios comes with a distinct set of challenges, which are not well-addressed by the synthetic benchmarks and de facto metrics that are often seen in the literature. We present a practical evaluation framework that outlines how to proactively curate representative datasets, select meaningful evaluation metrics, and employ evaluation methodologies that integrate well with the practical development and deployment of LLM-reliant systems that must adhere to real-world requirements and meet user-facing needs.

@article{rudd2025_2506.13023,
  title={A Practical Guide for Evaluating LLMs and LLM-Reliant Systems},
  author={Ethan M. Rudd and Christopher Andrews and Philip Tully},
  journal={arXiv preprint arXiv:2506.13023},
  year={2025}
}